This is Lowpass by Janko Roettgers, a newsletter on the ever-evolving intersection of tech and entertainment, syndicated just for The Verge subscribers once a week.
Technology
Casting is dead. Long live casting!
Last month, Netflix made the surprising decision to kill off a key feature: With no prior warning, the company removed the ability to cast videos from its mobile apps to a wide range of smart TVs and streaming devices. Casting is now only supported on older Chromecast streaming adapters that didn’t ship with a remote, Nest Hub smart displays, and select Vizio and Compal smart TVs.
That’s a stunning departure for the company. Prior to those changes, Netflix allowed casting to a wide range of devices that officially supported Google’s casting technology, including Android TVs made by companies like Philips, Polaroid, Sharp, Skyworth, Soniq, Sony, Toshiba, and Vizio, according to an archived version of Netflix’s website.
But the streaming service didn’t stop there. Prior to last month’s changes, Netflix also offered what the company called “Netflix 2nd Screen” casting functionality on a wide range of additional devices, including Sony’s PlayStation, TVs made by LG and Samsung, Roku TVs and streaming adapters, and many other devices. Basically, if a smart TV or streaming device was running the Netflix app, it most likely also supported casting.
That’s because Netflix actually laid the groundwork for this technology 15 years ago. Back in 2011, some of the company’s engineers were exploring ways to more tightly integrate people’s phones with their TVs. “At about the same time, we learned that the YouTube team was interested in much the same thing — they had already started to do some work on [second] screen use cases,” said Scott Mirer, director of product management at Netflix at the time, in 2013.
The two companies started to collaborate and enlist help from TV makers like Sony and Samsung. The result was DIAL (short for “Discovery and Launch”) — an open second-screen protocol that formalized casting.
In 2012, Netflix was the first major streaming service to add a casting feature to its mobile app, which at the time allowed PlayStation 3 owners to launch video playback from their phones. A year later, Google launched its very first Chromecast dongle, which took ideas from DIAL and incorporated them into Google’s own proprietary casting technology.
For a while, casting was extremely popular. Google sold over 100 million Chromecast adapters, and Vizio even built a whole TV around casting, which shipped with a tablet instead of a remote. (It flopped. Turns out people still love physical remotes.)
But as smart TVs became more capable, and streaming services invested more heavily into native apps on those TVs, the need for casting gradually decreased. At CES, a streaming service operator told me that casting used to be absolutely essential for his service. Nowadays, even among the service’s Android users, only about 10 percent are casting.
As for Netflix, it’s unlikely the company will change its tune on casting. Netflix declined to comment when asked about discontinuing the feature. My best guess is that casting was sacrificed in favor of new features like cloud gaming and interactive voting. Gaming in particular already involves multidevice connectivity, as Netflix uses phones as game controllers. Adding casting to that mix simply might have proven too complex.
However, not everyone has given up on casting. In fact, the technology is still gaining new supporters. Last month, Apple added Google Cast support to its Apple TV app on Android for the first time. And over the past two years, both Samsung and LG incorporated Google’s casting tech into some of their TV sets.
“Google Cast continues to be a key experience that we’re invested in — bringing the convenience of seamless content sharing from phones to TVs, whether you’re at home or staying in a hotel,” says Google’s Android platform PM Neha Dixit. “Stay tuned for more to come this year.”
Google’s efforts are getting some competition from the Connectivity Standards Alliance, the group behind the Matter smart home standard, which developed its own Matter Casting protocol. Matter Casting promises to be a more open approach toward casting and in theory allows streaming services and device makers to bring second-screen use cases to their apps and devices without having to strike deals with Google.
“We are a longtime advocate of using open technology standards to give customers more choice when it comes to using their devices and services,” says Amazon Device Software & Services VP Tapas Roy, whose company is a major backer of Matter and its casting tech. “We welcome and support media developers that want to build to an open standard with the implementation of Matter Casting.”
Thus far, though, support has been limited. Fire TVs and Echo Show displays remain the only devices to support Matter Casting, and for a long time Amazon’s own apps were the only ones to make use of the feature. Last month, Tubi jumped on board as well, incorporating Matter Casting into its mobile apps.
Connectivity Standards Alliance technology strategist Christopher LaPré acknowledges that Matter Casting has yet to turn into a breakthrough hit. “To be honest, I have Fire TVs, and I’ve never used it,” he says.
Besides a lack of available content, LaPré also believes Matter Casting is a victim of brand confusion. The problem: TV makers have begun to incorporate Matter into their devices to let consumers control smart lights and thermostats from the couch. Because of that, a TV that dons the Matter logo doesn’t necessarily support Matter Casting.
However, LaPré also believes that Matter Casting could get a boost from two new developments: Matter recently added support for cameras, which adds a new kind of homegrown content people may want to cast. And the consortium is also still working on taking casting beyond screens.
“Audio casting is something that we’re working on,” LaPré confirms. “A lot of speaker companies are interested in that.” The plan is to launch Matter audio casting later this year, at which point device makers, publishers, and consumers could also give video casting another look.
Technology
Jury finds Elon Musk’s ‘stupid tweets’ caused Twitter investors’ losses
A California jury determined that Elon Musk misled Twitter investors before making a $44 billion deal to buy the company in 2022, according to CNBC. The New York Times reports that Musk testified this month that he didn’t believe his posts would spook markets, though he did say, “If this was a trial about whether I made stupid tweets, I would say I’m guilty.”
CNBC reports Musk’s attorneys are expected to file an appeal, as damages could reach as high as $2.6 billion, according to attorneys representing the plaintiffs.
While finding that Musk did not engage in a specific scheme to defraud shareholders, the jury cited two of Musk’s tweets, from May 13th and May 27th, 2022, as materially false or misleading, causing some investors to sell shares in Twitter at values below the $54.20 per share bid.
Twitter deal temporarily on hold pending details supporting calculation that spam/fake accounts do indeed represent less than 5% of users
20% fake/spam accounts, while 4 times what Twitter claims, could be *much* higher.
My offer was based on Twitter’s SEC filings being accurate.
Yesterday, Twitter’s CEO publicly refused to show proof of <5%. This deal cannot move forward until he does.
Technology
AI smart glasses could generate fake photos instantly
Smart glasses are gaining new momentum thanks to artificial intelligence (AI). Companies like Google, Meta, Samsung and possibly Apple are exploring AI-powered glasses that combine cameras, speakers, voice assistants and computer vision in a wearable device.
At first glance, the features sound familiar. Smart glasses can take photos, give directions, answer questions and help you navigate the world hands-free. However, a recent demo hints at something much bigger.
These glasses may soon generate or alter photos instantly. In other words, the image you capture may no longer reflect what was actually there.
That raises an important question: If AI can change a photo the moment it is taken, how do we know what is real anymore?
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
Google product lead Dieter Bohn demonstrates prototype AI smart glasses during a demo showing how the device can capture and modify photos using generative AI. (X/ @backlon)
A new AI trick inside smart glasses
During a demo of upcoming smart glasses, Google’s Dieter Bohn showed how the device could capture a photo and modify it using AI. The prototype, shown as Android XR glasses with a display, connects to Google’s generative AI tools, including Google Gemini and an experimental image generator called Nano Banana.
In the demonstration, Bohn asked the glasses to take a photo of people in the room. Then he gave another command. He asked the system to place those people in front of the famous church in Barcelona that he could not remember by name.
Within moments, the AI produced a new image showing the group standing in front of the Sagrada Família. The people in the photo never traveled to Spain. The background came from AI. To someone viewing the image later, it could look like a real travel photo.
Smart glasses are following the same playbook
The hardware approach behind these devices looks similar across the industry.
Most smart glasses include:
- A built-in camera
- Speakers for audio feedback
- A microphone and a voice assistant
- Computer vision powered by AI
- Navigation and contextual information
- Optional displays inside the lenses
This design mirrors products like the Ray-Ban Meta Smart Glasses, which combine sunglasses with an AI assistant and camera. Those glasses already allow users to capture photos, livestream video and ask questions using voice commands. However, the editing tools currently available inside Meta’s glasses focus more on artistic effects. For example, the system can transform photos into a cartoon or painting style. The goal is creative expression rather than photorealistic manipulation.
Google’s demo hints at something different. It shows how AI can place people into entirely new scenes that never happened.
A close-up of prototype Android XR glasses with a built-in display, part of Google’s concept for AI-powered smart glasses. (X/ @backlon)
Why this matters for photography
AI-generated images already exist across social media. Smartphones have also introduced powerful editing tools. Google’s Pixel phones, for example, have leaned heavily into AI photography with tools that remove objects, adjust lighting and generate backgrounds.
The difference with smart glasses is speed. The technology removes the delay between taking a photo and editing it. Instead of capturing an image and opening editing software later, the AI can change the photo immediately. That could make altered images far more common. Photos that once served as proof of where someone was or what happened may become harder to trust.
The demo still leaves open questions
It is important to note that the Google demo was short and carefully staged. The company acknowledged that parts of the video were edited. That suggests the AI process may take longer in real-world conditions.
There is also the question of reliability. Generative AI tools sometimes produce mistakes, strange artifacts or unrealistic details. Still, even an imperfect system could change how people interact with cameras and images. As the technology improves, the gap between real and AI-generated photos may shrink.
What this means for you
Smart glasses could soon become another everyday device. That means the way we capture and share images may shift again. If these tools become common, you may start seeing photos that were generated or heavily modified by AI. A picture posted online may look like a real moment from someone’s life. In reality, it could be a mix of real people and AI-generated scenery. That does not mean every image is fake. It does mean digital images may carry less proof than they once did. Understanding how AI editing works can help you approach viral photos, travel shots or dramatic images with a healthy level of skepticism.
Ray-Ban Meta smart glasses combine cameras, speakers and an AI assistant, showing how wearable devices are bringing artificial intelligence into everyday eyewear. (Meta)
How to spot AI-generated or altered photos
AI editing tools are becoming easier to use. That means altered images may appear more often online. A few habits can help you avoid being misled.
1) Question images that look too perfect
If a photo looks unusually polished or dramatic, pause before assuming it is real. AI images often create scenes that feel cinematic or unusually clean.
2) Look closely at small details
AI systems sometimes struggle with small elements. Check hands, reflections, shadows and background objects for strange shapes or mismatched lighting.
3) Check where the image came from
If a photo spreads quickly online, try to trace the original source. Reverse image search can reveal if the picture appeared somewhere else first.
4) Be cautious with viral travel or event photos
AI tools can place people into locations they have never visited. A convincing background does not guarantee that the moment actually happened.
5) Watch for photos used in scams or misinformation
AI-generated images can appear in fake travel posts, romance scams or misleading news claims. If a photo appears alongside urgent requests for money or emotional stories, take time to verify it before reacting. Avoid clicking suspicious links and consider using strong antivirus software that can block malicious websites and scam pages before they load. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com
6) Treat photos online as information, not proof
Photos once served as strong evidence of where someone was or what occurred. With generative AI, an image may be a mix of real people and computer-generated scenes.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com
Kurt’s key takeaways
Smart glasses promise convenience, hands-free computing and powerful AI tools. At the same time, they blur the line between photography and digital creation. Technology keeps pushing toward a world where capturing a moment and generating one can happen in the same instant. The devices themselves may become smaller and smarter. The challenge may be deciding how much we trust the images they produce.
So here is the question worth asking. If AI glasses can create realistic photos of places you’ve never visited, will pictures still count as proof of reality? Let us know by writing to us at Cyberguy.com
Copyright 2026 CyberGuy.com. All rights reserved.
Technology
Microsoft is ending the Windows Update nightmare — and letting you pause them indefinitely
While Microsoft isn’t doing away with automatic updates entirely, Windows boss Pavan Davuluri is promising that in the future, you’ll be able to pause them “for as long as you need.” You’ll be able to reboot or shut down your computer “without being forced to install them.” To be fair to Microsoft, I’ve seen an option to reboot or shut down without updating for a while now.
Even if you fail to pause them, you’ll only have to reboot your computer once a month, Microsoft promises — though it says you’ll be able to get updates faster if you wish. If you’re the kind of user who wants new features so quickly that you’re part of the Windows Insider Program, Microsoft says it’ll make that easier and make it clearer what you’ll get.
And as part of those updates, Microsoft says that this year, it will improve performance, responsiveness and stability, reduce memory consumption, make File Explorer and other apps launch and run faster, reduce crashes, improve drivers, make devices wake up more reliably, and much, much more.
It feels like Microsoft has also taken our feedback about the recent ridiculous hour-plus setup process for some Windows handhelds and laptops to heart. Davuluri writes that we’ll have “the ability to skip updates during device setup to get to the desktop faster.” And even if you sit through it, getting started should be simpler, with “fewer pages and reboots.” Plus, Microsoft will finally let you use gamepad controls to create your PIN during setup, instead of making you smudge the touchscreen.
Bravo, Microsoft, if this is all true, and if you can implement it in a reasonable length of time.
Davuluri writes that his team has spent months analyzing the feedback of Windows users, and “What came through was the voice of people who care deeply about Windows and want it to be better.”