Amazon adds controversial AI facial recognition to Ring
Amazon’s Ring video doorbells are getting a major artificial intelligence (AI) upgrade, and it is already stirring controversy.
The company has started rolling out a new feature called Familiar Faces to Ring owners across the United States. Once enabled, the feature uses AI-powered facial recognition to identify people who regularly appear at your door. Instead of a generic alert saying a person is at your door, you might see something far more personal, like “Mom at Front Door.” On the surface, that sounds convenient.
Privacy advocates, however, say this shift comes with real risks.
Ring’s new Familiar Faces feature uses AI facial recognition to identify people who regularly appear at your door and personalize alerts. (Chip Somodevilla/Getty Images)
How Ring’s Familiar Faces feature works
Ring says Familiar Faces helps you manage alerts by recognizing people you know. Here is how it works in practice. You can create a catalog of up to 50 faces. These may include family members, friends, neighbors, delivery drivers, household staff or other frequent visitors. After labeling a face in the Ring app, the camera will recognize that person as they approach. Anyone who regularly passes in front of your Ring camera can be labeled by the device owner if they choose to do so, even if that person is unaware they are being identified.
From there, Ring sends personalized notifications tied to that face. You can also fine-tune alerts on a per-face basis, which means fewer pings for your own comings and goings. Importantly, the feature is not enabled by default. You must turn it on manually in the Ring app settings. Faces can be named directly from Event History or from the Familiar Faces library. You can edit names, merge duplicates or delete faces at any time.
Amazon says unnamed faces are automatically removed after 30 days. Once a face is labeled, however, that data remains stored until the user deletes it.
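To make those retention rules concrete, here is a minimal sketch of how a face catalog with a 30-day expiry for unnamed entries and indefinite storage for labeled ones could be modeled. It is an illustration only, not Ring's implementation; every class, function and constant name here is a hypothetical assumption.

```python
# Hypothetical sketch of the retention behavior Ring describes; not Amazon's code.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

UNNAMED_RETENTION = timedelta(days=30)  # unnamed faces are removed after 30 days
MAX_LABELED_FACES = 50                  # the catalog holds up to 50 labeled faces

@dataclass
class FaceEntry:
    faceprint_id: str
    first_seen: datetime
    label: Optional[str] = None  # e.g. "Mom"; stays None until the owner names it

class FaceCatalog:
    def __init__(self) -> None:
        self.entries: dict[str, FaceEntry] = {}

    def record_sighting(self, faceprint_id: str, when: datetime) -> str:
        """Return the notification text for a sighting of this face."""
        entry = self.entries.setdefault(
            faceprint_id, FaceEntry(faceprint_id, first_seen=when)
        )
        return f"{entry.label} at Front Door" if entry.label else "Person at Front Door"

    def label_face(self, faceprint_id: str, name: str) -> None:
        """Name a face; labeled data persists until the owner deletes it."""
        if sum(1 for e in self.entries.values() if e.label) >= MAX_LABELED_FACES:
            raise ValueError("catalog already holds 50 labeled faces")
        self.entries[faceprint_id].label = name

    def purge_unnamed(self, now: datetime) -> None:
        """Drop unnamed faces older than 30 days; labeled faces are kept."""
        self.entries = {
            fid: e for fid, e in self.entries.items()
            if e.label or now - e.first_seen < UNNAMED_RETENTION
        }
```

The point of the sketch is simply that the label is the switch between a generic alert and a named one, and that deleting the label is what removes the stored association.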
Why privacy groups are pushing back
Despite Amazon’s assurances, consumer protection groups and lawmakers are raising alarms. Ring has a long history of working with law enforcement. In the past, police and fire departments were able to request footage through the Ring Neighbors app. More recently, Amazon partnered with Flock, a company that makes AI-powered surveillance cameras widely used by police and federal agencies. Ring has also struggled with internal security. In 2023, the FTC fined Ring $5.8 million after finding that employees and contractors had unrestricted access to customer videos for years. The Neighbors app previously exposed precise home locations, and Ring account credentials have repeatedly surfaced online. Because of these issues, critics argue that adding facial recognition expands the risk rather than reducing it.
Electronic Frontier Foundation (EFF) staff attorney Mario Trujillo tells CyberGuy, “When you step in front of one of these cameras, your faceprint is taken and stored on Amazon’s servers, whether you consent or not. Today’s feature to recognize your friend at your front door can easily be repurposed tomorrow for mass surveillance. It is important for state regulators to investigate.” The Electronic Frontier Foundation is a well-known nonprofit organization that focuses on digital privacy, civil liberties and consumer rights in the tech space.
Once a face is labeled by the device owner, Ring can replace generic notifications with named alerts tied to that individual. (CyberGuy.com)
Where the feature is blocked and why that matters
Legal pressure is already limiting where Familiar Faces can launch. According to the EFF, privacy laws are preventing Amazon from offering the feature in Illinois, Texas and Portland, Oregon. These jurisdictions have stricter biometric privacy protections, which suggests regulators see facial recognition in the home as a higher-risk technology. U.S. Senator Ed Markey has also called on Amazon to abandon the feature altogether, citing concerns about surveillance creep and biometric data misuse.
Amazon says biometric data is processed in the cloud and not used to train AI models. The company also claims it cannot identify all locations where a face appears, even if law enforcement asks. Still, critics point out the similarity to Ring’s Search Party feature, which already scans neighborhoods to locate lost pets.
We reached out to Amazon for comment but did not receive a response before our deadline.
Ring’s other AI feature feels very different
Not all of Ring’s AI updates raise the same level of concern. Ring recently introduced Video Descriptions, a generative AI feature that summarizes motion activity in plain text. Instead of guessing what triggered an alert, you might see messages like “A person is walking up the steps with a black dog” or “Two people are peering into a white car in the driveway.”
Ring’s Video Descriptions feature takes a different approach by summarizing activity without identifying people by name. (Amazon)
How Video Descriptions decides what matters
This AI focuses on actions rather than identities. It helps you quickly decide whether an alert is urgent or routine. Over time, Ring says the system can recognize activity patterns around a home and only notify you when something unusual happens. However, as with any AI system, accuracy can vary depending on lighting, camera angle, distance and environmental conditions. Video Descriptions is currently rolling out in beta to Ring Home Premium subscribers in the U.S. and Canada. Unlike facial recognition, this feature improves clarity without naming or tracking specific people. That contrast matters.
Video Descriptions turns motion alerts into short summaries, helping you understand what is happening without identifying who is involved. (Amazon)
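To get a rough sense of how an alert filter keyed to described activity, rather than identity, might decide what counts as routine for a particular home, here is a hedged sketch. The counting heuristic and threshold are assumptions made for illustration, not Ring's actual system.

```python
# Illustrative sketch: suppress alerts for activity that has become routine for a home.
# The counting heuristic and threshold are assumptions, not Ring's implementation.
from collections import Counter

class ActivityFilter:
    def __init__(self, routine_threshold: int = 5) -> None:
        # A description seen at least this many times is treated as routine.
        self.routine_threshold = routine_threshold
        self.history: Counter[str] = Counter()

    def should_notify(self, description: str) -> bool:
        """Notify only while an activity has not yet become routine."""
        key = description.strip().lower()
        is_routine = self.history[key] >= self.routine_threshold
        self.history[key] += 1
        return not is_routine

f = ActivityFilter()
print(f.should_notify("A person is walking up the steps with a black dog"))  # True the first time
```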
Should you turn Familiar Faces on?
If you own a Ring doorbell, caution is wise. While Familiar Faces may reduce notification fatigue, labeling people by name creates a detailed record of who comes to your home and when. Given Ring’s past security lapses and close ties with law enforcement, many privacy experts recommend keeping the feature disabled. If you do use it, avoid full names and remove faces you no longer need. In many cases, simply checking the live video feed is safer than relying on AI labels. Not every smart home feature needs to know who someone is.
How to turn Familiar Faces on or off in the Ring app
If you want to review or change this setting, you can do so at any time in the Ring mobile app.
To enable Familiar Faces:
- Open the Ring app
- Tap the menu icon
- Select Control Center
- Tap Video and Snapshot Capture
- Select Familiar Faces
- Toggle the feature on and follow the on-screen prompts
To turn Familiar Faces off:
- Open the Ring app
- Go to Control Center
- Tap Video and Snapshot Capture
- Select Familiar Faces
- Toggle the feature off
Turning the feature off stops facial recognition and prevents new faces from being identified. Any labeled faces can also be deleted manually from the Familiar Faces library if you want to remove stored data.
Alexa is now answering your door for you
Amazon is also rolling out a very different kind of AI feature for Ring doorbells, and it lives inside Alexa+. Called Greetings, this update gives Ring doorbells a conversational AI voice that can interact with people at your door when you are busy or not home. Instead of identifying who someone is, Greetings focuses on what they appear to be doing. Using Ring’s video descriptions, the system looks at apparel, actions, and objects to decide how to respond.
For example, if someone in a delivery uniform drops off a package, Alexa can tell them exactly where to leave it based on your instructions. You can even set preferences to guide delivery drivers toward a specific spot, or let them know water or snacks are available. If a delivery requires a signature, Alexa can ask the driver when they plan to return and pass that message along to you. The feature can also handle sales representatives or service vendors. You might set a rule such as politely declining sales pitches without ever coming to the door yourself.
Greetings can also work for friends and family. If someone stops by while you are away, Alexa can greet them and ask them to leave a message for you. That interaction is saved so you can review it later. That said, the system is not perfect. Because it relies on visual context rather than identity, mistakes can happen. A friend who works in logistics could show up wearing a delivery uniform and be treated like a courier instead of being invited to leave a message. Amazon acknowledges that accuracy can vary. Importantly, Amazon says Greetings does not identify who a person is. It uses Ring’s video descriptions to determine the main subject in front of the camera and generate responses, without naming or recognizing individuals. That makes it fundamentally different from the Familiar Faces feature, even though both rely on AI.
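As a very rough sketch of how responses keyed to visual context rather than identity might be wired up, consider the following. The categories, keywords and canned replies are illustrative assumptions, not Amazon's implementation, and the uniform example reproduces exactly the kind of misclassification described above.

```python
# Hypothetical sketch of context-based door responses; not Amazon's Greetings code.
GREETING_RULES = {
    "delivery": "Please leave the package by the side door. There's water on the porch.",
    "sales": "Thanks, but we're not taking sales pitches at the door right now.",
    "visitor": "No one can come to the door right now. Would you like to leave a message?",
}

def classify_scene(description: str) -> str:
    """Guess the visitor type from a video description of apparel, actions and objects."""
    text = description.lower()
    if "package" in text or "uniform" in text:
        return "delivery"   # a friend in a logistics uniform gets misread here, too
    if "clipboard" in text or "flyer" in text:
        return "sales"
    return "visitor"

def respond(description: str) -> str:
    return GREETING_RULES[classify_scene(description)]

print(respond("A person in a delivery uniform is carrying a package toward the door"))
```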
Greetings is compatible with Ring Wired Doorbell Pro (3rd Gen) and Ring Wired Doorbell Plus (2nd Gen). It is available to Ring Premium Plan subscribers who have video descriptions enabled and is currently rolling out to Alexa+ Early Access users in the United States and Canada.
Thinking about a Ring doorbell?
If you are already in the Ring ecosystem or considering a video doorbell, Ring’s lineup includes models with motion alerts, HD video, night vision, and optional AI-powered features such as Video Descriptions. While Familiar Faces remains controversial and can be turned off, many homeowners still use Ring doorbells for basic security awareness and package monitoring.
If you decide Ring is right for your home, you can check out the latest Ring Video Doorbell models or compare features and pricing with other options by visiting Cyberguy.com and searching “Top Video Doorbells.”
Kurt’s key takeaways
Amazon Ring’s AI facial recognition feature shows how quickly convenience can collide with privacy. Familiar Faces may offer smarter alerts, but it also expands surveillance into deeply personal spaces. Meanwhile, features like Video Descriptions prove that AI can be useful without identifying people. As smart home tech evolves, the real question is not what AI can do but what it should do.
Would you trade fewer notifications for a system that recognizes and names everyone who comes to your door? Let us know by writing to us at Cyberguy.com.
Why physical ID theft is harder to fix than credit card fraud
It started with a voicemail from a Hertz rental car location in Miami, Florida. A 57-year-old woman in Los Alamitos, California, was asked when she planned to return a Mercedes-Benz she had never rented. A thief had stolen her driver’s license, replaced the photo with their own and used it to rent the vehicle. The same identity was used to open a credit card account, book airline tickets and reserve hotel stays. By the time she learned what happened, the fraud involved businesses in multiple states.
Clearing her name required police reports in two jurisdictions, written disputes with the credit card issuer and repeated contact with the rental company and hotels. Her accounts were frozen while she submitted notarized copies of her identification and signed fraud affidavits. She reported losing $78,500 and spent nearly 10 days dealing with the fallout from one stolen ID.
Credit card fraud is usually limited to a single account number. Physical ID theft gives someone the ability to act as you in the real world. As a result, the cleanup process is longer, more intrusive and often tied to your legal record.
A stolen driver’s license can allow someone to rent cars, open accounts and sign contracts in your name. (Photo by Silas Stein/picture alliance via Getty Images)
How credit card fraud recovery works
Under the Fair Credit Billing Act, you report unauthorized charges to the card issuer within 60 days of the statement date. Federal law limits your liability to $50, and most major issuers waive that entirely. The bank cancels the compromised card number, issues a replacement and removes the disputed charges after an investigation. You may need to confirm transactions and sign a fraud affidavit. The account number changes. Your name, driver’s license and Social Security number stay the same. In most cases, fraud is resolved within one or two billing cycles. That structure gives consumers clarity. There is one issuer, one investigation and one account to correct.
Why physical ID theft recovery is more complicated
Physical ID theft creates problems that go far beyond one financial account. When someone uses your driver’s license, they step into your legal identity. Start with reporting requirements. Most states require you to file a police report before the DMV will issue a replacement linked to fraud. That report number becomes part of your official record. If the misuse happened in another state, you may need to file a second report there.
Next, understand what replacing the card actually does. A new physical card does not erase prior activity. Rental contracts, utility accounts, hotel stays, or police interactions tied to the stolen license still carry your name and license number. Fixing those records takes work. You must contact each business directly and submit documentation. No central agency reverses everything at once. Each company sets its own rules and timeline.
The stakes can rise quickly. For example, if someone abandons a rental car or commits a crime using your stolen ID, law enforcement databases may record your name. At that point, the situation shifts from financial inconvenience to legal exposure.
Police reports and formal disputes are often required before businesses will remove fraudulent records. (Kurt “Cyberguy” Knutsson)
How to prove physical ID theft was not yours
With credit card fraud, the issuer investigates the charge. With physical ID theft, businesses and agencies often require you to prove that you did not authorize the activity. That process usually starts at IdentityTheft.gov. The FTC generates an Identity Theft Report, which serves as an official statement of fraud. Most banks, collection agencies and rental companies will not proceed without it.
You may also need:
- A local police report
- A copy of your driver’s license
- A notarized identity affidavit
- Proof of residence tied to the date of the fraud
When thieves open fraudulent accounts in your name, dispute each one separately. Act quickly. Send a written response within 30 days of the first collection notice to protect your rights under federal law. Fraud that appears on your credit report requires another step. Contact Equifax, Experian and TransUnion individually and submit formal disputes with supporting documentation. The credit bureaus then have up to 30 days to complete their investigations. No central agency manages these corrections for you. Instead, every company sets its own documentation rules and timeline. Therefore, you must track deadlines, follow up consistently and keep detailed records of every communication.
You cannot simply replace your driver’s license number after identity theft
When a credit card number is stolen, the bank issues a new one. When a driver’s license is stolen, the number usually remains the same. In California, if your driver’s license is lost or stolen, you can request a replacement card through the DMV online system or at a field office. The official process gets you a new physical card. No new license number is automatically assigned when the card is stolen.
If there is identity misuse tied to the license number, the DMV's fraud review process lets you submit documentation, including police reports, to support an identity theft claim before the agency takes further action. A Social Security number is even harder to change. The Social Security Administration approves new numbers only in cases involving continued harm, and applicants must provide extensive documentation and appear in person.
A stolen physical ID, such as your license, includes:
- Full legal name
- Date of birth
- Address
- Driver’s license number
- Signature
That information is sufficient for in-person identity checks, rental contracts, certain loan applications and travel-related transactions.
Credit monitoring alerts can help you detect identity misuse before it spreads across multiple accounts. (Kurt “CyberGuy” Knutsson)
Why ongoing identity protection matters
There is no single agency that tracks misuse of your driver’s license across rental companies, lenders, collection agencies and law enforcement systems. That burden falls on you.
Identity theft services monitor your identity across all three credit bureaus and alert you to new credit inquiries, account openings and changes to your credit file. If fraud appears, you are assigned a dedicated U.S.-based case manager who helps:
- File disputes with Equifax, Experian and TransUnion
- Prepare and submit FTC Identity Theft Reports
- Contact creditors and collection agencies
- Track documentation deadlines and responses
- Assist with reimbursement claims when eligible
Plans can include identity theft insurance of up to $1 million per adult to cover eligible expenses such as lost wages, legal fees and document replacement costs related to identity theft recovery.
No service can prevent every misuse of a stolen ID. But when the issue involves police reports, credit bureaus, tax agencies and collection accounts, having structured support can make all the difference.
The California woman in this case was not enrolled in an identity theft protection service. Some businesses may reverse fraudulent charges, but it is unclear whether she recovered the full $78,500.
See my tips and best picks on how to protect yourself from identity theft at Cyberguy.com
Kurt’s key takeaways
Credit card fraud follows a defined path. You report the charge, the issuer investigates and your account number changes. In most cases, the disruption ends there.

Physical ID theft moves differently. It spreads across rental companies, hotels, credit bureaus and sometimes law enforcement databases. Instead of one dispute, you may face several. Instead of replacing a number, you must protect a permanent identity marker tied to your name. That shift matters. A stolen driver's license carries your legal identity into the real world. Therefore, recovery demands documentation, patience and persistence. Each business sets its own rules. Each agency runs its own timeline. You coordinate the process.

The lesson is clear. Protecting your financial accounts is critical. However, protecting your physical identification may be even more important. Once someone uses it in person, the cleanup becomes personal, procedural and time-consuming. Layered monitoring, early alerts and fast reporting reduce long-term damage. The faster you respond, the more control you keep.
Have you ever dealt with physical ID theft, and did the recovery process take longer than you expected? Let us know your thoughts by writing to us at Cyberguy.com
AI can’t make good video game worlds yet, and it might never be able to
Long before the generative AI explosion, video game developers made games that could generate their own worlds. Think of titles like Minecraft or even the original 1980 Rogue, the basis for the term “roguelike”; these games and many others create worlds on the fly according to rules and parameters set by their creators. Human developers work painstakingly to make sure the worlds their games can create are engaging to explore and filled with things to do, and at their best, these games can stay replayable for years because the environments and experiences feel novel every time you play.
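As a toy illustration of that rules-and-parameters approach, and nothing close to what a shipping game's generator actually does, here is a minimal seeded map generator; the symbols, dimensions and wall_chance parameter are arbitrary choices made for this sketch.

```python
# Toy procedural world generator in the spirit of roguelikes: the same seed always
# produces the same map, and tuning the parameters changes how the worlds "feel".
import random

def generate_map(width: int, height: int, seed: int, wall_chance: float = 0.35) -> list[str]:
    rng = random.Random(seed)  # seeded, so a given world is reproducible
    rows = []
    for y in range(height):
        row = []
        for x in range(width):
            if x in (0, width - 1) or y in (0, height - 1):
                row.append("#")            # border walls
            elif rng.random() < wall_chance:
                row.append("#")            # interior obstacle
            else:
                row.append(".")            # walkable floor
        rows.append("".join(row))
    return rows

for line in generate_map(20, 8, seed=1980):
    print(line)
```

The interesting part is the human work that sits on top of rules like these: deciding which parameters produce spaces worth exploring, which is exactly what the article says developers spend years tuning.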
But just as other creative industries are pushing back against an AI slop future, generative AI is coming for video games, too, though it may never catch up with the best of what humans can make now.
Generative AI in video games has become a lightning rod, with gamers getting mad about in-game slop and half of developers thinking that generative AI is bad for the industry.
Big video game companies are jumping into the murky waters of AI anyway. PUBG maker Krafton is turning into an “AI First” game company, EA is partnering with Stability AI for “transformative” game-making tools, and Ubisoft, as part of a major reorganization, is promising “accelerated investments behind player-facing Generative AI.” The CEO of Nexon, which owns the company that made last year’s mega-hit Arc Raiders, put it perhaps the most ominously: “I think it’s important to assume that every game company is now using AI.” (Some indie developers disagree.)
The bigger game companies often pitch their commitments as a way to streamline and assist with game development, which is getting increasingly expensive. But adoption of generative AI tools is a potential threat to jobs in an industry already infamous for waves of layoffs.
Last month, Google launched Project Genie, an “early research prototype” that lets users generate sandbox worlds using text or image prompts that they can explore for 60 seconds. Right now, the tool is only available in the US to people who subscribe to Google’s $249.99-per-month AI Ultra plan.
Project Genie is powered by Google’s Genie 3 AI world model, which the company pitches as a “key stepping stone on the path to AGI” that can enable “AI agents capable of reasoning, problem solving, and real-world actions,” and Google says the model’s potential uses go “well beyond gaming.” But it got a lot of attention in the industry: It was the first real indication of how generative AI tools could be used for video game development, just as tools like DALL-E and OpenAI’s Sora showed what might be possible with AI-generated images and video.
In my testing, Project Genie was barely able to generate even remotely interesting experiences. The “worlds” don’t let users do much except wander around using arrow keys. When the 60 seconds are over, you can’t do anything with what you generated except download a recording of what you did, meaning you also can’t plug what you generated into a traditional video game engine.
Sure, Project Genie did let me generate terrible unauthorized Nintendo knockoffs (seemingly based on the online videos Genie 3 is trained on), which raised a lot of familiar concerns about copyright and AI tools. But they weren’t even in the same universe of quality as the worlds in a handcrafted Nintendo game. The worlds were silent, the physics were sloppy, and the environments felt rudimentary.
The day after Project Genie’s announcement, stock prices of some of the biggest video game companies, including Take-Two, Roblox, and Unity, took a dip. That resulted in a little damage control. Take-Two president Karl Slatoff, for example, pushed back strongly on Genie in an earnings call a few days later, arguing that Genie isn’t a threat to traditional games yet. “Genie is not a game engine,” he said, noting that technology like it “certainly doesn’t replace the creative process,” and that, to him, the tool looks more like “procedurally generated interactive video at this point.” (The stock prices ticked back up in the days after.)
Google will almost certainly continue improving its Genie world models and tools to generate interactive experiences. It’s unclear if it will want to improve the experiences as games or if it will instead focus on finding ways for Genie to assist with its aspirational march toward AGI.
However, other leaders of AI companies are already pushing for interactive AI experiences. xAI’s Elon Musk recently claimed that “real-time” and “high-quality” video games that are “customized to the individual” will be available “next year,” and in December, he said that building an “AI gaming studio” is a “major project” for xAI. (As with many of Musk’s claims, take his predictions and timelines with a grain of salt.) Meta’s Mark Zuckerberg, who is now pushing AI as the new social media after the company cut jobs in its metaverse group, envisions a future where people create a game from a prompt and share it with people in their feeds. Even Roblox, a gaming company, is pitching how creators will be able to use AI world models and prompts to generate and change in-game worlds in real time, something that it calls “real-time dreaming.”
But even in the most ambitious view where AI technology is feasibly able to generate worlds that are as responsive and interesting to explore as a video game that runs locally on a home console, PC, or your smartphone, there’s a lot more that goes into making a video game than just creating a world. The best games have engaging gameplay, include interesting things to do, and feature original art, sound, writing, and characters. And it takes human developers sometimes years to make sure all of the elements work together just right.
AI technology isn’t yet ready to generate games, and whoever thinks it might be is fooling themselves. But AI-generated video is still bad, too, and it was nonetheless used to make a bunch of bad ads for the Super Bowl, so tech companies are probably still going to put a lot of effort toward games made with generative AI. In an already unstable industry, even the idea that AI tools could rival what humans can make might have massive ramifications down the line.
But the complexity of games sets them apart from AI video, which has improved considerably in a short time yet has far fewer variables to account for. AI game-making tools will almost certainly improve, but the results might never close the gap with what humans can make.
- In a long X post, Unity CEO Matthew Bromberg argues that world models aren’t a risk, but a “powerful accelerator.”
- While the video game industry probably shouldn’t feel threatened by AI world models just yet, generative AI tools will continue to be controversial in game development. Even Larian Studios, beloved for games like Baldur’s Gate 3, isn’t immune to backlash.
- Steam requires that developers disclose when their games use generative AI to generate content, but in a recent change, developers don’t have to disclose if they used “AI powered tools” in their game development environments.
- Some games, like the text-based Hidden Door and Amazon’s Snoop Dogg game on its Luna cloud gaming service, are embracing generative AI as a core aspect of the game.
- NYU games professor Joost van Dreunen has a take on the situation around Project Genie.
- Scientific American has a great explanation of how world models work.