
Amazon adds controversial AI facial recognition to Ring



Amazon’s Ring video doorbells are getting a major artificial intelligence (AI) upgrade, and it is already stirring controversy.

The company has started rolling out a new feature called Familiar Faces to Ring owners across the United States. Once enabled, the feature uses AI-powered facial recognition to identify people who regularly appear at your door. Instead of a generic alert saying a person is at your door, you might see something far more personal, like “Mom at Front Door.” On the surface, that sounds convenient.

Privacy advocates, however, say this shift comes with real risks.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.



Ring’s new Familiar Faces feature uses AI facial recognition to identify people who regularly appear at your door and personalize alerts. (Chip Somodevilla/Getty Images)

How Ring’s Familiar Faces feature works

Ring says Familiar Faces helps you manage alerts by recognizing people you know. Here is how it works in practice. You can create a catalog of up to 50 faces. These may include family members, friends, neighbors, delivery drivers, household staff or other frequent visitors. After labeling a face in the Ring app, the camera will recognize that person as they approach. Anyone who regularly passes in front of your Ring camera can be labeled by the device owner if they choose to do so, even if that person is unaware they are being identified.

From there, Ring sends personalized notifications tied to that face. You can also fine-tune alerts on a per-face basis, which means fewer pings for your own comings and goings. Importantly, the feature is not enabled by default. You must turn it on manually in the Ring app settings. Faces can be named directly from Event History or from the Familiar Faces library. You can edit names, merge duplicates or delete faces at any time.

Amazon says unnamed faces are automatically removed after 30 days. Once a face is labeled, however, that data remains stored until the user deletes it.


Why privacy groups are pushing back

Despite Amazon’s assurances, consumer protection groups and lawmakers are raising alarms.

Ring has a long history of working with law enforcement. In the past, police and fire departments were able to request footage through the Ring Neighbors app. More recently, Amazon partnered with Flock, a company that makes AI-powered surveillance cameras widely used by police and federal agencies.

Ring has also struggled with internal security. In 2023, the FTC fined Ring $5.8 million after finding that employees and contractors had unrestricted access to customer videos for years. The Neighbors app previously exposed precise home locations, and Ring account credentials have repeatedly surfaced online.

Because of these issues, critics argue that adding facial recognition expands the risk rather than reducing it.

Electronic Frontier Foundation (EFF) staff attorney Mario Trujillo tells CyberGuy, “When you step in front of one of these cameras, your faceprint is taken and stored on Amazon’s servers, whether you consent or not. Today’s feature to recognize your friend at your front door can easily be repurposed tomorrow for mass surveillance. It is important for state regulators to investigate.” The Electronic Frontier Foundation is a well-known nonprofit organization that focuses on digital privacy, civil liberties and consumer rights in the tech space. 


Once a face is labeled by the device owner, Ring can replace generic notifications with named alerts tied to that individual. (CyberGuy.com)

Where the feature is blocked and why that matters

Legal pressure is already limiting where Familiar Faces can launch. According to the EFF, privacy laws are preventing Amazon from offering the feature in Illinois, Texas and Portland, Oregon. These jurisdictions have stricter biometric privacy protections, which suggests regulators see facial recognition in the home as a higher-risk technology. U.S. Senator Ed Markey has also called on Amazon to abandon the feature altogether, citing concerns about surveillance creep and biometric data misuse.


Amazon says biometric data is processed in the cloud and not used to train AI models. The company also claims it cannot identify all locations where a face appears, even if law enforcement asks. Still, critics point out the similarity to Ring’s Search Party feature, which already scans neighborhoods to locate lost pets.

We reached out to Amazon for comment but did not receive a response before our deadline.

Ring’s other AI feature feels very different

Not all of Ring’s AI updates raise the same level of concern. Ring recently introduced Video Descriptions, a generative AI feature that summarizes motion activity in plain text. Instead of guessing what triggered an alert, you might see messages like “A person is walking up the steps with a black dog” or “Two people are peering into a white car in the driveway.”


Ring’s Video Descriptions feature takes a different approach by summarizing activity without identifying people by name. (Amazon)


How Video Descriptions decides what matters

This AI focuses on actions rather than identities. It helps you quickly decide whether an alert is urgent or routine. Over time, Ring says the system can recognize activity patterns around a home and only notify you when something unusual happens. However, as with any AI system, accuracy can vary depending on lighting, camera angle, distance and environmental conditions. Video Descriptions is currently rolling out in beta to Ring Home Premium subscribers in the U.S. and Canada. Unlike facial recognition, this feature improves clarity without naming or tracking specific people. That contrast matters.

Video Descriptions turns motion alerts into short summaries, helping you understand what is happening without identifying who is involved. (Amazon)

Should you turn Familiar Faces on?

If you own a Ring doorbell, caution is wise. While Familiar Faces may reduce notification fatigue, labeling people by name creates a detailed record of who comes to your home and when. Given Ring’s past security lapses and close ties with law enforcement, many privacy experts recommend keeping the feature disabled. If you do use it, avoid full names and remove faces you no longer need. In many cases, simply checking the live video feed is safer than relying on AI labels. Not every smart home feature needs to know who someone is.

How to turn Familiar Faces on or off in the Ring app

If you want to review or change this setting, you can do so at any time in the Ring mobile app.


To enable Familiar Faces:

  • Open the Ring app
  • Tap the menu icon
  • Select Control Center
  • Tap Video and Snapshot Capture
  • Select Familiar Faces
  • Toggle the feature on and follow the on-screen prompts

To turn Familiar Faces off:

  • Open the Ring app
  • Go to Control Center
  • Tap Video and Snapshot Capture
  • Select Familiar Faces
  • Toggle the feature off

Turning the feature off stops facial recognition and prevents new faces from being identified. Any labeled faces can also be deleted manually from the Familiar Faces library if you want to remove stored data.

Alexa is now answering your door for you

Amazon is also rolling out a very different kind of AI feature for Ring doorbells, and it lives inside Alexa+. Called Greetings, this update gives Ring doorbells a conversational AI voice that can interact with people at your door when you are busy or not home. Instead of identifying who someone is, Greetings focuses on what they appear to be doing. Using Ring’s video descriptions, the system looks at apparel, actions, and objects to decide how to respond. 

For example, if someone in a delivery uniform drops off a package, Alexa can tell them exactly where to leave it based on your instructions. You can even set preferences to guide delivery drivers toward a specific spot, or let them know water or snacks are available. If a delivery requires a signature, Alexa can ask the driver when they plan to return and pass that message along to you. The feature can also handle sales representatives or service vendors. You might set a rule such as politely declining sales pitches without ever coming to the door yourself.

Greetings can also work for friends and family. If someone stops by while you are away, Alexa can greet them and ask them to leave a message for you. That interaction is saved so you can review it later. That said, the system is not perfect. Because it relies on visual context rather than identity, mistakes can happen. A friend who works in logistics could show up wearing a delivery uniform and be treated like a courier instead of being invited to leave a message. Amazon acknowledges that accuracy can vary. Importantly, Amazon says Greetings does not identify who a person is. It uses Ring’s video descriptions to determine the main subject in front of the camera and generate responses, without naming or recognizing individuals. That makes it fundamentally different from the Familiar Faces feature, even though both rely on AI.

Greetings is compatible with Ring Wired Doorbell Pro (3rd Gen) and Ring Wired Doorbell Plus (2nd Gen). It is available to Ring Premium Plan subscribers who have video descriptions enabled and is currently rolling out to Alexa+ Early Access users in the United States and Canada.

Thinking about a Ring doorbell?

If you are already in the Ring ecosystem or considering a video doorbell, Ring’s lineup includes models with motion alerts, HD video, night vision, and optional AI-powered features such as Video Descriptions. While Familiar Faces remains controversial and can be turned off, many homeowners still use Ring doorbells for basic security awareness and package monitoring. 


If you decide Ring is right for your home, you can check out the latest Ring Video Doorbell models or compare features and pricing with other options by visiting Cyberguy.com and searching “Top Video Doorbells.”

Take my quiz: How safe is your online security?

Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.

Kurt’s key takeaways

Amazon Ring’s AI facial recognition feature shows how quickly convenience can collide with privacy. Familiar Faces may offer smarter alerts, but it also expands surveillance into deeply personal spaces. Meanwhile, features like Video Descriptions prove that AI can be useful without identifying people. As smart home tech evolves, the real question is not what AI can do but what it should do.

Would you trade fewer notifications for a system that recognizes and names everyone who comes to your door? Let us know by writing to us at Cyberguy.com.



Copyright 2025 CyberGuy.com. All rights reserved.


Crimson Desert dev apologizes for use of AI art


Reviews of Crimson Desert have been mixed, but the bigger issue for the game has been the discovery of what appeared to be AI-generated assets in the final release. Now the developer has acknowledged that AI art was indeed used during the game’s creation, but says that it was intended to be replaced before release. In a statement on X, the company said it was conducting a “comprehensive audit” to identify and replace any AI-generated content.

The company apologized for both its inclusion in the final release and for not being more transparent about its use during development. “We should have clearly disclosed our use of AI,” it said.

The use of generative AI in gaming has become a hot-button issue of the last couple of years as it’s made its way into several high-profile titles. While some large studios have embraced it, many smaller developers have revolted against the trend, proudly proclaiming their games to be “AI free.”



YouTube job scam text: How to spot it fast



Most of us have received a random text that makes us pause for a second. Maybe it promises a prize. Maybe it claims to be from a delivery company. Lately, another type of message is spreading quickly: the remote job scam.

That is exactly what happened to Peter from New York. He wrote in after receiving a suspicious message about a high-paying YouTube job.

Here is what he sent:

“I received this text today, and I think it’s a scam. How can I tell for sure, and what do I do next?”


Below is the message Peter received. At first glance, it looks like a job opportunity. However, when you break it down line by line, several warning signs appear. Let’s walk through them.



A suspicious text message promises up to $10,000 a month for boosting YouTube video views. Offers like this are a common sign of a job scam.  (Kurt “CyberGuy” Knutsson)


Red flag 1: A random job offer from a stranger

The text comes from an unknown international phone number starting with +63, which is the country code for the Philippines. Legitimate companies rarely recruit through random text messages from unknown numbers. Real employers usually contact candidates through job platforms, email or professional networks like LinkedIn. When a job appears out of nowhere and promises high pay, it should immediately raise suspicion.

Red flag 2: The pay is wildly unrealistic

The message claims:

  • $200 to $600 per day
  • $10,000 or more per month

Those numbers are a major warning sign. Entry-level remote work, such as “boosting video views” or “YouTube optimization,” does not pay anywhere near that range. Scammers often use unusually high pay to trigger excitement and urgency. When money sounds too good to be true, it usually is.

Red flag 3: No experience required but huge income

The text says “no experience required, free paid training provided.” Scammers often combine high income with zero qualifications. That combination is designed to attract as many people as possible.

Real digital marketing jobs usually require:

  • SEO or marketing experience
  • Analytics knowledge
  • Platform expertise

A company offering $10K per month with no requirements is not realistic.



Scammers often claim no experience is required and that training is provided. The goal is to lure you in quickly before you start asking questions.  (Kurt “CyberGuy” Knutsson)

Red flag 4: The job description is vague

The text claims the job is to “increase video exposure and view count.”

That description is extremely vague. It does not explain:

  • What tools you would use
  • What company you would work for
  • How the work is measured

Scam job offers often stay vague so they can adapt the story later.

Red flag 5: Pressure to respond immediately

The message says: “5 urgent openings available, first come first served.” This is a classic scam tactic. Urgency pushes people to respond quickly before they have time to research the offer. Real companies rarely hire qualified candidates on a first-come basis through text messages.

Red flag 6: The strange reply instructions

The message tells recipients to reply “OK” and then send a numeric code. This step is often used to move the conversation to another messaging platform, such as Telegram or WhatsApp, where scammers continue the scheme. Once the conversation moves there, victims may be asked to:

  • Complete fake tasks
  • Send cryptocurrency
  • Pay deposits for “training”

These scams are often called task scams, where victims complete simple online tasks and may even receive small payments at first before scammers demand larger deposits for payouts that never come. They have exploded worldwide over the past few years.

Red flag 7: No company information

The message never names a real company. It mentions a “manager” named Goldie but provides:

  • No company website
  • No corporate email
  • No office address

Legitimate employers want applicants to know who they are. Scammers avoid details that can be verified.

How these YouTube job scams usually work

Many of these scams follow the same pattern. First, scammers promise easy money for simple tasks such as liking videos or boosting views. At the beginning, they may even send a small payment to build trust. Then things change. Victims are asked to deposit money to unlock larger payouts or complete “premium tasks.” Once payments are sent, the scammers disappear. The Federal Trade Commission says Americans lost hundreds of millions of dollars to job scams in recent years, and text message recruitment scams are rising fast.

Google warns about growing job scams and how to verify recruiters

We reached out to Google, and a spokesperson provided the following statement to CyberGuy:

“Google is aware of these job scams happening across the industry and believes they’re growing around the world. We strongly encourage any candidate, or individual receiving them, to exercise caution and report it to the platform you received it on as a phishing attempt and/or spam. Our recruiting team focuses on contacting candidates in official capacities and are very clear about who we are, why we’re reaching out, and do so from legitimate emails or profiles on job sites. Jobseekers should verify anyone contacting them by email addresses, looking up the person online, such as on LinkedIn, and if something does seem suspicious, flag it to the outlet where it was received. Folks can also vet and report these scams to Google at support.google.com. Our Google careers page reflects all of our current job postings, so candidates should check offers against those. Generally speaking, Google also continues to offer a range of tools and insights that help people automatically spot and avoid scams like these whether they receive them via email, search results, text messages, etc.”



Messages that push you to reply immediately or move the conversation to apps like Telegram or WhatsApp are a major red flag.  (Kurt “CyberGuy” Knutsson)

Ways to stay safe from job text scams

If you receive a message like Peter’s, here are some smart steps to take.

1) Never respond to unknown job texts

Replying confirms your number is active. That can lead to more scam messages.

2) Do not click links or download attachments

Scam texts sometimes include links that lead to phishing pages designed to steal login credentials or financial information. Install strong antivirus software on your devices, which can help detect malicious links, block dangerous websites and warn you before you open something risky. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.


3) Reduce how easily scammers can find your information

Scammers often harvest phone numbers and personal details from data broker sites and public profiles. Using a data removal service to remove your information from these sites can make it harder for criminals to target you with job scams and other fraud. Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.

4) Research the company independently

Search for the company name online. Look for an official website, verified social media or job listings.

5) Avoid jobs that ask for money

Legitimate employers never require deposits for training, equipment or task access.

6) Block and report the number

You can report scam texts directly from your phone.

On iPhone:


Open the message, tap the phone number at the top of the screen, scroll down and select Block Contact. You can also tap Report Spam under the message. If the option appears, tap Delete and Report Spam, which sends the report to Apple and deletes the message.

On Samsung Galaxy phones:

Steps may vary slightly depending on your Samsung model and software version.

Open the Messages app and select the conversation. Tap the three-dot menu in the upper right corner, then tap Block and report spam, then confirm by tapping Yes. This blocks the number and helps Samsung identify and filter future scam messages.

7) Report it to the FTC

In the United States, you can report scams at reportfraud.ftc.gov. Reports help investigators track large scam networks.


So what should Peter do next?

The safest move is simple. Peter should not reply to the message. Instead, he should block the number and report it as spam. If he has already responded, he should stop communicating immediately and avoid clicking any links or sending money. If he shared personal information such as his phone number, email address or financial details, it may also be wise to monitor his accounts closely and consider signing up for an identity theft protection service. The good news is that spotting the red flags early can prevent a much bigger problem later. See my tips and best picks on Best Identity Theft Protection at Cyberguy.com.

Kurt’s key takeaways 

Scammers constantly adapt their tactics. Today, it might be a fake delivery notice. Tomorrow, it might be a high-paying remote job. The message Peter received hits many of the classic warning signs: unrealistic pay, vague job duties, urgent language and a request to reply quickly. When a stranger promises easy money through a random text message, pause for a moment. That short pause can save you a lot of trouble.

Now I am curious. If a text suddenly promised you $10,000 a month for simple online tasks, would you recognize the warning signs before replying? Let us know by writing to us at Cyberguy.com.



Copyright 2026 CyberGuy.com.  All rights reserved.



Halide co-founder is suing former partner Sebastiaan de With for taking source code to Apple


Lux Optics co-founder Sebastiaan de With made headlines when he joined Apple in late January. The company was behind Halide, one of the most popular photography apps for the iPhone, which gained a cult following for its robust pro-level controls.

Apple was apparently a big enough fan that it tried to acquire the developer last summer. Those talks never bore fruit, and Apple eventually simply hired de With. At the time, it was widely believed that Apple had poached him from Lux. But new allegations in a lawsuit filed by co-founder Ben Sandofsky in the California Superior Court of Santa Cruz claim de With was fired for financial misconduct in December of 2025.

According to The Information, the suit “accuses de With of improperly using more than $150,000 in Lux corporate funds to pay for personal expenses,” as well as “taking Lux source code and confidential material with him when he joined Apple.”

An attorney for de With denied those claims and said that “The attempt to insert Apple into this dispute appears designed to create leverage and attract attention.”

