
Facebook is starting to feed its Meta AI with private, unpublished photos


For years, Meta trained its AI programs on the billions of public images users uploaded to Facebook and Instagram’s servers. Now, it’s also hoping to access the billions of images that users haven’t uploaded to those servers. Meta tells The Verge that it’s not currently training its AI models on those photos, but it would not answer our questions about whether it might do so in the future, or what rights it will hold over your camera roll images.

On Friday, TechCrunch reported that Facebook users trying to post something on the Story feature have encountered pop-up messages asking if they’d like to opt into “cloud processing”, which would allow Facebook to “select media from your camera roll and upload it to our cloud on a regular basis”, to generate “ideas like collages, recaps, AI restyling or themes like birthdays or graduations.”

By enabling this feature, the message continues, users agree to Meta’s AI terms, which allow its AI to analyze the “media and facial features” of those unpublished photos, as well as the date the photos were taken and the presence of other people or objects in them. Users further grant Meta the right to “retain and use” that personal information.

Meta recently acknowledged that it scraped the data from all the content published on Facebook and Instagram since 2007 to train its generative AI models. Though the company stated that it only used public posts uploaded by adult users over the age of 18, it has long been vague about exactly what “public” entails, as well as what counted as an “adult user” in 2007.

Meta tells The Verge that, for now, it’s not training on your unpublished photos with this new feature. “[The Verge’s headline] implies we are currently training our AI models with these photos, which we aren’t. This test doesn’t use people’s photos to improve or train our AI models,” Meta public affairs manager Ryan Daniels tells The Verge.


Meta’s public stance is that the feature is “very early,” innocuous and entirely opt-in: “We’re exploring ways to make content sharing easier for people on Facebook by testing suggestions of ready-to-share and curated content from a person’s camera roll. These suggestions are opt-in only and only shown to you – unless you decide to share them – and can be turned off at any time. Camera roll media may be used to improve these suggestions, but are not used to improve AI models in this test,” reads a statement from Meta comms manager Maria Cubeta.

On its face, that might sound not altogether different from Google Photos, which similarly might suggest AI tweaks to your images after you opt into Google Gemini. But unlike Google, which explicitly states that it does not train generative AI models with personal data gleaned from Google Photos, Meta’s current AI usage terms, which have been in place since June 23, 2024, do not provide any clarity as to whether unpublished photos accessed through “cloud processing” are exempt from being used as training data — and Meta would not clear that up for us going forward.

And while Daniels and Cubeta tell The Verge that opting in only gives Meta permission to retrieve 30 days’ worth of your unpublished camera roll at a time, it appears that Meta is retaining some data for longer than that. “Camera roll suggestions based on themes, such as pets, weddings and graduations, may include media that is older than 30 days,” Meta writes.

Thankfully, Facebook users can turn off camera roll cloud processing in their settings; once they do, Meta will also begin removing their unpublished photos from the cloud after 30 days.

The feature suggests a new incursion into our previously private data, one that bypasses the point of friction known as conscientiously deciding to post a photo for public consumption. And according to Reddit posts found by TechCrunch, Meta is already offering AI restyling suggestions on previously uploaded photos, even to users who weren’t aware of the feature: one user reported that Facebook had Studio Ghiblified her wedding photos without her knowledge.


Correction, June 27th: An earlier version of this story implied Meta was already training AI on these photos, but Meta now states that the current test does not yet do so. Also added statement and additional details from Meta.


Google’s annual revenue tops $400 billion for the first time


Google’s parent company, Alphabet, has earned more than $400 billion in annual revenue for the first time. The company announced the milestone as part of its Q4 2025 earnings report released on Wednesday, which highlights the 15 percent year-over-year increase as its cloud business and YouTube continue to grow.

As noted in the earnings report, Google’s Cloud business reached a $70 billion run rate in 2025, while YouTube’s annual revenue soared beyond $60 billion across ads and subscriptions. Alphabet CEO Sundar Pichai told investors that YouTube remains the “number one streamer,” citing data from Nielsen. The company also now has more than 325 million paid subscribers, led by Google One and YouTube Premium.

Additionally, Pichai noted that Google Search saw more usage over the past few months “than ever before,” adding that daily AI Mode queries have doubled since launch. Google will soon take advantage of the popularity of its Gemini app and AI Mode, as it plans to build an agentic checkout feature into both tools.


Waymo under federal investigation after child struck


Federal safety regulators are once again taking a hard look at self-driving cars after a serious incident involving Waymo, the autonomous vehicle company owned by Alphabet.

This time, the investigation centers on a Waymo vehicle that struck a child near an elementary school in Santa Monica, California, during morning drop-off hours. The crash happened Jan. 23 and raised immediate questions about how autonomous vehicles behave around children, school zones and unpredictable pedestrian movement.

On Jan. 29, the National Highway Traffic Safety Administration confirmed it had opened a new preliminary investigation into Waymo’s automated driving system.


Waymo operates Level 4 self-driving vehicles in select U.S. cities, where the car controls all driving tasks without a human behind the wheel. (AP Photo/Terry Chea, File)

What happened near the Santa Monica school?

According to documents posted by NHTSA, the crash occurred within two blocks of an elementary school during normal drop-off hours. The area was busy. There were multiple children present, a crossing guard on duty and several vehicles double-parked along the street.

Investigators say the child ran into the roadway from behind a double-parked SUV while heading toward the school. The Waymo vehicle struck the child, who suffered minor injuries. No safety operator was inside the vehicle at the time.

NHTSA’s Office of Defects Investigation is now examining whether the autonomous system exercised appropriate caution given its proximity to a school zone and the presence of young pedestrians.


Federal investigators are now examining whether Waymo’s automated system exercised enough caution near a school zone during morning drop-off hours. (Waymo)

Why federal investigators stepped in

The NHTSA says the investigation will focus on how Waymo’s automated driving system is designed to behave in and around school zones, especially during peak pickup and drop-off times.

That includes whether the vehicle followed posted speed limits, how it responded to visual cues like crossing guards and parked vehicles, and whether its post-crash response met federal safety expectations. The agency is also reviewing how Waymo handled the incident after it occurred.

Waymo said it voluntarily contacted regulators the same day as the crash and plans to cooperate fully with the investigation. In a statement, the company said it remains committed to improving road safety for riders and everyone sharing the road.


Waymo responds to the federal investigation

We reached out to Waymo for comment, and the company provided the following statement:

“At Waymo, we are committed to improving road safety, both for our riders and all those with whom we share the road. Part of that commitment is being transparent when incidents occur, which is why we are sharing details regarding an event in Santa Monica, California, on Friday, January 23, where one of our vehicles made contact with a young pedestrian. Following the event, we voluntarily contacted the National Highway Traffic Safety Administration (NHTSA) that same day. NHTSA has indicated to us that they intend to open an investigation into this incident, and we will cooperate fully with them throughout the process. 

“The event occurred when the pedestrian suddenly entered the roadway from behind a tall SUV, moving directly into our vehicle’s path. Our technology immediately detected the individual as soon as they began to emerge from behind the stopped vehicle. The Waymo Driver braked hard, reducing speed from approximately 17 mph to under 6 mph before contact was made. 

“To put this in perspective, our peer-reviewed model shows that a fully attentive human driver in this same situation would have made contact with the pedestrian at approximately 14 mph. This significant reduction in impact speed and severity is a demonstration of the material safety benefit of the Waymo Driver.

“Following contact, the pedestrian stood up immediately, walked to the sidewalk and we called 911. The vehicle remained stopped, moved to the side of the road and stayed there until law enforcement cleared the vehicle to leave the scene. 


“This event demonstrates the critical value of our safety systems. We remain committed to improving road safety where we operate as we continue on our mission to be the world’s most trusted driver.”

Understanding Waymo’s autonomy level

Waymo vehicles fall under Level 4 autonomy on the six-level SAE scale that NHTSA uses.

At Level 4, the vehicle handles all driving tasks within specific service areas. A human driver is not required to intervene, and no safety operator needs to be present inside the car. However, these systems do not operate everywhere and are currently limited to ride-hailing services in select cities.

The NHTSA has been clear that Level 4 vehicles are not available for consumer purchase, even though passengers may ride inside them.

This is not Waymo’s first federal probe

This latest investigation follows a previous NHTSA evaluation that opened in May 2024. That earlier probe examined reports of Waymo vehicles colliding with stationary objects like gates, chains and parked cars. Regulators also reviewed incidents in which the vehicles appeared to disobey traffic control devices.


That investigation was closed in July 2025 after regulators reviewed the data and Waymo’s responses. Safety advocates say the new incident highlights unresolved concerns.


No safety operator was inside the vehicle at the time of the crash, raising fresh questions about how autonomous cars handle unpredictable situations involving children. (Waymo)

What this means for you

If you live in a city where self-driving cars operate, this investigation matters more than it might seem. School zones are already high-risk areas, even for attentive human drivers. Autonomous vehicles must be able to detect unpredictable behavior, anticipate sudden movement and respond instantly when children are present.

This case will likely influence how regulators set expectations for autonomous driving systems near schools, playgrounds and other areas with vulnerable pedestrians. It could also shape future rules around local oversight, data reporting and operational limits for self-driving fleets.


For parents, commuters and riders, the outcome may affect where and when autonomous vehicles are allowed to operate.


Kurt’s key takeaways

Self-driving technology promises safer roads, fewer crashes and less human error. But moments like this remind us that the hardest driving scenarios often involve human unpredictability, especially when children are involved. Federal investigators now face a crucial question: Did the system act as cautiously as it should have in one of the most sensitive driving environments possible? How they answer that question could help define the next phase of autonomous vehicle regulation in the United States.


Do you feel comfortable sharing the road with self-driving cars near schools, or is that a line technology should not cross yet? Let us know by writing to us at Cyberguy.com.


Copyright 2026 CyberGuy.com. All rights reserved.


Adobe actually won’t discontinue Animate


Adobe is no longer planning to discontinue Adobe Animate on March 1st. In an FAQ, the company now says that Animate will be in maintenance mode and that it has “no plans to discontinue or remove access” to the app. Animate will still receive “ongoing security and bug fixes” and will still be available for “both new and existing users,” but it won’t get new features.

An announcement email that went out to Adobe Animate customers about the discontinuation did “not meet our standards and caused a lot of confusion and angst within the community,” according to a Reddit post from Adobe community team member Mike Chambers.

Animate will be available in maintenance mode “indefinitely” to “individual, small business, and enterprise customers,” according to Adobe. Before the change, Adobe said that non-enterprise customers could access Animate and download content until March 1st, 2027, while enterprise customers had until March 1st, 2029.
