Technology
Facebook is starting to feed its Meta AI with private, unpublished photos
For years, Meta trained its AI programs using the billions of public images uploaded by users onto Facebook and Instagram’s servers. Now, it’s also hoping to access the billions of images that users haven’t uploaded to those servers. Meta tells The Verge that it’s not currently training its AI models on those photos, but it would not answer our questions about whether it might do so in the future, or what rights it will hold over your camera roll images.
On Friday, TechCrunch reported that Facebook users trying to post something on the Story feature have encountered pop-up messages asking if they’d like to opt into “cloud processing,” which would allow Facebook to “select media from your camera roll and upload it to our cloud on a regular basis” to generate “ideas like collages, recaps, AI restyling or themes like birthdays or graduations.”
By allowing this feature, the message continues, users are agreeing to the Meta AI terms, which allow its AI to analyze the “media and facial features” of those unpublished photos, as well as the date the photos were taken and the presence of other people or objects in them. Users also grant Meta the right to “retain and use” that personal information.
Meta recently acknowledged that it scraped the data from all the content that’s been published on Facebook and Instagram since 2007 to train its generative AI models. Though the company stated that it only used public posts uploaded by adult users over the age of 18, it has long been vague about exactly what “public” entails, as well as what counted as an “adult user” in 2007.
Meta tells The Verge that, for now, it’s not training on your unpublished photos with this new feature. “[The Verge’s headline] implies we are currently training our AI models with these photos, which we aren’t. This test doesn’t use people’s photos to improve or train our AI models,” Meta public affairs manager Ryan Daniels tells The Verge.
Meta’s public stance is that the feature is “very early,” innocuous and entirely opt-in: “We’re exploring ways to make content sharing easier for people on Facebook by testing suggestions of ready-to-share and curated content from a person’s camera roll. These suggestions are opt-in only and only shown to you – unless you decide to share them – and can be turned off at any time. Camera roll media may be used to improve these suggestions, but are not used to improve AI models in this test,” reads a statement from Meta comms manager Maria Cubeta.
On its face, that might sound not altogether different from Google Photos, which similarly might suggest AI tweaks to your images after you opt into Google Gemini. But unlike Google, which explicitly states that it does not train generative AI models with personal data gleaned from Google Photos, Meta’s current AI usage terms, which have been in place since June 23, 2024, do not provide any clarity as to whether unpublished photos accessed through “cloud processing” are exempt from being used as training data — and Meta would not clear that up for us going forward.
And while Daniels and Cubeta tell The Verge that opting in only gives Meta permission to retrieve 30 days’ worth of your unpublished camera roll at a time, it appears that Meta is retaining some data longer than that. “Camera roll suggestions based on themes, such as pets, weddings and graduations, may include media that is older than 30 days,” Meta writes.
Thankfully, Facebook users do have an option to turn off camera roll cloud processing in their settings; doing so will also start removing unpublished photos from the cloud after 30 days.
The feature suggests a new incursion into our previously private data, one that bypasses the point of friction known as conscientiously deciding to post a photo for public consumption. And according to Reddit posts found by TechCrunch, Meta’s already offering AI restyling suggestions on previously-uploaded photos, even if users hadn’t been aware of the feature: one user reported that Facebook had Studio Ghiblified her wedding photos without her knowledge.
Correction, June 27th: An earlier version of this story implied Meta was already training AI on these photos, but Meta now states that the current test does not yet do so. Also added statement and additional details from Meta.
Technology
Crimson Desert dev apologizes for use of AI art
Reviews of Crimson Desert have been mixed, but the bigger issue for the game has been the discovery of what appeared to be AI-generated assets in the final release. Now the developer has acknowledged that AI art was indeed used during the game’s creation, but says that it was intended to be replaced before release. In a statement on X, the company said it was conducting a “comprehensive audit” to identify and replace any AI-generated content.
The company apologized for both its inclusion in the final release and for not being more transparent about its use during development. “We should have clearly disclosed our use of AI,” it said.
The use of generative AI in gaming has become a hot-button issue over the last couple of years as it’s made its way into several high-profile titles. While some large studios have embraced it, many smaller developers have revolted against the trend, proudly proclaiming their games to be “AI free.”
Technology
YouTube job scam text: How to spot it fast
Most of us have received a random text that makes us pause for a second. Maybe it promises a prize. Maybe it claims to be from a delivery company. Lately, another type of message is spreading quickly: the remote job scam.
That is exactly what happened to Peter from New York. He wrote in after receiving a suspicious message about a high-paying YouTube job.
Here is what he sent:
“I received this text today, and I think it’s a scam. How can I tell for sure, and what do I do next?”
Below is the message Peter received. At first glance, it looks like a job opportunity. However, when you break it down line by line, several warning signs appear. Let’s walk through them.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
A suspicious text message promises up to $10,000 a month for boosting YouTube video views. Offers like this are a common sign of a job scam. (Kurt “CyberGuy” Knutsson)
Red flag 1: A random job offer from a stranger
The text comes from an unknown international phone number starting with +63, which is the country code for the Philippines. Legitimate companies rarely recruit through random text messages from unknown numbers. Real employers usually contact candidates through job platforms, email or professional networks like LinkedIn. When a job appears out of nowhere and promises high pay, it should immediately raise suspicion.
Red flag 2: The pay is wildly unrealistic
The message claims:
- $200 to $600 per day
- $10,000 or more per month
Those numbers are a major warning sign. Entry-level remote work, such as “boosting video views” or “YouTube optimization,” does not pay anywhere near that range. Scammers often use unusually high pay to trigger excitement and urgency. When money sounds too good to be true, it usually is.
Red flag 3: No experience required but huge income
The text says “no experience required, free paid training provided.” Scammers often combine high income with zero qualifications. That combination is designed to attract as many people as possible.
Real digital marketing jobs usually require:
- SEO or marketing experience
- Analytics knowledge
- Platform expertise
A company offering $10K per month with no requirements is not realistic.
Scammers often claim no experience is required and that training is provided. The goal is to lure you in quickly before you start asking questions. (Kurt “CyberGuy” Knutsson)
Red flag 4: The job description is vague
The text claims the job is to “increase video exposure and view count.”
That description is extremely vague. It does not explain:
- What tools you would use
- What company you would work for
- How the work is measured
Scam job offers often stay vague so they can adapt the story later.
Red flag 5: Pressure to respond immediately
The message says: “5 urgent openings available, first come first served.” This is a classic scam tactic. Urgency pushes people to respond quickly before they have time to research the offer. Real companies rarely hire qualified candidates on a first-come basis through text messages.
Red flag 6: The strange reply instructions
The message tells recipients to reply “OK” and then send a numeric code. This step is often used to move the conversation to another messaging platform, such as Telegram or WhatsApp, where scammers continue the scheme. Once the conversation moves there, victims may be asked to:
- Complete fake tasks
- Send cryptocurrency
- Pay deposits for “training”
These scams are often called task scams, where victims complete simple online tasks and may even receive small payments at first before scammers demand larger deposits for payouts that never come. They have exploded worldwide over the past few years.
Red flag 7: No company information
The message never names a real company. It mentions a “manager” named Goldie but provides:
- No company website
- No corporate email
- No office address
Legitimate employers want applicants to know who they are. Scammers avoid details that can be verified.
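The red flags above are mechanical enough that they can be sketched as a simple checklist in code. The following is a minimal, hypothetical heuristic, not a production spam filter; the patterns and flag names are illustrative choices, not an established API.

```python
import re

# Illustrative patterns for the red flags discussed above (an assumption,
# not an exhaustive or production-grade rule set).
RED_FLAGS = {
    "unrealistic_pay": re.compile(r"\$\s?\d{3,}(?:,\d{3})*\s*(?:per|/|a)\s*(?:day|month)", re.I),
    "no_experience": re.compile(r"no experience", re.I),
    "urgency": re.compile(r"urgent|first come", re.I),
    "platform_switch": re.compile(r"telegram|whatsapp", re.I),
    "reply_code": re.compile(r"reply\s+[\"\u2019']?ok", re.I),
}

def score_message(text: str) -> list[str]:
    """Return the names of the red flags matched in a message."""
    return [name for name, pattern in RED_FLAGS.items() if pattern.search(text)]

# A message modeled on the one Peter received trips every rule:
sample = ("Earn $200 to $600 per day, no experience required! "
          "5 urgent openings, first come first served. Reply OK on WhatsApp.")
print(score_message(sample))
```

A single match proves nothing, but a message that trips three or more of these rules at once is almost certainly a scam, which mirrors how the walkthrough above reasons about Peter’s text.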
How these YouTube job scams usually work
Many of these scams follow the same pattern. First, scammers promise easy money for simple tasks such as liking videos or boosting views. At the beginning, they may even send a small payment to build trust. Then things change. Victims are asked to deposit money to unlock larger payouts or complete “premium tasks.” Once payments are sent, the scammers disappear. The Federal Trade Commission says Americans lost hundreds of millions of dollars to job scams in recent years, and text message recruitment scams are rising fast.
Google warns about growing job scams and how to verify recruiters
We reached out to Google, and a spokesperson provided the following statement to CyberGuy:
“Google is aware of these job scams happening across the industry and believes they’re growing around the world. We strongly encourage any candidate, or individual receiving them, to exercise caution and report it to the platform you received it on as a phishing attempt and/or spam. Our recruiting team focuses on contacting candidates in official capacities and are very clear about who we are, why we’re reaching out, and do so from legitimate emails or profiles on job sites. Jobseekers should verify anyone contacting them by email addresses, looking up the person online, such as on LinkedIn, and if something does seem suspicious, flag it to the outlet where it was received. Folks can also vet and report these scams to Google at support.google.com. Our Google careers page reflects all of our current job postings, so candidates should check offers against those. Generally speaking, Google also continues to offer a range of tools and insights that help people automatically spot and avoid scams like these whether they receive them via email, search results, text messages, etc.”
Messages that push you to reply immediately or move the conversation to apps like Telegram or WhatsApp are a major red flag. (Kurt “CyberGuy” Knutsson)
Ways to stay safe from job text scams
If you receive a message like Peter’s, here are some smart steps to take.
1) Never respond to unknown job texts
Replying confirms your number is active. That can lead to more scam messages.
2) Do not click links or download attachments
Scam texts sometimes include links that lead to phishing pages designed to steal login credentials or financial information. Install strong antivirus software on your devices, which can help detect malicious links, block dangerous websites and warn you before you open something risky. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.
3) Reduce how easily scammers can find your information
Scammers often harvest phone numbers and personal details from data broker sites and public profiles. Using a data removal service to remove your information from these sites can make it harder for criminals to target you with job scams and other fraud. Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
4) Research the company independently
Search for the company name online. Look for an official website, verified social media or job listings.
5) Avoid jobs that ask for money
Legitimate employers never require deposits for training, equipment or task access.
6) Block and report the number
You can report scam texts directly from your phone.
On iPhone:
Open the message, tap the phone number at the top of the screen, scroll down and select Block Contact. You can also tap Report Spam under the message; if the option appears, tap Delete and Report Spam, which sends the report to Apple and deletes the message.
On Samsung Galaxy phones:
Steps may vary slightly depending on your Samsung model and software version.
Open the Messages app and select the conversation. Tap the three-dot menu in the upper right corner, then tap Block and report spam, then confirm by tapping Yes. This blocks the number and helps Samsung identify and filter future scam messages.
7) Report it to the FTC
In the United States, you can report scams at reportfraud.ftc.gov. Reports help investigators track large scam networks.
So what should Peter do next?
The safest move is simple. Peter should not reply to the message. Instead, he should block the number and report it as spam. If he has already responded, he should stop communicating immediately and avoid clicking any links or sending money. If he shared personal information such as his phone number, email address or financial details, it may also be wise to monitor his accounts closely and consider signing up for an identity theft protection service. The good news is that spotting the red flags early can prevent a much bigger problem later. See my tips and best picks on Best Identity Theft Protection at Cyberguy.com.
Kurt’s key takeaways
Scammers constantly adapt their tactics. Today, it might be a fake delivery notice. Tomorrow, it might be a high-paying remote job. The message Peter received hits many of the classic warning signs: unrealistic pay, vague job duties, urgent language and a request to reply quickly. When a stranger promises easy money through a random text message, pause for a moment. That short pause can save you a lot of trouble.
Now I am curious. If a text suddenly promised you $10,000 a month for simple online tasks, would you recognize the warning signs before replying? Let us know by writing to us at Cyberguy.com.
Copyright 2026 CyberGuy.com. All rights reserved.
Technology
Halide co-founder is suing former partner Sebastiaan de With for taking source code to Apple
Lux Optics co-founder Sebastiaan de With made headlines when he joined Apple in late January. Lux was behind Halide, one of the most popular photography apps for the iPhone, which gained a cult following for its robust pro-level controls.
Apple was apparently a big enough fan that it tried to acquire the developer last summer. Those talks never bore fruit, and eventually the company simply hired de With. At the time, it was widely believed that Apple had poached him from Lux. But new allegations from a lawsuit filed by co-founder Ben Sandofsky in the California Superior Court of Santa Cruz claim de With was fired for financial misconduct in December of 2025.
According to The Information, the suit “accuses de With of improperly using more than $150,000 in Lux corporate funds to pay for personal expenses,” as well as “taking Lux source code and confidential material with him when he joined Apple.”
An attorney for de With denied those claims and said that “The attempt to insert Apple into this dispute appears designed to create leverage and attract attention.”