Technology

‘There isn’t really another choice:’ Signal chief explains why the encrypted messenger relies on AWS

After last week’s major Amazon Web Services (AWS) outage took Signal down along with it, Elon Musk was quick to criticize the encrypted messaging app’s reliance on big tech. But Signal president Meredith Whittaker argues that the company had no real choice but to use AWS or another major cloud provider.

“The problem here is not that Signal ‘chose’ to run on AWS,” Whittaker writes in a series of posts on Bluesky. “The problem is the concentration of power in the infrastructure space that means there isn’t really another choice: the entire stack, practically speaking, is owned by 3-4 players.”

In the thread, Whittaker says the number of people who didn’t realize Signal uses AWS is “concerning,” as it indicates they aren’t aware of just how concentrated the cloud infrastructure industry is. “The question isn’t ‘why does Signal use AWS?’” Whittaker writes. “It’s to look at the infrastructural requirements of any global, real-time, mass comms platform and ask how it is that we got to a place where there’s no realistic alternative to AWS and the other hyperscalers.”

Whittaker notes that AWS, Microsoft Azure, and Google’s cloud services are the only viable options Signal can use to provide reliable service on a global scale without spending billions of dollars to build its own infrastructure. “Running a low-latency platform for instant comms capable of carrying millions of concurrent audio/video calls requires a pre-built, planet-spanning network of compute, storage and edge presence that requires constant maintenance, significant electricity and persistent attention and monitoring,” Whittaker says.

She adds that Signal only “partly” runs on AWS and uses encryption to ensure Signal and AWS can’t see your conversations. Signal was far from the only company affected by the AWS outage, as it also brought down Starbucks, the Epic Games Store, Ring doorbells, Snapchat, Alexa devices, and even smart beds.

“My silver lining hope is that AWS going down can be a learning moment, in which the risks of concentrating the nervous system of our world in the hands of a few players become very clear,” Whittaker writes.

Technology

Meta AI edits your camera roll for better Facebook posts

Your phone is full of photos you’ve never posted, moments you meant to share but never got around to. That’s exactly what Facebook wants to change. It now uses Meta AI to spot hidden gems in your camera roll, polish them, and create simple collages you can share. You take the pictures, and Facebook helps turn them into easy, ready-to-share memories. No design skills required.

Why Meta created this AI photo feature

Many people take photos but then don’t share them because they feel the image isn’t “post-worthy,” or they simply don’t have time to make it look good. Meta’s logic: if those moments (screenshots, receipts, random snaps) are sitting unseen on your phone, they might still matter to you. So the tool helps you rediscover and share them. From Meta’s perspective, this also fits its bigger push into artificial intelligence-driven features across its apps.

New AI tool scans your camera roll to find and polish images for quick sharing. (Kurt “CyberGuy” Knutsson)

Behind the scenes, Meta AI analyzes photo details, like lighting, people and events, to group similar moments and create polished collage layouts automatically. It can suggest captions or filters, but users can edit or reject any suggestion before posting.

How to enable the Facebook AI feature

Here’s how to turn this feature on in Facebook (and how to disable it if you prefer).

  • Open the Facebook app on your phone (iOS or Android).
  • Tap your profile picture or the menu icon.
  • Go to Settings & Privacy.
  • Tap Settings.

Meta aims to revive old memories with Facebook’s AI-powered collage creator. (Kurt “CyberGuy” Knutsson)

  • Scroll to Preferences (or something similar), find Camera Roll Sharing Suggestions, and tap it.
  • Toggle on ‘Get creative ideas made for you by allowing camera roll cloud processing’ (or similar wording). You may be prompted to allow “cloud processing,” whereby Facebook uploads photos from your device to its servers so Meta AI can analyze them.

Users can now let Facebook’s AI curate camera roll highlights automatically. (Kurt “CyberGuy” Knutsson)

  • Confirm the opt-in and accept any permission prompts. Once enabled, Meta claims that only you see suggestions, and you decide whether to save or share them.

Facebook rolls out AI photo suggestions to make sharing easier than ever. (Kurt “CyberGuy” Knutsson)

You’ll also receive optional notifications when new collage suggestions are ready, giving you the chance to preview and edit them before sharing.

Steps to disable or opt out

  • Follow the same path: Facebook app → Settings & Privacy → Settings → Preferences → Camera Roll Sharing Suggestions.
  • Toggle the feature off or disable “cloud processing.”
  • For extra privacy, you can also revoke Facebook’s access to your camera roll in your phone’s OS settings.

If you’ve already uploaded photos for analysis, Meta says you can delete that data by turning off the feature and clearing saved files under “Your Facebook Information” in Settings.

What this means for you

Here’s how Facebook’s new AI photo feature could change the way you share, save and see your favorite moments online.

  • More sharing without the effort. You capture the moment, Facebook helps polish it. The barrier of “this photo isn’t good enough” gets lowered.
  • Greater visibility for memories. That vacation scrapbook photo or family snap buried in your camera roll might now get a second life.
  • Full control remains. You decide whether to share the suggested edit or keep it private. Meta emphasizes that the suggestions are shown only to you unless you choose to share.
  • Privacy considerations. Even though Meta says your photos won’t be used to train AI unless you edit or share them, they are uploaded to Meta’s cloud when you opt in. Meta confirms the uploaded photos aren’t used for ad targeting or facial recognition and are stored only temporarily for processing before being deleted.
  • Limited rollout. At present, U.S. and Canada only; international users may need to wait.

Kurt’s key takeaways

This move by Facebook addresses a common pain point (photos that don’t get shared) and uses AI to make sharing nearly effortless. If you’re an active Facebook user who takes many photos and wants to share more of them, this feature could be a welcome boost. But if you’re cautious about how your private media may be handled, the opt-out path is important and worth using. Either way, it reflects how AI is quietly reshaping everyday apps.

Will you turn on Facebook’s AI-powered photo suggestion feature or keep your camera roll private just the way it is? Let us know by writing to us at Cyberguy.com.


Technology

You need to listen to the brutally oppressive I’ve Seen All I Need to See

There are only a handful of albums that I think qualify as genuinely scary. You Won’t Get What You Want by Daughters and To Be Kind by Swans both immediately come to mind. But those records come with… let’s say, baggage. I’ve Seen All I Need to See lacks some of the atmospheric spookiness of To Be Kind and the flashes of pop-tinged menace of You Won’t Get What You Want, but it makes up for that with unrelenting brutality. It’s not the soundtrack to a slasher film; it’s the most violent scene in the bleakest horror film, rendered as blown-out drums and detuned guitar.

The album opens with a reading of Douglas Dunn’s The Kaleidoscope, a poem about being trapped in a cycle of grief, as sparse drums boom arrhythmically alongside bursts of noise and a low metallic drone. As it transitions into the distant shriek of vocalist/guitarist Chip King, “A Lament” sputters in fits and starts as it struggles to take flight.

That sets the tone for the record, which is less a collection of songs and more a relentless monolith erected in tribute to the power of distortion. And this is where I admit that I’ve Seen All I Need to See won’t be for everyone. It’s largely atonal, tracks can blend into each other, and even when the drums pick the pace up beyond a funeral dirge, the songs feel weighed down, like the band is trying to play their way out of a bog.

That’s not to say there aren’t moments of catharsis to be found. “The City is Shelled” in particular erupts toward its back end, as King’s vocals become a Goblin-esque croak over pounding piano chords, delivering one of the few moments of genuine melodicism (even if it’s buried under a skyscraper of fuzz).

Even though it’s only 38 minutes long, I’ve Seen All I Need to See can at times feel like an endurance exercise. But, like a marathon, it’s worth the effort. There is beauty in its brutality. It’s haunting and vicious in the way that, say, Bring Her Back is. Good art is not necessarily pleasant art.

If you’re looking for a record that conjures horror-movie vibes without devolving into camp, something that feels genuinely dangerous and frightening rather than merely kind of spooky, The Body’s I’ve Seen All I Need to See is it. The record is available on Bandcamp and most streaming services, including Apple Music, Tidal, Deezer, YouTube Music, and Spotify.

Technology

Teen sues AI tool maker over fake nude images

A teenager in New Jersey has filed a major lawsuit against the company behind an artificial intelligence (AI) “clothes removal” tool that allegedly created a fake nude image of her. 

The case has drawn national attention because it shows how AI can invade privacy in harmful ways. The lawsuit was filed to protect students and teens who share photos online and to show how easily AI tools can exploit their images.

How the fake nude images were created and shared

When she was 14, the plaintiff posted a few photos of herself on social media. A male classmate used an AI tool called ClothOff to remove her clothing in one of those pictures. The altered photo kept her face, making it look real.

The fake image quickly spread through group chats and social media. Now 17, she is suing AI/Robotics Venture Strategy 3 Ltd., the company that operates ClothOff. A Yale Law School professor, several students and a trial attorney filed the case on her behalf.

A New Jersey teen is suing the creators of an AI tool that made a fake nude image of her. (iStock)

The suit asks the court to delete all fake images and stop the company from using them to train AI models. It also seeks to remove the tool from the internet and provide financial compensation for emotional harm and loss of privacy.

The legal fight against deepfake abuse

States across the U.S. are responding to the rise of AI-generated sexual content. More than 45 states have passed or proposed laws criminalizing nonconsensual deepfakes. In New Jersey, creating or sharing deceptive AI media can lead to prison time and fines.

At the federal level, the Take It Down Act requires companies to remove nonconsensual images within 48 hours after a valid request. Despite new laws, prosecutors still face challenges when developers live overseas or operate through hidden platforms.

The lawsuit aims to stop the spread of deepfake “clothes-removal” apps and protect victims’ privacy. (iStock)

Why legal experts say this case could set a national precedent

Experts believe this case could reshape how courts view AI liability. Judges must decide whether AI developers are responsible when people misuse their tools. They also need to consider whether the software itself can be an instrument of harm.

The lawsuit highlights another question: How can victims prove damage when no physical act occurred, but the harm feels real? The outcome may define how future deepfake victims seek justice.

Is ClothOff still available?

Reports indicate that ClothOff may no longer be accessible in some countries, such as the United Kingdom, where it was blocked after public backlash. However, users in other regions, including the U.S., still appear able to reach the company’s web platform, which continues to advertise tools that “remove clothes from photos.”

On its official website, the company includes a short disclaimer addressing the ethics of its technology. It states, “Is it ethical to use AI generators to create images? Using AI to create ‘deepnude’ style images raises ethical considerations. We encourage users to approach this with an understanding of responsibility and respect for others’ privacy, ensuring that the use of undress app is done with full awareness of ethical implications.”

Whether fully operational or partly restricted, ClothOff’s ongoing presence online continues to raise serious legal and moral questions about how far AI developers should go in allowing such image-manipulation tools to exist.

This case could set a national precedent for holding AI companies accountable for misuse of their tools. (Kurt “CyberGuy” Knutsson)

Why this AI lawsuit matters for everyone online

The ability to make fake nude images from a simple photo threatens anyone with an online presence. Teens face special risks because AI tools are easy to use and share. The lawsuit draws attention to the emotional harm and humiliation caused by such images.

Parents and educators worry about how quickly this technology spreads through schools. Lawmakers are under pressure to modernize privacy laws. Companies that host or enable these tools must now consider stronger safeguards and faster takedown systems.

What this means for you

If you become a target of an AI-generated image, act quickly. Save screenshots, links and dates before the content disappears. Request immediate removal from websites that host the image. Seek legal help to understand your rights under state and federal law.

Parents should discuss digital safety openly. Even innocent photos can be misused. Knowing how AI works helps teens stay alert and make safer online choices. You can also demand stricter AI rules that prioritize consent and accountability.

Kurt’s key takeaways

This lawsuit is not only about one teenager. It represents a turning point in how courts handle digital abuse. The case challenges the idea that AI tools are neutral and asks whether their creators share responsibility for harm. We must decide how to balance innovation with human rights. The court’s ruling could influence how future AI laws evolve and how victims seek justice.

If an AI tool creates an image that destroys someone’s reputation, should the company that made it face the same punishment as the person who shared it? Let us know by writing to us at Cyberguy.com.
