Technology

Here’s the first teaser for Squid Game’s third and final season

Picking up from season 2’s devastating cliffhanger, season 3 thrusts Gi-hun (Player 456) back into the brutal heart of the games, determined to dismantle them once and for all. Still haunted by the betrayal and loss of his closest ally, Jung-bae (Player 390), Gi-hun faces new perils — including the Front Man, who shockingly infiltrated their rebellion disguised as Player 001.


Technology

Tesla says it delivered its first car autonomously from factory to customer


Tesla said it completed its first fully autonomous vehicle delivery from factory to customer. A video posted on X shows the vehicle — a Tesla Model Y — leaving the company’s Austin Gigafactory, driving on the highway, passing through suburban sprawl and residential neighborhoods, before arriving at a customer’s apartment building.

Tesla CEO Elon Musk had promised the first fully autonomous delivery would take place on June 28th. But on Friday, he announced that the milestone had been achieved a day early.

“There were no people in the car at all and no remote operators in control at any point. FULLY autonomous!” Musk wrote on X. “To the best of our knowledge, this is the first fully autonomous drive with no people in the car or remotely operating the car on a public highway.”

Tesla’s achievement is still notable, especially considering the rocky rollout of the company’s robotaxi service. The robotaxis launched with safety monitors in the passenger seat who had access to a kill switch, and within a few days the vehicles were recorded committing several safety lapses, including crossing the double-yellow line into the opposite lane of traffic and hard braking in the middle of the road for no apparent reason.

By operating a fully autonomous vehicle on highways without a safety monitor present in the vehicle, Tesla can demonstrate that its Full Self-Driving system is getting closer to Musk’s promise of “unsupervised” driving. The robotaxis aren’t quite there yet, still requiring safety monitors and remote supervisors. That leaves Tesla in an odd position: confident its technology can handle the driving when no one is in the vehicle, but less confident when a human being is riding inside.


Update, June 28th: Added Tesla’s 30-minute “long version” of the trip.


Technology

5.4 million patient records exposed in healthcare data breach



Over the past decade, software companies have built solutions for nearly every industry, including healthcare. One term you might be familiar with is software as a service (SaaS), a model by which software is accessed online through a subscription rather than installed on individual machines. 

In healthcare, SaaS providers are now a common part of the ecosystem. But, recently, many of them have made headlines for the wrong reasons. 

Several data breaches have been traced back to vulnerabilities at these third-party service providers. The latest incident comes from one such firm, which has now confirmed that hackers stole the health information of over 5 million people in the United States during a cyberattack in January.


SaaS firm leads to major healthcare blunder

Episource, a big name in healthcare data analytics and coding services, has confirmed a major cybersecurity incident (via Bleeping Computer). The breach involved sensitive health information belonging to over 5 million people in the United States. The company first noticed suspicious system activity Feb. 6, 2025, but the actual compromise began ten days earlier.

An internal investigation revealed that hackers accessed and copied private data between Jan. 27 and Feb. 6. The company insists that no financial information was taken, but the stolen records do include names, contact details, Social Security numbers, Medicaid IDs and full medical histories.

Episource claims there’s no evidence the information has been misused, but the fact that the company hasn’t seen the fallout yet doesn’t mean it isn’t happening. Once data like this is out, it spreads fast, and the consequences don’t wait for official confirmation.


Why healthcare SaaS is a growing target

The healthcare industry has embraced cloud-based services to improve efficiency, scale operations and reduce overhead. Companies like Episource enable healthcare payers to manage coding and risk adjustment at a much larger scale. But this shift has also introduced new risks. When third-party vendors handle patient data, the security of that data becomes dependent on their infrastructure.

Healthcare data is among the most valuable types of personal information for hackers. Unlike payment card data, which can be changed quickly, medical and identity records are long-term assets on the dark web. These breaches can lead to insurance fraud, identity theft and even blackmail.

Episource is not alone in facing this kind of attack. In the past few years, several healthcare SaaS providers have faced breaches, including Accellion and Blackbaud. These incidents have affected millions of patients and have led to class-action lawsuits and stricter government scrutiny.


5 ways you can protect yourself from a healthcare data breach

If your information was part of the healthcare breach or any similar one, it’s worth taking a few steps to protect yourself.

1. Consider identity theft protection services: Since the healthcare data breach exposed personal and financial information, it’s crucial to stay proactive against identity theft. Identity theft protection services offer continuous monitoring of your credit reports, Social Security number and even the dark web to detect if your information is being misused. 

These services send you real-time alerts about suspicious activity, such as new credit inquiries or attempts to open accounts in your name, helping you act quickly before serious damage occurs. Beyond monitoring, many identity theft protection companies provide dedicated recovery specialists who assist you in resolving fraud issues, disputing unauthorized charges and restoring your identity if it’s compromised. See my tips and best picks on how to protect yourself from identity theft.

2. Use personal data removal services: The healthcare data breach leaked loads of information about you, and all of it could end up in the public domain, essentially giving anyone an opportunity to scam you.


One proactive step is to consider personal data removal services, which specialize in continuously monitoring and removing your information from various online databases and websites. No service can promise to remove all your data from the internet, but a removal service is valuable if you want to automate the process of removing your information from hundreds of sites over a longer period of time. Check out my top picks for data removal services here.

Get a free scan to find out if your personal information is already out on the web.

3. Have strong antivirus software: Hackers have people’s email addresses and full names, which makes it easy for them to send you a phishing link that installs malware and steals all your data. These messages are socially engineered to look legitimate, and spotting them is nearly impossible if you’re not careful. However, you’re not without defenses.

The best way to safeguard yourself from malicious links is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices.

4. Enable two-factor authentication: While passwords weren’t part of the data breach, you still need to enable two-factor authentication (2FA). It gives you an extra layer of security on all your important accounts, including email, banking and social media. 2FA requires you to provide a second piece of information, such as a code sent to your phone, in addition to your password when logging in. This makes it significantly harder for hackers to access your accounts, even if they have your password. Enabling 2FA can greatly reduce the risk of unauthorized access and protect your sensitive data.


5. Be wary of mailbox communications: Bad actors may also try to scam you through snail mail. The data leak gives them access to your address. They may impersonate people or brands you know and use themes that require urgent attention, such as missed deliveries, account suspensions and security alerts.


Kurt’s key takeaways

What makes this breach especially alarming is that many of the affected patients may have never even heard of Episource. As a business-to-business vendor, Episource operates in the background, working with insurers and healthcare providers, not with patients directly. The people affected were customers of those companies, yet it’s their most sensitive data now at risk because of a third party they never chose or trusted. This kind of indirect relationship muddies the waters when it comes to responsibility and makes it even harder to demand transparency or hold anyone accountable.

Do you think healthcare companies are investing enough in their cybersecurity infrastructure? Let us know by writing us at Cyberguy.com/Contact


For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter


Copyright 2025 CyberGuy.com.  All rights reserved. 
 


Technology

Facebook is starting to feed its Meta AI with private, unpublished photos


For years, Meta trained its AI programs using the billions of public images uploaded by users to Facebook and Instagram’s servers. Now, it’s also hoping to access the billions of images that users haven’t uploaded to those servers. Meta tells The Verge that it’s not currently training its AI models on those photos, but it would not answer our questions about whether it might do so in the future, or what rights it will hold over your camera roll images.

On Friday, TechCrunch reported that Facebook users trying to post something on the Story feature have encountered pop-up messages asking if they’d like to opt into “cloud processing”, which would allow Facebook to “select media from your camera roll and upload it to our cloud on a regular basis”, to generate “ideas like collages, recaps, AI restyling or themes like birthdays or graduations.”

By allowing this feature, the message continues, users agree to Meta’s AI terms, which allow its AI to analyze “media and facial features” of those unpublished photos, as well as the date the photos were taken and the presence of other people or objects in them. Users also grant Meta the right to “retain and use” that personal information.

Meta recently acknowledged that it scraped the data from all the content that’s been published on Facebook and Instagram since 2007 to train its generative AI models. Though the company stated that it’s only used public posts uploaded from adult users over the age of 18, it has long been vague about exactly what “public” entails, as well as what counted as an “adult user” in 2007.

Meta tells The Verge that, for now, it’s not training on your unpublished photos with this new feature. “[The Verge’s headline] implies we are currently training our AI models with these photos, which we aren’t. This test doesn’t use people’s photos to improve or train our AI models,” Meta public affairs manager Ryan Daniels tells The Verge.

Advertisement

Meta’s public stance is that the feature is “very early,” innocuous and entirely opt-in: “We’re exploring ways to make content sharing easier for people on Facebook by testing suggestions of ready-to-share and curated content from a person’s camera roll. These suggestions are opt-in only and only shown to you – unless you decide to share them – and can be turned off at any time. Camera roll media may be used to improve these suggestions, but are not used to improve AI models in this test,” reads a statement from Meta comms manager Maria Cubeta.

On its face, that might sound not altogether different from Google Photos, which similarly might suggest AI tweaks to your images after you opt into Google Gemini. But unlike Google, which explicitly states that it does not train generative AI models with personal data gleaned from Google Photos, Meta’s current AI usage terms, which have been in place since June 23, 2024, do not provide any clarity as to whether unpublished photos accessed through “cloud processing” are exempt from being used as training data — and Meta would not clear that up for us going forward.

And while Daniels and Cubeta tell The Verge that opting in only gives Meta permission to retrieve 30 days’ worth of your unpublished camera roll at a time, it appears that Meta is retaining some data for longer than that. “Camera roll suggestions based on themes, such as pets, weddings and graduations, may include media that is older than 30 days,” Meta writes.

Thankfully, Facebook users do have an option to turn off camera roll cloud processing in their settings, which, once activated, will also start removing unpublished photos from the cloud after 30 days.

The feature suggests a new incursion into our previously private data, one that bypasses the point of friction known as conscientiously deciding to post a photo for public consumption. And according to Reddit posts found by TechCrunch, Meta is already offering AI restyling suggestions on previously uploaded photos, even to users who weren’t aware of the feature: one user reported that Facebook had Studio Ghiblified her wedding photos without her knowledge.


Correction, June 27th: An earlier version of this story implied Meta was already training AI on these photos, but Meta now states that the current test does not yet do so. Also added statement and additional details from Meta.
