Technology
Apple app password scam email warning
NEW: You can now listen to Fox News articles!
You open your inbox and see a subject line from Apple. It says an app-specific password was generated for your account. Then your stomach drops.
The email claims you authorized a $2,990.02 PayPal payment. It even includes a confirmation number. It urges you to call a support number right away. There is just one problem. You never did any of this.
If that sounds familiar, you are likely looking at a classic Apple impersonation scam.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.
Scammers are using Apple branding and urgent language to trick victims into calling a fake support number. (Kevin Carter/Getty Images)
What the fake Apple email says
The message claims:
- An app-specific password was generated
- A large PayPal payment was approved
- You should call the listed phone number to report an unauthorized transaction
At first glance, it looks polished. It uses Apple branding. It mentions Apple Support. It includes a confirmation code. However, once you slow down and read it carefully, the red flags jump out.
Red flags in the Apple app-specific password scam email
Before you panic or pick up the phone, take a closer look at these warning signs that expose this Apple app-specific password scam email.
1) The ‘To’ address is not you
The “To” field shows an email address that is not the recipient’s actual address. That is a huge warning sign. Legitimate Apple security emails are sent directly to the Apple ID email on file. If the visible recipient address is different from yours, the message was likely mass-mailed or spoofed. Scammers blast these emails to thousands of addresses at once. They do not customize the recipient line properly. That mismatch alone is enough to treat the message as fraudulent.
2) The sudden $2,990 charge
Scammers love big numbers. A charge close to $3,000 is designed to trigger panic. When people feel fear, they act fast. That is exactly what the criminals want.
3) The ‘call this number now’ trick
The email pushes you to call a specific phone number. That number does not belong to Apple. Real Apple security emails tell you to visit your account directly. They do not pressure you to call a random support line.
If you call, the scammer may:
- Ask for your Apple ID password
- Request remote access to your computer
- Tell you to move money to “secure” your account
That is how the real damage begins.
4) Bold links that push you to click
The email includes bold links such as Apple Account and Apple Support. They are designed to look official and trustworthy. However, scammers often hide malicious URLs behind legitimate-looking text. When you hover over the link, the actual destination may be a completely different website. That is why you should never click links inside a suspicious email. Instead, open a new browser window and type the official website address yourself.
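For readers who want to see what "hover before you click" catches, here is a minimal Python sketch that pulls the visible text and the real destination out of an HTML email body using only the standard library. The snippet and the quickinvoicesus.com URL are hypothetical, modeled on the message described above, not taken from an actual scam email.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (visible text, actual href) pairs from an HTML email body."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

# Hypothetical snippet modeled on the scam email's official-looking links
body = '<a href="https://relay.quickinvoicesus.com/login">Apple Support</a>'
auditor = LinkAuditor()
auditor.feed(body)
for text, href in auditor.links:
    host = urlparse(href).hostname or ""
    if not host.endswith("apple.com"):
        print(f"Suspicious: '{text}' actually points to {href}")
```

The same mismatch is what you see when you hover over the link in your mail client: friendly text on top, an unrelated domain underneath.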
5) Mixed messages about passwords and payments
The subject mentions an app-specific password. The body suddenly talks about a PayPal transaction. That mismatch is a major warning sign. Scammers often combine multiple fears into one message to increase urgency.
6) Generic greeting
The email opens with “Dear Customer.” Apple typically addresses you by your name. Generic greetings are common in bulk phishing emails.
SPYWARE CAN HIJACK YOUR PHONE IN SECONDS
A fake Apple email claiming a $2,990 PayPal charge is targeting inboxes in a new impersonation scam. (Qilai Shen/Bloomberg via Getty Images)
More subtle signs this is a scam
There are several additional details that help confirm this is not real.
The reply-to address may look legitimate at first glance
In this case, the Reply-To field shows appleid-usen@email.apple.com, which appears to be an official Apple domain. However, a familiar-looking domain does not automatically prove an email is legitimate. Scammers can spoof visible sender information. They can manipulate display names and certain header fields so a message appears to come from a trusted company. Most people never see the deeper technical authentication details, such as SPF, DKIM or DMARC validation. That means a legitimate-looking sender address can still appear in a fraudulent message. When evaluating a suspicious Apple app-specific password email, weigh all the red flags together, not just the reply-to address.
If the email also includes:
- A mismatched “To” field
- A large unexpected payment
- An urgent phone number
- Mixed messaging about passwords and PayPal
Those warning signs matter far more than a familiar-looking domain.
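If you are comfortable looking at raw message headers, the SPF, DKIM and DMARC results mentioned above are usually recorded in an Authentication-Results header that your mail provider adds. This Python sketch checks a saved message for passing results; the headers below are invented to mirror the scam email, not copied from a real message.

```python
from email import message_from_string
from email.policy import default

# Hypothetical raw headers; in practice you would read the full .eml file
# exported from your mail client.
raw = """\
From: Apple <appleid-usen@email.apple.com>
To: someone-else@example.net
Subject: App-specific password generated
Authentication-Results: mx.example.net; spf=fail; dkim=fail; dmarc=fail
"""

msg = message_from_string(raw, policy=default)
auth = str(msg.get("Authentication-Results", ""))

flags = []
for check in ("spf", "dkim", "dmarc"):
    if f"{check}=pass" not in auth:
        flags.append(f"{check} did not pass")
print(flags)
```

A spoofed display name costs a scammer nothing, but faking a passing DKIM signature for apple.com is far harder, which is why these hidden results are worth more than the visible sender line.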
The payment language feels forced
The email says: “You authorized a USD 2,990.02 payment to apple.com using PayPal.” That wording feels stiff and unnatural. Apple receipts usually reference specific products, subscriptions or invoice details. They do not vaguely reference a large PayPal payment tied to a password notification. The mismatch between a password alert and a major payment should raise suspicion immediately.
The masked email formatting looks odd
The message shows a masked address with dots and an unusual domain, such as relay.quickinvoicesus.com. That is not standard Apple formatting. Apple typically references your Apple ID directly, not an unrelated invoice-style domain. That strange domain inclusion is another strong indicator that this email is fraudulent.
The pressure to act fast
The message urges you to call immediately to report an unauthorized transaction. High urgency is a hallmark of phishing. Legitimate companies encourage you to log in securely to your account. They do not rush you into calling a third-party phone number. When you feel rushed, pause. Scammers rely on speed and emotion.
What this scam is really trying to do
This is a refund scam disguised as a security alert.
The goal is simple. Get you to call the fake support number. Once you are on the phone, the scammer may:
- Ask for your Apple ID password
- Request remote access to your computer
- Guide you through fake refund steps
- Steal banking or PayPal information
In many cases, victims lose far more than the fake $2,990 charge mentioned in the email.
How to check your Apple account safely
If you receive this type of message, pause. Then take control. Instead of clicking links in the email:
- Open a new browser window
- Type appleid.apple.com directly into the address bar
- Log in and review your account activity
If you did not generate an app-specific password and you see no suspicious charges, you are safe. You can also check your PayPal account directly by typing paypal.com into your browser. Never rely on links or phone numbers inside a suspicious email.
Apple app-specific password scam email checklist
Use this simple checklist the next time you get a scary email:
- The “To” field does not match your email
- The greeting says Dear Customer
- There is a large unexpected charge
- You are told to call a number immediately
- The topic feels mismatched, such as password plus payment
If several of these appear together, you are almost certainly dealing with a scam.
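The checklist above can be sketched as a quick scoring function. The patterns here are simplified illustrations for a single suspicious message, not a real spam filter, and the addresses and wording are hypothetical.

```python
import re

# Simplified patterns for the checklist items; real filters are far richer
RED_FLAGS = {
    "generic greeting": re.compile(r"dear customer", re.I),
    "large dollar amount": re.compile(r"(?:USD|\$)\s?\d[\d,]{3,}\.\d{2}", re.I),
    "urgent phone call": re.compile(r"call .* (immediately|now|right away)", re.I),
    "password plus payment mismatch": re.compile(r"app-specific password", re.I),
}

def score_email(subject, body, to_addr, my_addr):
    """Return the checklist items that match; several together suggest a scam."""
    hits = [name for name, pat in RED_FLAGS.items()
            if pat.search(subject + " " + body)]
    if to_addr.lower() != my_addr.lower():
        hits.append('mismatched "To" field')
    return hits

hits = score_email(
    subject="An app-specific password was generated",
    body="Dear Customer, you authorized a USD 2,990.02 payment. Call us immediately.",
    to_addr="random-victim@example.org",
    my_addr="me@example.com",
)
print(hits)
```

Against the example message, all five checklist items fire at once, which is exactly the "several of these together" situation that should make you stop.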
Why Apple and PayPal impersonation scams keep working
Apple has billions of users. PayPal has hundreds of millions more. Both brands are trusted, widely used and connected to sensitive financial information. When criminals attach Apple’s name to a message, people pay attention. When they add PayPal and a large dollar amount, the fear intensifies. That combination is powerful. It blends account security concerns with financial panic. Many people react before they pause to verify the details. That split second of fear is exactly where scammers make their money.
“PayPal does not tolerate fraudulent activity, and we work hard to protect our customers from evolving phishing scams,” a PayPal spokesperson told CyberGuy. “We always encourage consumers to practice vigilance online and to learn how to spot the warning signs of common fraud. We recommend reviewing our best practice tips for avoiding phishing schemes on the PayPal Newsroom, and contacting Customer Support directly through the PayPal app or our Contact page for assistance if you believe you have been targeted by a scam.”
CyberGuy also reached out to Apple for comment.
TAX SEASON SCAMS 2026: FAKE IRS MESSAGES STEALING IDENTITIES
The fraudulent message combines an app-specific password alert with a PayPal charge to create panic. (Christian Charisius/picture alliance via Getty Images)
How to protect yourself from Apple phishing emails
You can reduce your risk from an Apple app-specific password scam email with a few smart habits. These steps protect more than just your Apple account. They protect your entire digital life.
1) Use two-factor authentication
Enable two-factor authentication (2FA) on your Apple ID, PayPal and email accounts. Even if someone guesses your password, they still cannot log in without the second verification step. That extra layer blocks most account takeover attempts.
2) Never click links or call numbers in suspicious emails
If an email tells you to call support or click a link, stop. Instead, open a new browser window and type the official website address yourself. Go directly to appleid.apple.com or paypal.com. Also, make sure you have strong antivirus software installed on your devices. Strong antivirus tools can detect malicious links, block phishing sites and warn you before you land on a fake login page. That protection matters because one click on the wrong link can expose login credentials or install hidden malware. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.
3) Watch for urgency and fear tactics
Scammers push urgency. They use large dollar amounts and phrases like unauthorized transaction to rush you. Pause when you feel panic. Review the details carefully. Legitimate companies do not pressure you into instant action.
4) Keep your devices updated
Install software updates on your phone and computer as soon as they become available. Security patches fix vulnerabilities that attackers exploit. Outdated software makes phishing and malware attacks easier to pull off.
5) Use a password manager and strong, unique passwords
Do not reuse passwords across accounts. If one site gets breached, reused passwords put everything else at risk. A password manager generates long, complex passwords and stores them securely. That way, even if scammers trick you into entering one password somewhere, it will not unlock your other accounts.
Next, see if your email has been exposed in past breaches. Our No. 1 password manager pick includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials. Check out the best expert-reviewed password managers of 2026 at Cyberguy.com.
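To see why generated passwords beat reused ones, here is a short sketch using Python's secrets module, which draws from the operating system's cryptographic random number generator (unlike the random module, which is not meant for security). The length and character set are arbitrary choices for illustration; a password manager handles this for you.

```python
import secrets
import string

def make_password(length=20):
    """Generate a random password from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    # secrets.choice uses a cryptographically secure source of randomness
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = make_password()
print(pw)
```

A 20-character password drawn from 70 symbols has far more possible values than any attacker can guess, and because each one is unique, a breach at one site never unlocks another.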
6) Reduce your exposed personal information
Scammers often find your email address and personal details through data broker sites. Using a reputable data removal service can reduce how much of your personal information is publicly available online. When less of your data floats around the internet, criminals have fewer tools to target you with convincing phishing emails. Less exposure means fewer personalized scams landing in your inbox. Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
7) Report the phishing email
Forward suspicious Apple impersonation emails to reportphishing@apple.com. You can also mark the message as phishing in your email provider. Reporting scams helps improve filters and protect other people from falling victim.
8) Monitor your financial accounts
Even if you did not click anything or call the number, review your bank, PayPal and Apple accounts for unusual activity over the next few days. Early detection limits damage. The faster you spot fraud, the easier it is to reverse.
9) Consider freezing your credit if information was exposed
If you entered personal information or downloaded anything suspicious, consider placing a free credit freeze with Equifax, Experian and TransUnion. A credit freeze prevents criminals from opening new accounts in your name. To learn more about how to do this, go to Cyberguy.com and search “How to freeze your credit.”
Kurt’s key takeaways
If you received an Apple app-specific password email with a $2,990 charge you did not authorize, trust your instincts. It is almost certainly a scam. Do not call the number. Do not click the links. Go directly to your official account pages and check for yourself. A few calm minutes can save you thousands of dollars and hours of stress.
When phishing scams use trusted brands like Apple so easily, is the tech industry truly staying ahead of cybercriminals? Let us know your thoughts by writing to us at Cyberguy.com.
Copyright 2026 CyberGuy.com. All rights reserved.
Technology
300,000 Chrome users hit by fake AI extensions
Your web browser may feel like a safe place, especially when you install helpful tools that promise to make your life easier. But security researchers have uncovered a dangerous campaign in which more than 300,000 people installed Chrome extensions pretending to be artificial intelligence (AI) assistants. Instead of helping, these fake tools secretly collect sensitive information like your emails, passwords and browsing activity.
They used familiar names like ChatGPT, Gemini and AI Assistant. If you use Chrome and have installed any AI-related extension, your personal information may already be exposed. Even worse, some of these malicious extensions are still available today, putting more people at risk without their knowledge.
More than 300,000 Chrome users installed fake AI extensions that secretly harvested sensitive data. (Kurt “CyberGuy” Knutsson)
What you need to know about fake AI extensions
Security researchers at browser security company LayerX discovered a large campaign involving 30 malicious Chrome extensions disguised as AI-powered assistants (via BleepingComputer). Together, these extensions were installed more than 300,000 times by unsuspecting users.
Some of the most popular extensions included names like AI Sidebar with 70,000 users, AI Assistant with 60,000 users, ChatGPT Translate with 30,000 users, and Google Gemini with 10,000 users. Another extension called Gemini AI Sidebar had 80,000 users before it was removed.
These extensions were distributed through the official Chrome Web Store, which made them appear legitimate and trustworthy. Even more concerning, researchers found that many of these extensions were connected to the same malicious server, showing they were part of a coordinated effort.
While some extensions have since been removed, others remain available. This means new users could still unknowingly install them and expose their personal data. Here’s the list of the affected extensions:
- AI Assistant
- Llama
- Gemini AI Sidebar
- AI Sidebar
- ChatGPT Sidebar
- Grok
- Asking ChatGPT
- ChatGBT
- Chat Bot GPT
- Grok Chatbot
- Chat With Gemini
- XAI
- Google Gemini
- Ask Gemini
- AI Letter Generator
- AI Message Generator
- AI Translator
- AI For Translation
- AI Cover Letter Generator
- AI Image Generator ChatGPT
- Ai Wallpaper Generator
- Ai Picture Generator
- DeepSeek Download
- AI Email Writer
- Email Generator AI
- DeepSeek Chat
- ChatGPT Picture Generator
- ChatGPT Translate
- AI GPT
- ChatGPT Translation
- ChatGPT for Gmail
FAKE AI CHAT RESULTS ARE SPREADING DANGEROUS MAC MALWARE
These malicious tools were listed in the official Chrome Web Store, making them appear legitimate and trustworthy. (LayerX)
How the fake AI Chrome extension attack works
These fake extensions pretend to offer helpful AI features, such as translating text, summarizing emails, or acting as an AI assistant. But behind the scenes, they quietly monitor what you are doing online.
Once installed, the extension gains permission to view and interact with the websites you visit. This allows it to read the contents of web pages, including login screens where you enter your username and password.
In some cases, the extensions specifically targeted Gmail. They could read your email messages directly from your browser, including emails you received and even drafts you were still writing. This means attackers could access private conversations, financial information and sensitive personal details.
The extensions then sent this information to servers controlled by the attackers. Because they loaded content remotely, the attackers could change their behavior at any time without needing to update the extension.
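The broad access described above comes from the permissions an extension declares in its manifest. This sketch flags host patterns that grant read access to every site you visit; the manifest dictionary is a hypothetical example modeled on this kind of extension, not the actual manifest of any named extension, though the field names follow Chrome's extension manifest format.

```python
# Hypothetical Manifest V3 fields for a data-stealing "AI assistant"
manifest = {
    "name": "AI Sidebar",
    "permissions": ["storage", "scripting", "tabs"],
    "host_permissions": ["<all_urls>"],
    "content_scripts": [{"matches": ["<all_urls>"], "js": ["inject.js"]}],
}

# Match patterns that cover every page, including login screens and webmail
RISKY = {"<all_urls>", "*://*/*", "https://*/*"}

def risky_scopes(m):
    """Return declared host patterns that let an extension read every page."""
    found = [p for p in m.get("host_permissions", []) if p in RISKY]
    for cs in m.get("content_scripts", []):
        found += [p for p in cs.get("matches", []) if p in RISKY]
    return found

print(risky_scopes(manifest))
```

A translation tool has plausible reasons to request broad access, which is precisely why this permission model is so easy for attackers to abuse: the dangerous grant looks identical to the legitimate one.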
Some versions could also activate voice features through your browser. This could potentially capture spoken conversations near your device and send transcripts back to the attackers.
If you installed one of these extensions, attackers may already have access to extremely sensitive information. This includes your email content, login credentials, browsing habits and possibly even voice recordings.
We reached out to Google for comment, and a spokesperson told CyberGuy that the company “can confirm that the extensions from this report have all been removed from the Google Web Store.”
BROWSER EXTENSION MALWARE INFECTED 8.8M USERS IN DARKSPECTRE ATTACK
Once installed, the extensions could read emails, capture passwords, monitor browsing activity and send the data to attacker-controlled servers. (Bildquelle/ullstein bild via Getty Images)
7 ways you can protect yourself from malicious Chrome extensions
If you have ever installed an AI-related Chrome extension, taking a few simple precautions now can help protect your accounts and prevent further damage.
1) Remove any suspicious or unused browser extensions
On a Windows PC or Mac, open Chrome and type chrome://extensions into the address bar. Review every extension listed. If you see anything unfamiliar, especially AI assistants you don’t remember installing, click “Remove” immediately. Malicious extensions depend on going unnoticed. Removing them stops further data collection and cuts off the attacker’s access to your information.
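If you prefer auditing from the command line, this Python sketch walks a Chrome profile's Extensions folder and prints the name from each manifest.json. It assumes Chrome's usual on-disk layout of Extensions/<id>/<version>/manifest.json; the profile paths in the comments are common defaults and may differ on your system.

```python
import json
from pathlib import Path

def list_extensions(profile_dir):
    """List extension names found under a Chrome profile's Extensions folder.

    Names beginning with "__MSG_" are locale placeholders that Chrome
    resolves from the extension's _locales folder; they are reported as-is.
    """
    names = []
    for manifest in Path(profile_dir, "Extensions").glob("*/*/manifest.json"):
        try:
            data = json.loads(manifest.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue
        names.append(data.get("name", "(unnamed)"))
    return sorted(names)

# Typical profile locations (adjust for your OS and profile name):
#   Linux:   ~/.config/google-chrome/Default
#   macOS:   ~/Library/Application Support/Google/Chrome/Default
#   Windows: %LOCALAPPDATA%\Google\Chrome\User Data\Default
```

Comparing the printed names against the list of affected extensions above is a fast way to confirm what is actually installed, independent of what the chrome://extensions page shows.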
2) Change your passwords
If you installed any suspicious extension, assume your passwords may be compromised. Start by changing your email password first, since email controls access to most other accounts. Then update passwords for banking, shopping and social media accounts. This prevents attackers from using stolen credentials to break into your accounts.
3) Use a password manager to create and protect strong passwords
A password manager generates unique, complex passwords for each account and stores them securely. This prevents attackers from accessing multiple accounts if one password is stolen. Password managers also alert you if your login credentials appear in known data breaches, helping you respond quickly and protect your identity. Check out the best expert-reviewed password managers of 2026 at Cyberguy.com.
4) Install strong antivirus software and keep it active
Good antivirus software can detect malicious browser extensions, spyware, and other hidden threats. It scans your system for suspicious activity and blocks harmful programs before they can steal your information. This adds an important layer of protection that works continuously in the background to keep your device safe. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.
5) Use an identity theft protection service
Identity theft protection services monitor your personal data, including email addresses, financial accounts, and Social Security numbers, for signs of misuse. If criminals try to open accounts or commit fraud using your information, you receive alerts quickly. Early detection allows you to act fast and limit financial and personal damage. See my tips and best picks on how to protect yourself from identity theft at Cyberguy.com.
6) Keep your browser and computer fully updated
Software updates fix security vulnerabilities that attackers exploit. Enable automatic updates for Chrome and your operating system so you always have the latest protections. These updates strengthen your defenses against malicious extensions and prevent attackers from taking advantage of known weaknesses.
7) Use a personal data removal service
Personal data removal services scan data broker websites that collect and sell your personal information. They help remove your data from these sites, reducing what attackers can find and use against you. Less exposed information means fewer opportunities for criminals to target you with scams, identity theft or phishing attacks.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
Kurt’s key takeaway
Even tools designed to make your life easier can become tools for cybercriminals. Malicious extensions often hide behind trusted names and convincing features, making them difficult to spot. You can significantly reduce your risk by reviewing your browser extensions regularly, removing anything suspicious and using protective tools like password managers and strong antivirus software.
Have you checked your browser extensions recently? Let us know your thoughts by writing to us at Cyberguy.com.
Technology
Anthropic refuses Pentagon’s new terms, standing firm on lethal autonomous weapons and mass surveillance
Less than 24 hours before the deadline in an ultimatum issued by the Pentagon, Anthropic has refused the Department of Defense’s demands for unrestricted access to its AI.
It’s the culmination of a dramatic exchange of public statements, social media posts, and behind-the-scenes negotiations, coming down to Defense Secretary Pete Hegseth’s desire to renegotiate all AI labs’ current contracts with the military. But Anthropic, so far, has refused to back down from its two current red lines: no mass surveillance of Americans, and no lethal autonomous weapons (or weapons with license to kill targets with no human oversight whatsoever). OpenAI and xAI had reportedly already agreed to the new terms, while Anthropic’s refusal had led to CEO Dario Amodei being summoned to the White House this week for a meeting with Hegseth himself, in which the Secretary reportedly issued an ultimatum to the CEO to back down by the end of business day on Friday or else.
In a statement late Thursday, Amodei wrote, “I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries. Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community.”
He added that the company has “never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner” but that in a “narrow set of cases, we believe AI can undermine, rather than defend, democratic values” — going on to specifically mention mass domestic surveillance and fully autonomous weapons. (Amodei mentioned that “partial autonomous weapons … are vital to the defense of democracy” and that fully autonomous weapons may eventually “prove critical for our national defense,” but that “today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.” He did not rule out Anthropic acquiescing to the military’s use of fully autonomous weapons in the future but mentioned that they were not ready now.)
The Pentagon had already reportedly asked major defense contractors to assess their dependence on Anthropic’s Claude, which could be seen as the first step to designating the company a “supply chain risk” – a public threat that the Pentagon had made recently (and a classification usually reserved for threats to national security). The Pentagon was also reportedly considering invoking the Defense Production Act to make Anthropic comply.
Amodei wrote in his statement that the Pentagon’s “threats do not change our position: we cannot in good conscience accede to their request.” He also wrote that “should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.”
Technology
Amazon shelves Blue Jay warehouse robot
Amazon made a lot of noise in October when it unveiled Blue Jay, a multi-armed warehouse robot built to speed up same-day deliveries. Just months later, the company quietly ended the program.
The robot’s core technology will live on in other projects. Still, Blue Jay itself is done.
That sudden shift raises an important question. If one of the world’s most advanced logistics companies cannot make a high-profile robot work at scale, what does that say about the future of artificial intelligence (AI) in the real world?
Blue Jay was designed as a ceiling-mounted robot that could sort and handle multiple packages at once to speed up same-day delivery. (Amazon)
What Blue Jay was supposed to do
Blue Jay was not a simple conveyor belt upgrade. It was a ceiling-mounted system designed to recognize and sort multiple packages at once. Using AI-powered perception models, the robot could:
- Identify packages in motion
- Coordinate several arms at the same time
- Manipulate items with speed and precision
Amazon said it developed the system in under a year. That pace alone was impressive. The goal was clear: move more packages faster while reducing strain on workers in same-day fulfillment centers. On paper, that sounds like a win for everyone.
Why Blue Jay ran into trouble
Despite the hype, Blue Jay faced steep engineering and cost challenges. First, the robot was mounted to the ceiling. That design required complex installation and tight integration into Amazon’s Local Vending Machine warehouses. Those facilities operate as massive, single structures with automation baked into the building itself.
There was little room to reconfigure hardware once installed. That rigidity likely became a liability. In software, AI can pivot overnight with a code update. In the physical world, changing course means retooling steel beams, motors and entire layouts. That takes time and serious money. Several employees who worked on Blue Jay have already moved to other robotics projects.
The company reportedly continues to experiment with and improve its warehouse systems. The technology behind Blue Jay will, in fact, inform future designs. In other words, the robot failed. The ideas did not.
WAYMO’S CHEAPER ROBOTAXI TECH COULD HELP EXPAND RIDES FAST
Engineering complexity and high installation costs limited how easily Blue Jay could scale inside Amazon’s tightly integrated warehouse system. (Amazon)
From LVM to Orbital: A strategic shift
Amazon’s next move centers on a new warehouse architecture called Orbital. Unlike the older Local Vending Machine model, Orbital is modular. It can be built from smaller units and deployed faster in different layouts.
That flexibility matters. Retail is fragmenting. Customers expect same-day delivery from urban hubs, local stores and even grocery locations. Orbital could allow Amazon to place micro-fulfillment centers behind retail stores, including Whole Foods locations. That would help it compete more directly with Walmart, which already has a strong grocery footprint.
Alongside Orbital, Amazon is developing a new robotics system called Flex Cell. Unlike Blue Jay’s ceiling mount, Flex Cell is expected to sit on the floor.
That small design change signals something bigger. Amazon appears to be moving from massive centralized automation to smaller, adaptable systems built for the unpredictable realities of local retail.
What this means for your deliveries
If you order from Amazon regularly, you might wonder whether this affects you. In the short term, probably not. Your packages will still show up. Same-day and next-day delivery remain core priorities. However, the long-term story is more interesting. Amazon’s robotics strategy shapes how fast your order arrives, how much you pay and how local warehouses operate in your community.
If Orbital works, you could see:
- Faster delivery from smaller neighborhood hubs
- Better handling of chilled and perishable items
- More automation in retail backrooms
If it struggles, same-day expansion could slow or become more expensive. That tension reflects a broader truth about AI. Writing code is one thing. Teaching a robot to lift boxes in a real warehouse without breaking down is another.
AI TRUCK SYSTEM MATCHES TOP HUMAN DRIVERS IN MASSIVE SAFETY SHOWDOWN WITH PERFECT SCORES
After only a few months, Amazon discontinued the Blue Jay program while continuing to reuse parts of its underlying robotics technology. (Amazon)
The gap between AI hype and hardware reality
Blue Jay highlights a growing divide in the tech world. AI in software is moving at lightning speed. Chatbots, image tools and predictive systems evolve weekly.
Hardware is different. Robots must deal with gravity, friction, heat and unpredictable human environments. Every mistake has a physical cost.
Amazon’s course correction shows that even tech giants hit limits when translating AI breakthroughs into moving metal. That does not mean automation is slowing down. It means the path is bumpier than the headlines suggest.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.
Kurt’s key takeaways
Amazon shelving Blue Jay is not a retreat from robotics. It is a recalibration. The company is betting that modular, flexible systems will win over massive, tightly integrated machines. That shift could define the next era of e-commerce logistics. For you, the promise remains the same: faster delivery, better availability and more local convenience. But behind that promise is a complicated dance between AI ambition and real-world constraints.
If even Amazon struggles to make advanced robots work at scale, how much of the AI revolution is still more vision than reality? Let us know by writing to us at Cyberguy.com.