Technology
Siberia's 'Gateway to Hell' crater fuels new fears
In the remote reaches of Siberia, a colossal scar on the Earth’s surface is expanding at a breathtaking pace, swallowing up the frozen landscape and potentially unleashing ancient threats. The Batagaika crater, aptly nicknamed the “Gateway to Hell,” is not just a geological curiosity, it’s a stark reminder of the rapid changes our planet is undergoing.
4 DAYS LEFT! I’M GIVING AWAY A $500 GIFT CARD FOR THE HOLIDAYS (Ends 12/3/24 3 pm ET)
Enter by signing up for my free newsletter.
Batagaika crater (Murton et al./Permafrost Periglacial Processes) (Kurt “CyberGuy” Knutsson)
A monstrous sinkhole in the permafrost
Imagine a gash in the Earth so large you could fit several football stadiums inside it. That’s the Batagaika crater for you. This massive thermokarst depression – a fancy term for a giant permafrost-thaw sinkhole – is growing at an astonishing rate of 35 million cubic feet each year. To put that into perspective, it’s like carving out a small town’s worth of earth annually. Currently stretching about 0.6 miles long and 0.5 miles wide at its widest point, this behemoth shows no signs of slowing down. In fact, it’s speeding up, driven by a vicious cycle of warming temperatures and melting ice. This study was published in the journal Geomorphology.
Batagaika crater (Earth Resources Observation and Science Center) (Kurt “CyberGuy” Knutsson)
IS THIS SPACE CAPSULE HOW WE WILL LIVE, WORK IN ORBIT IN FUTURE?
The permafrost paradox
Despite its name, permafrost isn’t actually permanent. It’s ground that’s remained at or below freezing for at least two years. When this frozen soil thaws, it can’t support the weight above it, leading to collapse and the formation of these massive “slumps.” The Batagaika crater is a prime example of this process in overdrive. As the permafrost melts, it exposes more soil to sunlight, which then melts more permafrost. It’s a feedback loop that’s difficult to break, especially in our warming world.
KURT’S BEST NEW BLACK FRIDAY DEALS
Batagaika crater (USGS) (Kurt “CyberGuy” Knutsson)
RACE TO FLOAT TOURISTS TO EDGE OF SPACE HEATING UP
Unlocking ancient secrets – and dangers
While the sheer size of the Batagaika crater is impressive, what’s truly mind-boggling is its depth, both physical and temporal. The steep walls of this mega-slump reveal permafrost layers estimated to be 650,000 years old. That’s older than our species. But with ancient ice comes ancient dangers. Scientists have already revived a 48,500-year-old “zombie virus” from Arctic permafrost, and there’s concern about what other long-dormant pathogens might be awakening. It’s not just a plot from some sci-fi movie anymore. It’s a real consideration for modern science and medicine.
Batagaika crater over time (Murton et al./Permafrost Periglacial Processes) (Kurt “CyberGuy” Knutsson)
CALIFORNIA’S FIRST ELECTRIC TRAIN COULD BE WHAT’S COMING TO YOUR CITY
A carbon time bomb
The Batagaika crater isn’t just releasing potential pathogens. It’s also unleashing a significant amount of carbon into the atmosphere. According to recent studies, this single mega-slump is responsible for releasing 4,000 to 5,000 tons of organic carbon every year. That’s equivalent to the annual emissions of about 1,000 cars. This release of carbon, previously locked away in the frozen ground, further contributes to global warming, potentially accelerating the very process that created the crater in the first place.
SUBSCRIBE TO KURT’S YOUTUBE CHANNEL FOR QUICK VIDEO TIPS ON HOW TO WORK ALL OF YOUR TECH DEVICES
Batagaika crater (USGS) (Kurt “CyberGuy” Knutsson)
Kurt’s key takeaways
The Batagaika crater, while extreme, is not unique. It represents a process happening across the Arctic and sub-Arctic regions. As our planet continues to warm, more of these massive permafrost thaw features are likely to appear. While some might see the crater as a tourist attraction – and indeed it has become one – it’s crucial to recognize it as a warning sign. The “Gateway to Hell” is more than just a catchy nickname; it’s a portal into a possible future where rapid environmental changes reshape our world in ways we’re only beginning to understand. The question remains: Will we heed the warning signs and take action, or will we continue to watch as more gateways open across our warming world?
What are your thoughts on the potential impacts of ancient pathogens being released from melting permafrost, and how do you think we should address the challenges posed by climate change? Let us know by writing us at Cyberguy.com/Contact.
For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter.
Ask Kurt a question or let us know what stories you’d like us to cover. Follow Kurt on his social channels:
Answers to the most asked CyberGuy questions:
New from Kurt:
Try CyberGuy’s new games (crosswords, word searches, trivia and more!)
Enter CyberGuy’s $500 Holiday Gift Card Sweepstakes KURT’S HOLIDAY GIFT GUIDES
Deals: Unbeatable Best Black Friday deals | Laptops | Desktops | Printers
Best gifts for: Men | Women | Kids | Teens | Pet lovers
For those who love: Cooking | Coffee | Tools | Travel | Wine
Devices: Laptops | Desktops | Printers | Monitors | Earbuds | Headphones | Kindles | Soundbars | Vacuums | Surge strips and protectors Accessories: Car | Kitchen | Laptop | Keyboards | Phone | Travel | Keep it cozy
Can’t go wrong with these: Gift cards | Money-saving apps | Amazon Black Friday insider tips
Copyright 2024 CyberGuy.com. All rights reserved.
Technology
Robots learn 1,000 tasks in one day from a single demo
NEWYou can now listen to Fox News articles!
Most robot headlines follow a familiar script: a machine masters one narrow trick in a controlled lab, then comes the bold promise that everything is about to change. I usually tune those stories out. We have heard about robots taking over since science fiction began, yet real-life robots still struggle with basic flexibility. This time felt different.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
ELON MUSK TEASES A FUTURE RUN BY ROBOTS
Researchers highlight the milestone that shows how a robot learned 1,000 real-world tasks in just one day. (Science Robotics)
How robots learned 1,000 physical tasks in one day
A new report published in Science Robotics caught our attention because the results feel genuinely meaningful, impressive and a little unsettling in the best way. The research comes from a team of academic scientists working in robotics and artificial intelligence, and it tackles one of the field’s biggest limitations.
The researchers taught a robot to learn 1,000 different physical tasks in a single day using just one demonstration per task. These were not small variations of the same movement. The tasks included placing, folding, inserting, gripping and manipulating everyday objects in the real world. For robotics, that is a big deal.
Why robots have always been slow learners
Until now, teaching robots physical tasks has been painfully inefficient. Even simple actions often require hundreds or thousands of demonstrations. Engineers must collect massive datasets and fine-tune systems behind the scenes. That is why most factory robots repeat one motion endlessly and fail as soon as conditions change. Humans learn differently. If someone shows you how to do something once or twice, you can usually figure it out. That gap between human learning and robot learning has held robotics back for decades. This research aims to close that gap.
THE NEW ROBOT THAT COULD MAKE CHORES A THING OF THE PAST
The research team behind the study focuses on teaching robots to learn physical tasks faster and with less data. (Science Robotics)
How the robot learned 1,000 tasks so fast
The breakthrough comes from a smarter way of teaching robots to learn from demonstrations. Instead of memorizing entire movements, the system breaks tasks into simpler phases. One phase focuses on aligning with the object, and the other handles the interaction itself. This method relies on artificial intelligence, specifically an AI technique called imitation learning that allows robots to learn physical tasks from human demonstrations.
The robot then reuses knowledge from previous tasks and applies it to new ones. This retrieval-based approach allows the system to generalize rather than start from scratch each time. Using this method, called Multi-Task Trajectory Transfer, the researchers trained a real robot arm on 1,000 distinct everyday tasks in under 24 hours of human demonstration time.
Importantly, this was not done in a simulation. It happened in the real world, with real objects, real mistakes and real constraints. That detail matters.
Why this research feels different
Many robotics papers look impressive on paper but fall apart outside perfect lab conditions. This one stands out because it tested the system through thousands of real-world rollouts. The robot also showed it could handle new object instances it had never seen before. That ability to generalize is what robots have been missing. It is the difference between a machine that repeats and one that adapts.
AI VIDEO TECH FAST-TRACKS HUMANOID ROBOT TRAINING
The robot arm practices everyday movements like gripping, folding and placing objects using a single human demonstration. (Science Robotics)
A long-standing robotics problem may finally be cracking
This research addresses one of the biggest bottlenecks in robotics: inefficient learning from demonstrations. By decomposing tasks and reusing knowledge, the system achieved an order of magnitude improvement in data efficiency compared to traditional approaches. That kind of leap rarely happens overnight. It suggests that the robot-filled future we have talked about for years may be nearer than it looked even a few years ago.
What this means for you
Faster learning changes everything. If robots need less data and less programming, they become cheaper and more flexible. That opens the door to robots working outside tightly controlled environments.
In the long run, this could enable home robots to learn new tasks from simple demonstrations instead of specialist code. It also has major implications for healthcare, logistics and manufacturing.
More broadly, it signals a shift in artificial intelligence. We are moving away from flashy tricks and toward systems that learn in more human-like ways. Not smarter than people. Just closer to how we actually operate day to day.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com
CLICK HERE TO DOWNLOAD THE FOX NEWS APP
Kurt’s key takeaways
Robots learning 1,000 tasks in a day does not mean your house will have a humanoid helper tomorrow. Still, it represents real progress on a problem that has limited robotics for decades. When machines start learning more like humans, the conversation changes. The question shifts from what robots can repeat to what they can adapt to next. That shift is worth paying attention to.
If robots can now learn like us, what tasks would you actually trust one to handle in your own life? Let us know by writing to us at Cyberguy.com
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
Copyright 2025 CyberGuy.com. All rights reserved.
Technology
Plaud updates the NotePin with a button
Plaud has updated its compact NotePin AI recorder. The new NotePin S is almost identical to the original, except for one major difference: a button. It’s joined by a new Plaud Desktop app for recording audio in online meetings, which is free to owners of any Plaud Note or NotePin.
The NotePin S has the same FitBit-esque design as the 2024 original and ships with a lanyard, wristband, clip, and magnetic pin, so you can wear it just about any way you please — now all included in the box, whereas before the lanyard and wristband were sold separately.
It’s about the same size as the NotePin, comes in the same colors (black, purple, or silver), offers similar battery life, and still supports Apple Find My. Like the NotePin, it records audio and generates transcriptions and summaries, whether those are meeting notes, action points, or reminders.
But now it has a button. Whereas the first NotePin used haptic controls, relying on a long squeeze to start recording, with a short buzz to let you know it worked, the S switches to something simpler. A long press of the button starts recording, a short tap adds highlight markers. Plaud’s explanation for the change is simple: buttons are less ambiguous, so you’ll always know you’ve successfully pressed it and started recording, whereas original NotePin users complained they sometimes failed to record because they hadn’t squeezed just right.
AI recorders like this live or die by ease of use, so removing a little friction gives Plaud better odds of survival.
Alongside the NotePin S, Plaud is launching a new Mac and PC application for recording the audio from online meetings. Plaud Desktop runs in the background and activates whenever it detects calls from apps including Zoom, Meet, and Teams, recording both system audio and from your microphone. You can set it to either record meetings automatically or require manual activation, and unlike some alternatives it doesn’t create a bot that joins the call with you.
Recordings and notes are synced with those from Plaud’s line of hardware recorders, with the same models used for transcription and generation, creating a “seamless” library of audio from your meetings, both online and off.
Plaud Desktop is available now and is free to anyone who already owns a Plaud Note or NotePin device. The new NotePin S is also available today, for $179 — $20 more than the original, which Plaud says will now be phased out.
Technology
OpenAI admits AI browsers face unsolvable prompt attacks
NEWYou can now listen to Fox News articles!
Cybercriminals don’t always need malware or exploits to break into systems anymore. Sometimes, they just need the right words in the right place. OpenAI is now openly acknowledging that reality. The company says prompt injection attacks against artificial intelligence (AI)-powered browsers are not a bug that can be fully patched, but a long-term risk that comes with letting AI agents roam the open web. This raises uncomfortable questions about how safe these tools really are, especially as they gain more autonomy and access to your data.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
NEW MALWARE CAN READ YOUR CHATS AND STEAL YOUR MONEY
AI-powered browsers can read and act on web content, which also makes them vulnerable to hidden instructions attackers can slip into pages or documents. (Kurt “CyberGuy” Knutsson)
Why prompt injection isn’t going away
In a recent blog post, OpenAI admitted that prompt injection attacks are unlikely to ever be completely eliminated. Prompt injection works by hiding instructions inside web pages, documents or emails in ways that humans don’t notice, but AI agents do. Once the AI reads that content, it can be tricked into following malicious instructions.
OpenAI compared this problem to scams and social engineering. You can reduce them, but you can’t make them disappear. The company also acknowledged that “agent mode” in its ChatGPT Atlas browser increases risk because it expands the attack surface. The more an AI can do on your behalf, the more damage it can cause when something goes wrong.
OpenAI launched the ChatGPT Atlas browser in October, and security researchers immediately started testing its limits. Within hours, demos appeared showing that a few carefully placed words inside a Google Doc could influence how the browser behaved. That same day, Brave published its own warning, explaining that indirect prompt injection is a structural problem for AI-powered browsers, including tools like Perplexity’s Comet.
This isn’t just OpenAI’s problem. Earlier this month, the National Cyber Security Centre in the U.K. warned that prompt injection attacks against generative AI systems may never be fully mitigated.
FAKE AI CHAT RESULTS ARE SPREADING DANGEROUS MAC MALWARE
Prompt injection attacks exploit trust at scale, allowing malicious instructions to influence what an AI agent does without the user ever seeing it. (Kurt “CyberGuy” Knutsson)
The risk trade-off with AI browsers
OpenAI says it views prompt injection as a long-term security challenge that requires constant pressure, not a one-time fix. Its approach relies on faster patch cycles, continuous testing, and layered defenses. That puts it broadly in line with rivals like Anthropic and Google, which have both argued that agentic systems need architectural controls and ongoing stress testing.
Where OpenAI is taking a different approach is with something it calls an “LLM-based automated attacker.” In simple terms, OpenAI trained an AI to act like a hacker. Using reinforcement learning, this attacker bot looks for ways to sneak malicious instructions into an AI agent’s workflow.
The bot runs attacks in simulation first. It predicts how the target AI would reason, what steps it would take and where it might fail. Based on that feedback, it refines the attack and tries again. Because this system has insight into the AI’s internal decision-making, OpenAI believes it can surface weaknesses faster than real-world attackers.
Even with these defenses, AI browsers aren’t safe. They combine two things attackers love: autonomy and access. Unlike regular browsers, they don’t just display information, but also read emails, scan documents, click links and take actions on your behalf. That means a single malicious prompt hidden in a webpage, document or message can influence what the AI does without you ever seeing it. Even when safeguards are in place, these agents operate by trusting content at scale, and that trust can be manipulated.
THIRD-PARTY BREACH EXPOSES CHATGPT ACCOUNT DETAILS
As AI browsers gain more autonomy and access to personal data, limiting permissions and keeping human confirmation in the loop becomes critical for safety. (Kurt “CyberGuy” Knutsson)
7 steps you can take to reduce risk with AI browsers
You may not be able to eliminate prompt injection attacks, but you can significantly limit their impact by changing how you use AI tools.
1) Limit what the AI browser can access
Only give an AI browser access to what it absolutely needs. Avoid connecting your primary email account, cloud storage or payment methods unless there’s a clear reason. The more data an AI can see, the more valuable it becomes to attackers. Limiting access reduces the blast radius if something goes wrong.
2) Require confirmation for every sensitive action
Never allow an AI browser to send emails, make purchases or modify account settings without asking you first. Confirmation breaks long attack chains and gives you a moment to spot suspicious behavior. Many prompt injection attacks rely on the AI acting quietly in the background without user review.
3) Use a password manager for all accounts
A password manager ensures every account has a unique, strong password. If an AI browser or malicious page leaks one credential, attackers can’t reuse it elsewhere. Many password managers also refuse to autofill on unfamiliar or suspicious sites, which can alert you that something isn’t right before you manually enter anything.
Next, see if your email has been exposed in past breaches. Our #1 password manager (see Cyberguy.com) pick includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.
Check out the best expert-reviewed password managers of 2025 at Cyberguy.com
4) Run strong antivirus software on your device
Even if an attack starts inside the browser, antivirus software can still detect suspicious scripts, unauthorized system changes or malicious network activity. Strong antivirus software focuses on behavior, not just files, which is critical when dealing with AI-driven or script-based attacks.
The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com
5) Avoid broad or open-ended instructions
Telling an AI browser to “handle whatever is needed” gives attackers room to manipulate it through hidden prompts. Be specific about what the AI is allowed to do and what it should never do. Narrow instructions make it harder for malicious content to influence the agent.
6) Be careful with AI summaries and automated scans
When an AI browser scans emails, documents or web pages for you, remember that hidden instructions can live inside that content. Treat AI-generated actions as drafts or suggestions, not final decisions. Review anything the AI plans to act on before approving it.
7) Keep your browser, AI tools and operating system updated
Security fixes for AI browsers evolve quickly as new attack techniques emerge. Delaying updates leaves known weaknesses open longer than necessary. Turning on automatic updates ensures you get protection as soon as they’re available, even if you miss the announcement.
CLICK HERE TO DOWNLOAD THE FOX NEWS APP
Kurt’s key takeaway
There’s been a meteoric rise in AI browsers. We’re now seeing them from major tech companies, including OpenAI’s Atlas, The Browser Company’s Dia, and Perplexity’s Comet. Even existing browsers like Chrome and Edge are pushing hard to add AI and agentic features into their current infrastructure. While these browsers can be useful, the technology is still early. It’s best not to fall for the hype and to wait for it to mature.
Do you think AI browsers are worth the risk today, or are they moving faster than security can keep up? Let us know by writing to us at Cyberguy.com
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
Copyright 2025 CyberGuy.com. All rights reserved.
-
World1 week agoHamas builds new terror regime in Gaza, recruiting teens amid problematic election
-
Indianapolis, IN1 week agoIndianapolis Colts playoffs: Updated elimination scenario, AFC standings, playoff picture for Week 17
-
Business1 week agoGoogle is at last letting users swap out embarrassing Gmail addresses without losing their data
-
Southeast1 week agoTwo attorneys vanish during Florida fishing trip as ‘heartbroken’ wife pleads for help finding them
-
Politics1 week agoMost shocking examples of Chinese espionage uncovered by the US this year: ‘Just the tip of the iceberg’
-
News1 week agoRoads could remain slick, icy Saturday morning in Philadelphia area, tracking another storm on the way
-
World1 week agoPodcast: The 2025 EU-US relationship explained simply
-
News1 week agoMarijuana rescheduling would bring some immediate changes, but others will take time