Technology
FBI's new warning about AI-driven scams that are after your cash
The FBI is issuing a warning that criminals are increasingly using generative AI technologies, particularly deepfakes, to exploit unsuspecting individuals. This alert serves as a reminder of the growing sophistication and accessibility of these technologies and the urgent need for vigilance in protecting ourselves from potential scams. Let’s explore what deepfakes are, how they’re being used by criminals and what steps you can take to safeguard your personal information.
The rise of deepfake technology
Deepfakes refer to AI-generated content that can convincingly mimic real people, including their voices, images and videos. Criminals are using these techniques to impersonate individuals, often in crisis situations. For instance, they might generate audio clips that sound like a loved one asking for urgent financial assistance or even create real-time video calls that appear to involve company executives or law enforcement officials. The FBI has identified 17 common techniques used by criminals to create these deceptive materials.
Key tactics used by criminals
Below is the full list of techniques criminals are using to exploit generative AI, particularly deepfakes, for fraudulent activities.
1) Voice cloning: Generating audio clips that mimic the voice of a family member or other trusted individuals to manipulate victims.
2) Real-time video calls: Creating fake video interactions that appear to involve authority figures, such as law enforcement or corporate executives.
3) Social engineering: Utilizing emotional appeals to manipulate victims into revealing personal information or transferring funds.
4) AI-generated text: Crafting realistic written messages for phishing attacks and social engineering schemes, making them appear credible.
5) AI-generated images: Using synthetic images to create believable profiles on social media or fraudulent websites.
6) AI-generated videos: Producing convincing videos that can be used in scams, including investment frauds or impersonation schemes.
7) Creating fake social media profiles: Establishing fraudulent accounts that use AI-generated content to deceive others.
8) Phishing emails: Sending emails that appear legitimate but are crafted using AI to trick recipients into providing sensitive information.
9) Impersonation of public figures: Using deepfake technology to create videos or audio clips that mimic well-known personalities for scams.
10) Fake identification documents: Generating fraudulent IDs, such as driver’s licenses or credentials, for identity fraud and impersonation.
11) Investment fraud schemes: Deploying AI-generated materials to convince victims to invest in non-existent opportunities.
12) Ransom demands: Impersonating loved ones in distress to solicit ransom payments from victims.
13) Manipulating voice recognition systems: Using cloned voices to bypass security measures that rely on voice authentication.
14) Fake charity appeals: Creating deepfake content that solicits donations under false pretenses, often during crises.
15) Business email compromise: Crafting emails that appear to come from executives or trusted contacts to authorize fraudulent transactions.
16) Creating misinformation campaigns: Utilizing deepfake videos as part of broader disinformation efforts, particularly around significant events like elections.
17) Exploiting crisis situations: Generating urgent requests for help or money during emergencies, leveraging emotional manipulation.
These tactics highlight the increasing sophistication of fraud schemes facilitated by generative AI and the importance of vigilance in protecting personal information.
Tips for protecting yourself from deepfakes
Implementing the following strategies can enhance your security and awareness against deepfake-related fraud.
1) Limit your online presence: Reduce the amount of personal information, especially high-quality images and videos, available on social media by adjusting privacy settings.
2) Invest in personal data removal services: The less information is out there, the harder it is for someone to create a deepfake of you. While no service can promise to remove all your data from the internet, a removal service can continuously monitor and automate the removal of your information from hundreds of sites over time. Check out my top picks for data removal services here.
3) Avoid sharing sensitive information: Never disclose personal details or financial information to strangers online or over the phone.
4) Stay vigilant with new connections: Be cautious when accepting new friends or connections on social media; verify their authenticity before engaging.
5) Check privacy settings on social media: Ensure that your profiles are set to private and that you only accept friend requests from trusted individuals. Here’s how to switch any social media accounts, including Facebook, Instagram, Twitter and any others you may use, to private.
6) Use two-factor authentication (2FA): Implement 2FA on your accounts to add an extra layer of security against unauthorized access.
7) Verify callers: If you receive a suspicious call, hang up and independently verify the caller’s identity by contacting their organization through official channels.
8) Watermark your media: When sharing photos or videos online, consider using digital watermarks to deter unauthorized use.
9) Monitor your accounts regularly: Keep an eye on your financial and online accounts for any unusual activity that could indicate fraud.
10) Use strong and unique passwords: Employ different passwords for various accounts to prevent a single breach from compromising multiple services. Consider using a password manager to generate and store complex passwords.
11) Regularly backup your data: Maintain backups of important data to protect against ransomware attacks and ensure recovery in case of data loss.
12) Create a secret verification phrase: Establish a unique word or phrase with family and friends to verify identities during unexpected communications.
13) Be aware of visual imperfections: Look for subtle flaws in images or videos, such as distorted features or unnatural movements, which may indicate manipulation.
14) Listen for anomalies in voice: Pay attention to the tone, pitch and choice of words in audio clips. AI-generated voices may sound unnatural or robotic.
15) Don’t click on links or download attachments from suspicious sources: Be cautious when receiving emails, direct messages, texts, phone calls or other digital communications from an unknown source. This is especially true if the message demands that you act fast, such as by claiming your computer has been hacked or that you have won a prize. Deepfake creators try to manipulate your emotions so that you download malware or share personal information. Always think before you click.
The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices.
16) Be cautious with money transfers: Do not send money, gift cards or cryptocurrencies to people you do not know or have met only online or over the phone.
17) Report suspicious activity: If you suspect that you have been targeted by scammers or have fallen victim to a fraud scheme, report it to the FBI’s Internet Crime Complaint Center.
By following these tips, individuals can better protect themselves from the risks associated with deepfake technology and related scams.
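As a concrete illustration of tip 10, a strong, unique password for each account can be generated with Python’s standard `secrets` module. This is a minimal sketch of the kind of generation a password manager automates; the length, alphabet and site names here are illustrative choices, not part of the FBI alert.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets.choice draws from a cryptographically secure random source,
    # unlike random.choice, whose output is predictable.
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A distinct password per account, so one breach cannot compromise the rest.
passwords = {site: generate_password() for site in ("email", "bank", "social")}
```

A real password manager adds the crucial second half of the job — storing these securely — which a one-off script does not.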
Kurt’s key takeaways
The increasing use of generative AI technologies, particularly deepfakes, by criminals highlights a pressing need for awareness and caution. As the FBI warns, these sophisticated tools enable fraudsters to impersonate individuals convincingly, making scams harder to detect and more believable than ever. It’s crucial for everyone to understand the tactics employed by these criminals and to take proactive steps to protect their personal information. By staying informed about the risks and implementing security measures, such as verifying identities and limiting online exposure, we can better safeguard ourselves against these emerging threats.
In what ways do you think businesses and governments should respond to the growing threat of AI-powered fraud? Let us know by writing us at Cyberguy.com/Contact.
For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter. Ask Kurt a question or let us know what stories you’d like us to cover.
Copyright 2024 CyberGuy.com. All rights reserved.
Las Vegas police release ChatGPT logs from the suspect in the Cybertruck explosion
Las Vegas police confirmed that the suspect, an active-duty soldier in the US Army named Matthew Livelsberger, had a “possible manifesto” saved on his phone, in addition to an email to a podcaster and other letters. They also showed video evidence of him preparing for the explosion by pouring fuel onto the truck while stopped before driving to the hotel. He’d also kept a log of supposed surveillance, although the officials said he did not have a criminal record and was not being surveilled or investigated.
The Las Vegas Metro Police also released several slides showing questions he’d posed to ChatGPT several days before the explosion, asking about explosives, how to detonate them, and how to detonate them with a gunshot, as well as information about where to buy guns, explosive material, and fireworks legally along his route.
Asked about the queries, OpenAI spokesperson Liz Bourgeois said:
We are saddened by this incident and committed to seeing AI tools used responsibly. Our models are designed to refuse harmful instructions and minimize harmful content. In this case, ChatGPT responded with information already publicly available on the internet and provided warnings against harmful or illegal activities. We’re working with law enforcement to support their investigation.
The officials say they are still examining possible sources for the explosion, described as a deflagration that traveled rather slowly as opposed to a high explosives detonation that would’ve moved faster and caused more damage. While investigators say they haven’t ruled out other possibilities like an electrical short yet, an explanation that matches some of the queries and the available evidence is that the muzzle flash of a gunshot ignited fuel vapor/fireworks fuses inside the truck, which then caused a larger explosion of fireworks and other explosive materials.
Trying the queries in ChatGPT today still works; the information he requested doesn’t appear to be restricted and could be obtained by most search methods. Still, the suspect’s use of a generative AI tool and the investigators’ ability to track those requests and present them as evidence take questions about AI chatbot guardrails, safety, and privacy out of the hypothetical realm and into our reality.
China rolls out its crime-fighting ball to chase down criminals
China’s latest innovation in policing technology has rolled onto the scene, quite literally.
The Rotunbot RT-G, developed by Logon Technology, is a spherical robot that’s turning heads and chasing down criminals at impressive speeds.
This 276-pound machine is pushing the boundaries of what’s possible in law enforcement robotics. Let’s break down what this crime-fighting machine is all about.
A versatile crime-fighting machine
The RT-G is not your average police assistant. This self-balancing sphere can reach speeds of up to 22 mph on both land and water, making it a formidable pursuer of suspects. Its amphibious capabilities allow it to navigate through mud, slush and even dive into rivers, emerging unscathed on the other side.
What sets the RT-G apart is its rapid acceleration. It can hit speeds of about 19 mph in 2.5 seconds, giving it a significant advantage in pursuit scenarios. This quick burst of speed, combined with its ability to handle drops from knee-high ledges and potentially roll down staircases, makes it a persistent and resilient force in the field.
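For scale, the quoted sprint figures work out to roughly 3.4 m/s², about a third of a g. A quick back-of-the-envelope check, using only the numbers cited above:

```python
MPH_TO_MS = 0.44704  # metres per second per mile per hour

speed_ms = 19 * MPH_TO_MS   # 19 mph is about 8.5 m/s
accel = speed_ms / 2.5      # reached in 2.5 s: about 3.4 m/s^2
accel_in_g = accel / 9.81   # roughly 0.35 g
```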
Advanced technology at its core
The Rotunbot RT-G is equipped with an array of advanced sensors and technologies that make it a sophisticated piece of equipment. These include GPS for precise positioning, multiple cameras and ultrasonic sensors for environmental awareness, obstacle avoidance capabilities and threat and target tracking systems. These features enable the RT-G to navigate complex environments while avoiding collisions with people and objects. Additionally, the robot uses gyroscopic self-stabilization to maintain its balance and keep its wide contact patch firmly on the ground, ensuring smooth and quiet operation.
Non-lethal arsenal
For law enforcement purposes, the RT-G comes equipped with a comprehensive range of non-lethal tools designed to manage diverse tactical scenarios. These tools include tear gas dispensers, smoke bomb launchers, high-decibel horns, acoustic crowd dispersal devices and net shooters capable of close-range suspect apprehension.
This sophisticated arsenal allows the robot to handle various situations, from crowd control to individual suspect takedowns, without resorting to lethal force, providing law enforcement with a versatile and humane technological solution.
Real-world application
The Rotunbot RT-G is not just a concept; it’s already being put to the test. In Wenzhou, a city in China’s Zhejiang province, these robotic spheres are assisting police patrols in commercial zones. This real-world trial is providing valuable insights into the effectiveness and practicality of the RT-G in actual law enforcement scenarios.
However, despite its impressive capabilities, the RT-G is not without its limitations. Video footage shows that the robot can be somewhat unstable when making turns, and its pursuit capabilities may be easily thwarted by a flight of stairs. These challenges highlight the ongoing need for development and improvement in robotic law enforcement technology.
Kurt’s key takeaways
The Rotunbot RT-G’s amphibious nature, high-speed capabilities and non-lethal arsenal make it a versatile tool for police forces. However, like any new technology, it raises questions about privacy, surveillance and the increasing automation of policing. The RT-G may be rolling into the future of law enforcement, but we must carefully consider the implications of deploying such advanced robotic systems in our communities.
How do you feel about the increasing use of robotic technology in law enforcement, and what potential risks or benefits do you see emerging from these technological advancements? Let us know what you think by writing us at Cyberguy.com/Contact.
Copyright 2025 CyberGuy.com. All rights reserved.
The maker of the electric USPS truck is also building garbage robots and EV firefighters
Oshkosh, the 108-year-old American maker of military vehicles and other specialty equipment, appeared at CES in Las Vegas for the first time to announce a raft of new commercial electric vehicles, including plug-in fire engines and garbage trucks as well as AI-powered technology that it says will make these vehicles safer and more convenient.
You may know Oshkosh, which has a lot of credibility as a defense contractor, from its contract with the United States Postal Service to build the first all-electric postal truck. Last year, The Washington Post reported that the project was mired in delays, with only 93 trucks delivered to the USPS as of November.
But despite these delays, Oshkosh thinks it’s well positioned to help build these next-generation specialty vehicles and says it plans to eventually deliver 165,000 vehicles to USPS, up to 70 percent of which will be electric. The company also announced plans to build a variety of electric and autonomous vehicles for airports, including a robot cargo handler and EVs for construction sites.
But the “neighborhood” EVs, as Oshkosh calls them, stand a chance of being the most visible and impactful — if the company can get them built.
The first vehicle to be announced today is the McNeilus Volterra ZFL, an all-electric front-loader garbage truck with an AI-powered detection system for refuse bins. The sensors detect the location of the garbage cans and communicate with the truck to ensure it’s positioned accurately. Then a robotic arm is deployed to snag the bin and lift it for trash disposal. Oshkosh is also rolling out a new AI-powered, vision-based contamination system to identify and remove items that don’t belong in the waste or recycling streams.
Speaking of robots, Oshkosh has introduced HARR-E, an autonomous electric refuse collection robot that purports to offer on-demand trash and recycling pickup via a smartphone app or virtual home assistant like Amazon Alexa.
The robot “makes trash removal as easy as ordering an Uber or a Lyft right from your home,” said Jay Iyengar, Oshkosh’s chief technology officer. HARR-E deploys from a central refuse collection area within the neighborhood and navigates to the resident’s home autonomously for collection before returning to the base to unload and recharge.
For firefighters, Oshkosh is introducing a new Collision Avoidance Mitigation System, or CAMS, that aims to tell emergency workers when it’s safe to get out of their vehicles. According to Iyengar, “CAMS uses an advanced camera and radar sensor suite with AI to accurately detect the trajectory, the speed and proximity of oncoming vehicles relative to a parked emergency vehicle. CAMS can provide up to two to three seconds of advance notice of an impending collision, giving an extra layer of safety during roadside operations.”
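CAMS itself is proprietary, but the core idea Iyengar describes — predicting whether an approaching vehicle will reach a parked one within a warning window — can be sketched as a simple time-to-collision calculation. The function names and three-second threshold below are illustrative assumptions, not Oshkosh’s implementation.

```python
def time_to_collision(distance_m: float, closing_speed_ms: float) -> float:
    """Seconds until an approaching vehicle reaches the parked one.

    distance_m: current gap between the vehicles, in metres.
    closing_speed_ms: rate at which the gap is shrinking, in m/s
                      (zero or negative means no collision course).
    """
    if closing_speed_ms <= 0:
        return float("inf")
    return distance_m / closing_speed_ms

def should_alert(distance_m: float, closing_speed_ms: float,
                 warning_window_s: float = 3.0) -> bool:
    # Alert when impact is predicted within the warning window,
    # matching the "two to three seconds of notice" described above.
    return time_to_collision(distance_m, closing_speed_ms) <= warning_window_s

# Example: a car 50 m away closing at 25 m/s (~56 mph) is 2 s from impact.
```

The hard part in practice is not this division but estimating distance and closing speed reliably from camera and radar data, which is where the “AI” in such systems earns its keep.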
It’s an ambitious suite of technologies. Oshkosh says it’s up to the task. But political headwinds, including President-elect Donald Trump’s promises to eliminate billions of dollars in EV incentives, could make success more difficult.
Despite this, Oshkosh executives tried to project a sunny outlook. “The reviews on the first vehicle are fantastic,” Oshkosh CEO John Pfeifer said of the new USPS delivery truck. “It’s been written up in a lot of publications about the postal carrier’s responses to the first vehicles. But it’s going exceptionally well.”