
Technology

FCC names its first-ever AI scammer in threat alert



The first artificial intelligence robocall scammer has been officially named by the Federal Communications Commission. But is it too little, too late? 

After all, Royal Tiger has already gotten away with loads of scams that have impacted millions of Americans.


Let’s talk about what these headlines mean for AI scammers in general, what you still need to be on the lookout for and how to protect yourself from these sophisticated scams going forward.


A woman receiving a robocall. (Kurt “CyberGuy” Knutsson)

So, who is the Royal Tiger cyber gang?

Royal Tiger is the first robocall gang named by the Federal Communications Commission (FCC). The group is known for sophisticated "robocall" scams that use techniques such as AI voice cloning to impersonate staff from government agencies, banks and utilities.

The crew consists of individuals and voice service providers operating from various countries, including India, the U.K., the United Arab Emirates and the U.S. The group is led by Prince Jashvantlal Anand, who uses the alias "Frank Murphy," and his associate Kaushal Bhavsar. Anand has served as "CEO" of "U.S.-based companies" like Illum Telecommunication and PZ Telecommunication.



What are robocalls and AI scams?

Robocalls and AI scams involve using automated calling systems and artificial intelligence to deceive and defraud individuals. Scammers can pull this off in several ways, but groups like Royal Tiger now rely on AI voice cloning to create realistic-sounding voices that impersonate legitimate entities such as government agencies, banks and utility companies.

Generally, these scams use trick scenarios, such as calls about credit card interest rate reductions or fake purchase authorizations, to coax financial and other sensitive data out of the people they target.

With phone spoofing techniques, scammers can even make your caller ID display a call from one of these agencies, making the scam look more legitimate.


A man frustrated by a robocall. (Kurt “CyberGuy” Knutsson)



Is the Federal Communications Commission doing anything about it?

The first step toward spreading awareness of these scams is to publicly name and shame the groups behind them. That's what the FCC is attempting to do with Royal Tiger, in the hope that detailing its operations will encourage international action against the scammers. Meanwhile, in the U.S., the FCC aims to disrupt the group's activities and hold it accountable by sending cease-and-desist letters to companies involved in the operation, such as Illum Telecommunication, PZ Telecommunication and One Eye.

In some cases, the FCC has actually required downstream providers to block traffic from these companies. Additionally, the FCC has classified Royal Tiger and its entities as a Consumer Communications Information Services Threat (C-CIST), due to the significant danger they pose to consumer trust in communications services.


What experts have to say

Dr. Ilia Kolochenko, CEO at ImmuniWeb and Adjunct Professor of Cybersecurity at Capital Technology University, commented:

“In 2024, we will probably see a surge of computer-enabled fraud and crimes — which should, however, be distinguished from pure cybercrime — propelled by the ballooning misuse of freely available Generative AI (GenAI) tools and online services. When combined with well-thought-out social engineering campaigns, GenAI can cause unprecedented financial damage in mass-scale phishing or fraud campaigns. For instance, elderly people and other socially vulnerable groups may be perfidiously tricked into paying ‘fines’ for speeding or petty offences that they have never committed.


“Well-prepared fake calls nefariously exploit people’s respect of law enforcement and government, for instance, calling on behalf of the local police or the FBI, citing numerous laws and regulations with some legalese to intellectually disarm and psychologically paralyse their victims. With VoIP, phone numbers can be easily spoofed, so many gangs utilize real phone numbers of law enforcement agencies to increase authenticity of their calls.

“Then the victim may be offered a ‘big favour’ (allegedly available only to first-time offenders) to pay the fine online or even by sharing their credit card details via phone — instead of traveling to the police station or local court. Sadly, most victims will readily pay. Worse, quite some will keep the event confidential, truly thinking that they did something bad and were lucky to avoid harsher penalties.”


How to take protection into your own hands

While it’s great news that the FCC has taken these measures thus far, groups like Royal Tiger are generally able to move quickly and stay one step ahead, redefining their tactics and becoming more sophisticated. Here are some tips to take matters into your own hands and protect yourself:

Be skeptical of unsolicited calls: Be cautious when receiving unsolicited calls, especially those that request personal information or offer services that seem too good to be true.


Use call-blocking services: Many phone providers offer services to block or screen unwanted calls. Utilize these features to reduce the number of robocalls you receive.

Verify caller identity: If you receive a call from someone claiming to be from a government agency, bank or utility company, hang up and call the official number of the organization to verify the authenticity of the call.

Avoid sharing personal information: Do not share sensitive information such as Social Security numbers, bank account details or credit card numbers over the phone unless you are certain of the caller’s identity.

Report suspicious calls: Report any suspicious calls to the FCC or the Federal Trade Commission (FTC). Your reports can help these agencies track and take action against scam operations.

Use data removal services: Consider using data removal services to minimize the amount of personal information available online, making it harder for scammers to obtain. While no service can promise to remove all your data from the internet, a removal service is great if you want to automate the process of continuously monitoring and removing your information from hundreds of sites over a longer period of time. Check out my top picks for personal data removal services here.


Kurt’s key takeaways

While the FCC naming Royal Tiger the first official AI robocall scammer gang is a positive step, sophisticated AI-powered scams exploiting voice cloning and caller ID spoofing will likely surge. We must all remain extremely vigilant — verify any unsolicited calls demanding personal information or payment through official channels, never share sensitive data over the phone and report suspected scams. A coordinated effort from the government, companies, and individuals is crucial to combating these evolving AI-enabled fraud tactics effectively.

What role should AI companies play in preventing their technologies from being misused for nefarious purposes like voice cloning scams? Let us know by writing us at Cyberguy.com/Contact

For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter

Ask Kurt a question or let us know what stories you’d like us to cover.

Follow Kurt on his social channels:


Answers to the most-asked CyberGuy questions:

Copyright 2024 CyberGuy.com. All rights reserved.


OpenAI’s former chief scientist is starting a new AI company


Ilya Sutskever, OpenAI’s co-founder and former chief scientist, is starting a new AI company focused on safety. In a post on Wednesday, Sutskever revealed Safe Superintelligence Inc. (SSI), a startup with “one goal and one product”: creating a safe and powerful AI system.

The announcement describes SSI as a startup that “approaches safety and capabilities in tandem,” letting the company quickly advance its AI system while still prioritizing safety. It also calls out the external pressure AI teams at companies like OpenAI, Google, and Microsoft often face, saying the company’s “singular focus” allows it to avoid “distraction by management overhead or product cycles.”

“Our business model means safety, security, and progress are all insulated from short-term commercial pressures,” the announcement reads. “This way, we can scale in peace.” In addition to Sutskever, SSI is co-founded by Daniel Gross, a former AI lead at Apple, and Daniel Levy, who previously worked as a member of technical staff at OpenAI.

As OpenAI pushes forward with partnerships with Apple and Microsoft, we likely won’t see SSI doing that anytime soon. In an interview with Bloomberg, Sutskever said SSI’s first product will be safe superintelligence, and that the company “will not do anything else” until then.



Ready to unleash your inner maverick with thrilling Airwolf hoverbike



Can you imagine soaring through the skies like a modern-day Maverick, leaving the constraints of the road behind? Well, get ready to unleash your inner daredevil because the UDX Airwolf hoverbike could soon make that dream a reality.



Airwolf hoverbike (UDX) (Kurt “CyberGuy” Knutsson)

The Maverick’s ride

The UDX Airwolf hoverbike is no ordinary quadcopter. We’re talking about a 430-hp eVTOL motorcycle-esque vehicle that seats two and features “hummingbird-like” agility. It has four fan units that can tilt independently. With a weight of 639 pounds, the Airwolf promises a 0-60 mph acceleration in just three seconds and a blistering top speed of 142 mph.



The price of admission

To fly the UDX Airwolf, you’ll need a sports pilot license in the U.S., which requires 20 hours of flight training (five of which can be solo) and passing a couple of tests. While not as expensive as a private pilot license, the real barrier to entry is the Airwolf’s price tag of $350,000, a sum that only the well-heeled can afford.

We reached out to UDX, and the company’s CEO, Jiri Madeja, tells us, “Lately, we’ve seen a huge spike in excitement around our Airwolf and other VTOLs, and it’s honestly so rewarding. It’s a dream come true for us to finally have the technology to make these machines a reality. Big thanks to CyberGuy, for getting the word out.”



The reality check

While the dream is tantalizing, the reality is still in development. UDX has built working small- and quarter-scale prototypes incorporating the thrust-vectoring propulsion system. These prototypes demonstrate reasonable stability and agility in flight testing, but a production-ready model is still a few years off.

However, with battery and electric motor technology advancing rapidly, these compact personal eVTOLs are no longer just retro-futurist dreams; they’re already here, albeit expensive and in small numbers. Some are pitched as fun machines while others aspire to be practical transport options for commuting, search and rescue operations, or quick responses to accidents.





Kurt’s key takeaways

While the Airwolf hoverbike may seem like a pipe dream, it represents the cutting edge of personal aviation technology. As battery and electric motor advancements continue, we may see these compact eVTOLs become more accessible and practical. For now, the Airwolf offers a tantalizing glimpse into a future where we can unleash our inner mavericks and take to the skies with the freedom and exhilaration of a fighter pilot.

If the UDX Airwolf hoverbike becomes available in the future, would you dare to take flight on it and experience it firsthand? Why or why not? Let us know by writing us at Cyberguy.com/Contact.



Snapchat AI turns prompts into new lens


Snapchat offered an early look at its upcoming on-device AI model capable of transforming a user’s surroundings with augmented reality (AR). The new model will eventually let creators turn a text prompt into a custom lens — potentially opening the door for some wild looks to try on and send to friends.

A demo clip shows how this might look: a person’s clothing and background transform in real time based on the prompt “50s sci-fi film.” Users will start seeing lenses using this new model in the coming months, while creators can start making lenses with the model by the end of this year, according to TechCrunch.

Additionally, Snapchat is rolling out a suite of new AI tools that could make it easier for creators to build custom AR effects. Some of the tools now available with the latest Lens Studio update include new face effects that let creators write a prompt or upload an image to create a custom lens that completely transforms a user’s face.

The suite also includes a feature, called Immersive ML, that applies a “realistic transformation over the user’s face, body, and surroundings in real time.” Other AI tools coming to Lens Studio allow lens creators to generate 3D assets based on a text or image prompt, create face masks and textures, as well as make 3D character heads that mimic a user’s expression.

This is Snapchat’s “Immersive ML” effect using the prompt “Matisse Style Painting.”
Image: Snapchat