
Technology

Suno is a music copyright nightmare

AI music platform Suno’s policy is that it does not permit the use of copyrighted material. You can upload your own tracks to remix or set your original lyrics to AI-generated music. But, it’s supposed to recognize and stop you from using other people’s songs and lyrics. Now, no system is perfect, but it turns out that Suno’s copyright filters are incredibly easy to fool.

With minimal effort and some free software, Suno will spit out AI-generated imitations of popular songs like Beyoncé’s “Freedom,” Black Sabbath’s “Paranoid,” and Aqua’s “Barbie Girl” that are alarmingly close to the original. Most people will likely be able to tell the difference, but some could be mistaken for alternate takes or B-sides at a casual listen. What’s more, it’s possible someone could monetize these uncanny valley covers by exporting them and uploading them to streaming services. Suno declined to comment for this story.

Making these covers requires using Suno Studio, available on the company’s $24-a-month Premier Plan. Rather than prompting a whole song with text, Suno Studio lets you upload a track to edit or cover. It’s likely to catch and reject a well-known hit with no tweaks. But using a basic free tool like Audacity to slow down a track to half-speed or speed it up to twice normal will often bypass the filter, and adding a burst of white noise to the start and end seems to basically guarantee success. You can restore the original speed and cut the white noise in Suno Studio, and the copyrighted song becomes the seed for new AI music.
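The manipulations described above are trivial to automate. As a rough illustration of how little processing is involved (this is not Suno's or Audacity's actual code, just a sketch of the same speed-shift-plus-noise idea using Python's standard `wave` module, and it assumes 16-bit WAV input):

```python
import random
import struct
import wave

def disguise_track(in_path, out_path, speed=0.5, noise_ms=500):
    """Slow a 16-bit WAV to half speed and bookend it with white noise,
    mimicking the simple Audacity tweaks described above (illustration only)."""
    with wave.open(in_path, "rb") as src:
        params = src.getparams()
        frames = src.readframes(params.nframes)
    assert params.sampwidth == 2, "sketch assumes 16-bit samples"

    new_rate = int(params.framerate * speed)  # a lower rate means slower playback
    n = int(new_rate * noise_ms / 1000) * params.nchannels
    noise = b"".join(
        struct.pack("<h", random.randint(-8000, 8000)) for _ in range(n)
    )

    with wave.open(out_path, "wb") as dst:
        dst.setnchannels(params.nchannels)
        dst.setsampwidth(params.sampwidth)
        dst.setframerate(new_rate)
        dst.writeframes(noise + frames + noise)
```

Restoring the original speed and trimming the noise afterward is equally trivial, which is why a filter keyed only to the uploaded waveform is so easy to defeat.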

If you generate a cover of the imported audio without any style transfers, Suno basically spits out the original instrumental arrangement with very minimal tweaks to the sound palette if you’re using model 4.5 or 4.5+. Model v5 is a bit more aggressive in taking liberties with the source material, adding chugging guitar and galloping piano to “Freedom” and turning the Dead Kennedys’ “California Über Alles” into a fiddle-driven jig.

Suno lets you add vocals by generating lyrics or typing words into a box, and once again, it’s supposed to block anything copyrighted. If you copy and paste the official lyrics for a song from Genius, Suno will flag them and spit out gibberish vocals. But extremely minor changes can bypass this filter as well.

I was able to trick Suno Studio by tweaking the spelling of a handful of words in “Freedom” — changing “rain on this bitter love” to “reign on” and “tell the sweet I’m new” to “tell the suite” — and beyond the first verse and chorus, I didn’t even need to do that. The voice closely mimics the original recording, summoning slightly off-brand renditions of Ozzy or Beyoncé.
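That fragility is what you would expect from a filter leaning on exact or near-exact text matching. A toy example (purely hypothetical; Suno has not disclosed how its lyric filter works) shows how the one-word respelling above defeats a verbatim check even though the strings remain almost identical:

```python
import difflib

PROTECTED_LINE = "rain on this bitter love"  # the lyric fragment quoted above

def exact_match_filter(text):
    """Toy filter that only blocks verbatim lyric matches; Suno's real
    system is undisclosed, so this is purely an illustration."""
    return PROTECTED_LINE in text.lower()

tweaked = "reign on this bitter love"  # the one-word respelling described above
slips_through = not exact_match_filter(tweaked)  # True: the tweak is not caught
similarity = difflib.SequenceMatcher(None, PROTECTED_LINE, tweaked).ratio()
# similarity stays above 0.9 even though the exact match fails
```

Catching near-duplicates like this would require fuzzy matching or phonetic comparison, which a simple substring check never attempts.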

Indie artists might not even be afforded that level of protection. One of my own songs cleared the copyright filter while I was testing v5 of the company’s model. I was also able to get tracks by singer-songwriter Matt Wilson, Charles Bissell’s “Car Colors,” and music by experimental artist Claire Rousay past Suno’s copyright detection system without any changes at all. Artists on smaller labels or self-distributing through Bandcamp or services like DistroKid are most likely to slip through the cracks; DistroKid and CD Baby declined to comment.

The results of these AI covers fall firmly in the uncanny valley. The songs they’re covering are unmistakable: the riff from “Paranoid” remains identifiable and “Freedom” is obviously “Freedom” from the moment the marching snare hits kick in. But there is a lifelessness to them. Even if AI Ozzy is alarmingly accurate-sounding, it lacks nuance and dynamics, leading it to feel like an imitation of a human, rather than the real thing.

The instrumentals similarly discard any interesting artistic choices the originals make, or clone them in flat imitations. A non-jig “California Über Alles” cover has most of its rough edges sanded down so it sounds like a wedding band version of the original. Pink Floyd’s “Another Brick in the Wall” goes from an experiment in doom disco to just vacuous dancefloor filler. And, while it kind of nails David Gilmour’s guitar tone, it does away with any sense of phrasing or progression, turning the solo into just a mindless stream of notes.

Creating unauthorized covers violates both Suno’s stated purpose and its terms of service. Moreover, Suno only appears to scan tracks on upload; it doesn’t seem to recheck outputs for potential infringement or rescan tracks before exporting them. The path to monetizing Suno-created covers is simple from there. AI slopmongers could upload them through a distribution service like DistroKid and profit from other people’s songs without paying the royalties a cover would normally owe the original composer. And independent artists seem to be the most vulnerable.

Folk artist Murphy Campbell discovered this recently when someone uploaded what seem to be AI covers of songs she posted on YouTube to her Spotify profile. (It’s not clear what system they were generated through.) Shortly afterwards, distributor Vydia filed copyright claims against her YouTube videos and began collecting royalties on them. And to highlight just how broken the whole system is, the songs which Vydia successfully filed copyright claims for are all in the public domain. Spotify eventually removed the AI covers, and Vydia has rescinded its copyright claims, but that only happened following a social media campaign by Campbell. Vydia says the two incidents are separate and it is not associated with the AI covers of Campbell’s work.

AI fakes are a problem for other artists too. Experimental composer William Basinski and indie rock group King Gizzard and The Lizard Wizard have had imitations slip through multiple filters and reach streaming platforms like Spotify. Sometimes, these fake songs can siphon up views straight from the artist’s own page. In a system where payouts can already be brutally low — Spotify requires a minimum of 1,000 streams to get paid — less famous musicians are hit hardest.

Services like Deezer, Qobuz, and Spotify have taken measures to combat spammy AI and impersonators. Spotify spokesperson Chris Macowski told The Verge that the company “takes protecting artists’ rights seriously, and approaches it from multiple angles. That includes safeguards to help prevent unauthorized content from being uploaded in the first place, along with systems that can identify duplicate or highly similar tracks. Those systems are backed by human review to make sure we’re getting it right.” But no system is perfect, and keeping up with a flood of AI slop enabled by platforms like Suno poses a challenge.

Macowski acknowledged the technical difficulties involved, saying, “It’s an area we’re continuing to invest in and evolve, especially as new technologies emerge.”

Suno is only one cog in a clearly broken system. But it’s one artists have particularly little recourse to fight. Bands can contact Spotify and have AI fakes removed from their profile. It’s harder to tell how those fakes are generated, and if they’re the result of Suno’s filters failing. And so far, Suno’s response is silence.

Nothing’s noise-canceling CMF Buds 2A are down to just $19.99 just for today

It’s not every day you find a decent pair of wireless earbuds with active noise cancellation, a transparency mode, and app support for less than $20, which is why the current lightning deal on the CMF Buds 2A stands out. Now through 11:15PM ET today, April 7th, Nothing’s budget earbuds are available on Amazon in all three colors for just $19.99 ($29 off), which matches their lowest price to date.

For the price, the Buds 2A cover the basics and then some. They deliver decent (albeit a little tinny) sound and 42 decibels of noise cancellation, along with an IP54 rating and a useful transparency mode for staying aware of your surroundings. They also provide a commendable eight hours of battery life per charge with ANC disabled — or up to 35.5 hours with the included charging case — and feature four onboard mics that leverage Nothing’s noise reduction tech, which helps boost voice call quality. I wouldn’t say voice clarity is their strong suit, though, again, they’re a $20 pair of earbuds.

Like the rest of Nothing’s entry-level earbuds, the 2A also work with the Nothing X app, adding a level of flexibility that’s hard to find at this price. With the app, you can tweak EQ settings, adjust the bass response, switch between ANC modes, or quickly enable multi-device pairing. There’s even a “find my earbuds” feature if you lose them, and you can assign a gesture to trigger your phone’s virtual assistant on the fly, whether that’s Siri or Google Assistant. On top of that, if you’re using a Nothing or CMF phone, you can use your voice to access ChatGPT directly through the earbuds.

Healthcare data breach hits system storing patient records

Healthcare data breaches keep coming. Now, CareCloud is the latest to confirm a serious security incident.

The company says hackers accessed one of its systems that stores electronic health records, though it has not confirmed that patient records themselves were exposed. The intrusion lasted more than eight hours on March 16. That window matters because even a short breach can expose sensitive data at scale.

At this point, there is still uncertainty. CareCloud has not confirmed whether any data was taken or what specific information may be involved. However, the investigation is ongoing, and the company has brought in outside cybersecurity experts.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. For simple, real-world ways to spot scams early and stay protected, visit CyberGuy.com –  trusted by millions who watch CyberGuy on TV daily. Plus, you’ll get instant access to my Ultimate Scam Survival Guide free when you join.

A CareCloud security breach exposed a key healthcare system used by providers nationwide, raising new concerns about whether patient data may have been taken. (Nansan Houn/Getty Images)

What exactly happened inside CareCloud’s systems

CareCloud operates multiple environments where patient records are stored. According to its filing with the U.S. Securities and Exchange Commission, attackers gained access to one of those environments.

Here is what we know so far:

  • Unauthorized access began on March 16
  • Hackers stayed inside for more than eight hours
  • The company restored full system functionality and data access the same day
  • The company believes the attackers are no longer inside

CareCloud also says the incident was contained to that single environment and did not impact its other systems or platforms. Even so, the biggest unanswered question remains whether any data left the system. That detail matters because stolen health data often fuels identity theft, insurance fraud and targeted scams. 

Why healthcare data is such a valuable target

Healthcare companies sit on a goldmine of personal information. That includes names, Social Security numbers and medical histories. Unlike a credit card, you cannot simply cancel your medical history. We saw the scale of this risk during the Change Healthcare ransomware attack. That breach disrupted systems across the U.S. and delayed care for weeks. It also exposed just how interconnected the healthcare infrastructure has become. CareCloud serves more than 45,000 providers and supports millions of patients. That kind of reach makes any incident more serious. 

Where patient data may be stored

CareCloud has not shared full technical details yet. Public records suggest much of its infrastructure relies on Amazon Web Services. Cloud platforms are widely used across healthcare. They offer scale and flexibility. At the same time, they require strict security controls to prevent unauthorized access. It is still unclear how CareCloud separates or backs up data across its systems. That detail could affect how far attackers were able to move once inside. We reached out to CareCloud for a comment, but did not hear back before our deadline.

The latest healthcare cyber incident puts CareCloud in the spotlight as investigators work to determine whether sensitive patient information left the system. (shapecharge/Getty Images)

What this means to you

Even if you have never heard of CareCloud, your doctor might use it. That is how these breaches work. A behind-the-scenes company gets compromised, and patients feel the impact later. Right now, there is no confirmation that patient data was stolen. Still, this is the moment to stay alert. If your information was involved, notifications could come weeks or even months later.

Ways to stay safe from healthcare data breaches

Healthcare breaches can feel out of your control. Still, a few simple habits can make a real difference.

1) Watch your medical statements closely

Check every explanation of benefits and billing statement you receive. Look for charges, prescriptions or visits you do not recognize. Even a small, unfamiliar charge can signal fraud. If something looks off, contact your insurer or provider right away.

2) Set up identity theft monitoring

Health data can be used to open accounts, file fake claims or commit identity theft. Identity theft protection services can monitor personal information like your Social Security number (SSN), phone number and email address, and alert you if it is being sold on the dark web or being used to open an account. They can also assist you in freezing your bank and credit card accounts to prevent further unauthorized use by criminals. The faster you catch it, the easier it is to limit the damage. See my tips and best picks on Best Identity Theft Protection at Cyberguy.com

3) Consider data removal services

Your personal details often end up on data broker sites without your knowledge. That information can be used to target you after a breach. Removing your data from these sites with a data removal service reduces how much scammers can find and use against you. Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com

4) Use strong antivirus protection

If you receive emails about medical updates or billing issues, be extra careful. Malicious links and attachments are common after breaches. Strong antivirus software can help detect threats before you click and stop harmful downloads in real time. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com

CareCloud says hackers accessed one of its electronic health record environments for more than eight hours during a March 16 cyber incident now under investigation. (AndreyPopov/Getty Images)

5) Use strong, unique passwords

Secure your patient portals with a password you do not use anywhere else. Reusing passwords makes it easier for attackers to access multiple accounts. A password manager can generate and store strong passwords for you so you do not have to remember them. Check out the best expert-reviewed password managers of 2026 at Cyberguy.com
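Under the hood, a password manager's generator is a few lines of code drawing from a cryptographically secure randomness source. A minimal sketch using Python's standard `secrets` module (illustrative only; real managers add features like per-site rules):

```python
import secrets
import string

def generate_password(length=20):
    """Build a random password from letters, digits, and symbols using a
    cryptographically secure random source, as a password manager would."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Note the use of `secrets` rather than `random`: the latter is predictable and unsuitable for anything security-related.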

6) Enable two-factor authentication

Turn on two-factor authentication (2FA) if your provider offers it. This adds a second step, such as a code sent to your phone. Even if someone gets your password, this extra layer can stop them from getting into your account.
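Those codes typically come from the TOTP scheme standardized in RFC 6238, which derives a short number from a shared secret and the current time window. A minimal sketch using only Python's standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, now=None, interval=30, digits=6):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1, 30-second steps),
    the scheme behind most authenticator-app 2FA codes."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Count how many 30-second intervals have elapsed since the Unix epoch
    counter = int(time.time() if now is None else now) // interval
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset given by the last nibble
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because the code changes every 30 seconds and depends on a secret stored on your phone, a stolen password alone is not enough to log in.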

7) Be cautious with follow-up scams

After a breach, scammers often pose as healthcare providers or support teams. They may send emails, texts or even call you. Do not click links or share personal details unless you verify the source. When in doubt, go directly to your provider’s official website or call their listed number.

Kurt’s key takeaways

The CareCloud data breach is still unfolding. That uncertainty is part of the problem. Healthcare systems are complex. They rely on multiple vendors, cloud services and interconnected tools. That creates more entry points for attackers. Even when companies respond quickly, the ripple effects can last much longer.

If your most sensitive health data can pass through multiple companies you have never heard of, who should be responsible for keeping it safe? Let us know by writing to us at Cyberguy.com

Copyright 2026 CyberGuy.com.  All rights reserved.

Gemini is making it faster for distressed users to reach mental health resources 

Google says it has updated Gemini to better direct users to get mental health resources during moments of crisis. The change comes as the tech giant faces a wrongful death lawsuit alleging its chatbot “coached” a man to die by suicide, the latest in a string of lawsuits alleging tangible harm from AI products.

When a conversation indicates a user is in a potential crisis related to suicide or self-harm, Gemini already launches a “Help is available” module that directs users to mental health crisis resources, like a suicide hotline or crisis text line. Google says the update — really more of a redesign — will streamline this into a “one-touch” interface that will make it easier for users to get help quickly.

The help module also contains more empathetic responses designed “to encourage people to seek help,” Google says. Once activated, “the option to reach out for professional help will remain clearly available” for the remainder of the conversation.

Google says it engaged with clinical experts for the redesign and is committed to supporting users in crisis. It also announced $30 million in funding globally over the next three years “to help global hotlines.”

Like other leading chatbot providers, Google stressed that Gemini “is not a substitute for professional clinical care, therapy, or crisis support,” but acknowledged many people are using it for health information, including during moments of crisis.

The update comes amid broader scrutiny over how adequate the industry’s safeguards actually are. Reports and investigations, including our probe into the provision of crisis resources, frequently flag cases where chatbots fail vulnerable users, by helping them hide eating disorders or plan shootings. Google often fares better than many rivals in these tests, but is not perfect. Other AI companies, including OpenAI and Anthropic, have also taken steps to improve their detection and support of vulnerable users.
