Even Elden Ring’s game director knows Erdtree is too hard

As I work my way through Shadow of the Erdtree, the new DLC for Elden Ring, I can’t help but think that game director Hidetaka Miyazaki is trolling the shit outta me.

Speaking with him at Summer Game Fest two weeks ago, Miyazaki said that Erdtree is “by far the biggest in scale [and] volume” of any FromSoftware DLC before it. But he also said Erdtree is “about the same volume as the Limgrave part of the base game,” the starting area of Elden Ring, containing “slightly more content.”

I don’t buy it. While it’s impossible to compare in-game scale with any accuracy, I’ve played for 30 hours and have already found the DLC’s new areas to be more expansive than Limgrave. Not only is the map itself large, it’s layered, with huge areas propped up on plateaus above, sunk in deep valleys below, and set on islands that take some creative platforming to reach.

I’d say it’s far, far larger than Limgrave, with plenty more to do as well. Shadow of the Erdtree honestly feels like it’s big enough to be its own game with its own story — one that was originally intended for Elden Ring but wound up being cut for time before being added back as DLC content.

Erdtree follows the story arc of Miquella, brother of uber boss Malenia and one of the demigods important to Elden Ring lore. To focus on Miquella was, Miyazaki told me, born of the desire to honor George R. R. Martin’s contributions to the game. “[He] gave us all this great mythology to work with,” Miyazaki said. Packaging Miquella’s story as a standalone DLC was essentially “closing the loop” on Martin’s involvement in the game. “It’s really about completing Elden Ring’s circle,” he said.

But understating the size of Shadow of the Erdtree is just one of the ways it feels like Miyazaki is misleading me. I know he’s trolling me when it comes to difficulty.

I cannot beat Rellana, Twin Moon Knight, the boss ensconced in Castle Ensis that players can face five or 50 hours in, depending on their exploration choices. (I met her after about 15.) None of my strategies nor any of the game’s built-in assistance features — using Mimic Tear Ashes, summoning help from an NPC, changing my weapons or spells, inflicting damaging status effects — seem to work. My best attempt got her down to half health, and I can’t seem to progress any further. And she’s only major boss number two.

According to Miyazaki, this is by design. He said that Erdtree contains “10 plus boss encounters” — honestly, another hilariously absurd understatement; I’ve seen estimates ranging from 55 bosses up to 80. Thankfully, only a small handful of those bosses are necessary to progress the story, while the rest are optional.

“And the ones that [are optional] are especially difficult,” Miyazaki said.

Whenever a new FromSoftware game releases, there’s nearly always a discussion of difficulty. With the DLC, other reviewers have suggested that its difficulty is too extreme. “It’s true that this distinct type of FromSoft-engineered frustration is an indispensable part of the Souls experience,” wrote Alexis Ong in Eurogamer. “This, however, feels like difficulty for difficulty’s sake, turned up to eleven.”

I agree. But while I think Shadow of the Erdtree could better straddle the line between pleasantly challenging and frustratingly impossible, the game was tuned to Miyazaki’s intentions, reflecting the lessons the development team learned from feedback on the original game.

“Traditionally we’ve always liked the higher difficulty curve type of games and experiences, but I think that nature in and of itself alienates a good portion of the game-playing audience,” he said.

That’s a contradictory thing to say considering his comments in a recent interview with The Guardian: “If we really wanted the whole world to play the game, we could just crank the difficulty down more and more, but that wasn’t the right approach. Turning down difficulty would strip the game of that joy, which, in my eyes, would break the game itself.”

He’s not wrong. Elden Ring ceases to be the game of the year it was if it lacks the kind of difficulty FromSoftware is known for. So Erdtree must be hard, but not so hard that it’ll turn players off. But it also can’t be too easy because that will break the game. What to do? The answer, according to Miyazaki, is freedom.

“The amount of freedom that we give players helps offset that difficulty curve and makes the game more accessible and engaging,” he told me.

I think that worked for Elden Ring, less so for this DLC. In the base game, difficulty could be circumvented by leveling up — the player freedom, as it were. But with the addition of new DLC-exclusive consumables that increase your attack and defense, becoming more powerful now depends on your ability to find those scarce items. As a result, I’ve often found myself fearful of even the simplest enemies, as encountering more than one at a time will kill me outright.

In addition to ensuring that players die a lot, Miyazaki said that how players die is just as important.

“I try to imagine different ways I would want to die as a player or be killed,” he said, explaining that those thoughts manifested in Elden Ring and in other FromSoftware games as his signature poison swamps. But for Erdtree, he confessed to cutting back on that indulgence — “In the original Elden Ring, I went a little too far.”

There are still poison swamps in Erdtree, “but in other parts of gameplay, there are still many ways to die.”

One of Hidetaka Miyazaki’s signature poison swamps in Shadow of the Erdtree.
Image: FromSoftware / Ash Parrish

Too many it seems. I’ve been bludgeoned, exsanguinated, frostbitten, and burned. I’ve fallen off cliffs, had cliffs fall on me — beware the fiery rocks the Furnace Golems spew — and I’ve even accidentally killed myself eating an item that refilled my HP while also inflicting poison. 

Despite my tribulations, Miyazaki, like a benevolent god, has faith in me and his players, only giving us trials he believes we can bear.

“We’ve really pushed the envelope in terms of what we think can be withstood by the player,” Miyazaki said. 

He clarified that one of the biggest lessons brought forward from Elden Ring into Erdtree was weighing what the audience found fun against what they found stressful. “We tried to make that the foundation of the boss encounters of the DLC, so hopefully players will find it much more engaging and fun,” he said.

“But if that is not the case,” he added, “then I’m sorry.”

You need to listen to Billy Woods’ horrorcore masterpiece for the A24 crowd

Billy Woods has one of the highest batting averages in the game. Between solo records like Hiding Places and Maps and his collaborative albums with Elucid as Armand Hammer, the man has multiple stone-cold classics under his belt. And while no one would ever claim that Woods’ albums are light-hearted fare (these are not party records), Golliwog is his darkest to date.

This is not your typical horrorcore record. Other acts in the genre, like Geto Boys, Gravediggaz, and Insane Clown Posse, reach for slasher aesthetics and shock tactics. But what Billy Woods has crafted is more A24 than Blumhouse.

Sure, the first track is called “Jumpscare,” and it opens with the sound of a film reel spinning up, followed by a creepy music box and the line: “Ragdoll playing dead. Rabid dog in the yard, car won’t start, it’s bees in your head.” It’s setting you up for the typical horror flick gimmickry. But by the end, it’s psychological torture. A cacophony of voices forms a bed for unidentifiable screeching noises, and Woods drops what feels like a mission statement:

“The English language is violence, I hotwired it. I got a hold of the master’s tools and got dialed in.”

Throughout the record, Woods turns to his producers to craft not cheap scares but tension, to make the listener feel uneasy. “Waterproof Mascara” turns a woman’s sobs into a rhythmic motif. On “Pitchforks & Halos,” Kenny Segal conjures the aural equivalent of a serial killer’s POV shot. And “All These Worlds are Yours,” produced by DJ Haram, has more in common with the early industrial of Throbbing Gristle than with some of the other tracks on the record, like “Golgotha,” which pairs boom-bap drums with New Orleans funeral horns.

That dense, at times scattered production is paired with lines that juxtapose the real-world horrors of oppression and colonialism with scenes that feel taken straight from Bring Her Back: “Trapped a housefly in an upside-down pint glass and waited for it to die.” And later, on “Corinthians,” Woods seamlessly transitions from boasting to warning people about turning their backs on the genocide in Gaza:

If you never came back from the dead you can’t tell me shit
Twelve billion USD hovering over the Gaza Strip
You don’t wanna know what it cost to live
What it cost to hide behind eyelids
When your back turnt, secret cannibals lick they lips

The record features some of Woods’ deftest lyricism, balancing confrontation with philosophy, horror with emotion. Billy Woods’ Golliwog is available on Bandcamp and on most major streaming services, including Apple Music, Qobuz, Deezer, YouTube Music, and Spotify.

Grok AI scandal sparks global alarm over child safety

Grok, the built-in chatbot on X, is facing intense scrutiny after acknowledging it generated and shared an AI image depicting two young girls in sexualized attire.

In a public post on X, Grok admitted the content “violated ethical standards” and “potentially U.S. laws on child sexual abuse material (CSAM).” The chatbot added, “It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues.”

That admission alone is alarming. What followed revealed a far broader pattern.

The fallout from this incident has triggered global scrutiny, with governments and safety groups questioning whether AI platforms are doing enough to protect children. (Silas Stein/picture alliance via Getty Images)

Grok quietly restricts image tools to paying users after backlash

As criticism mounted, Grok confirmed it has begun limiting image generation and editing features to paying subscribers only. In a late-night reply on X, the chatbot stated that image tools are now locked behind a premium subscription, directing users to sign up to regain access.

The apology that raised more questions

Grok’s apology appeared only after a user prompted the chatbot to write a heartfelt explanation for people lacking context. In other words, the system did not proactively address the issue. It responded because someone asked it to.

Around the same time, researchers and journalists uncovered widespread misuse of Grok’s image tools. According to monitoring firm Copyleaks, users were generating nonconsensual, sexually manipulated images of real women, including minors and well-known figures.

After reviewing Grok’s publicly accessible photo feed, Copyleaks identified a conservative rate of roughly one nonconsensual sexualized image per minute, based on images involving real people with no clear indication of consent. The firm says the misuse escalated quickly, shifting from consensual self-promotion to large-scale harassment enabled by AI.

Copyleaks CEO and co-founder Alon Yamin said, “When AI systems allow the manipulation of real people’s images without clear consent, the impact can be immediate and deeply personal.”

Grok admitted it generated and shared an AI image that violated ethical standards and may have broken U.S. child protection laws. (Kurt “CyberGuy” Knutsson)

Sexualized images of minors are illegal

This is not a gray area. Generating or distributing sexualized images of minors is a serious criminal offense in the United States and many other countries. Under U.S. federal law, such content is classified as child sexual abuse material. Penalties can include five to 20 years in prison, fines up to $250,000 and mandatory sex offender registration. Similar laws apply in the U.K. and France.

In 2024, a Pennsylvania man received nearly eight years in prison for creating and possessing deepfake CSAM involving child celebrities. That case set a clear precedent. Grok itself acknowledged this legal reality in its post, stating that AI images depicting minors in sexualized contexts are illegal.

The scale of the problem is growing fast

A July report from the Internet Watch Foundation, a nonprofit that tracks and removes child sexual abuse material online, shows how quickly this threat is accelerating. Reports of AI-generated child sexual abuse imagery jumped by 400% in the first half of 2025 alone. Experts warn that AI tools lower the barrier to potential abuse. What once required technical skill or access to hidden forums can now happen through a simple prompt on a mainstream platform.

Real people are being targeted

The harm is not abstract. Reuters documented cases where users asked Grok to digitally undress real women whose photos were posted on X, and in multiple instances, Grok fully complied. Even more disturbing, users targeted images of 14-year-old actress Nell Fisher from the Netflix series “Stranger Things.” Grok later admitted there were isolated cases in which users received images depicting minors in minimal clothing. In another Reuters investigation, a Brazilian musician described watching AI-generated bikini images of herself spread across X after users prompted Grok to alter a harmless photo. Her experience mirrors what many women and girls are now facing.

Governments respond worldwide

The backlash has gone global. In France, multiple ministers referred X to an investigative agency over possible violations of the EU’s Digital Services Act, which requires platforms to prevent and mitigate the spread of illegal content. Violations can trigger heavy fines. In India, the country’s IT ministry gave xAI 72 hours to submit a report detailing how it plans to stop the spread of obscene and sexually explicit material generated by Grok. Grok has also warned publicly that xAI could face potential probes from the Department of Justice or lawsuits tied to these failures.

Researchers later found Grok was widely used to create nonconsensual, sexually altered images of real women, including minors. (Nikolas Kokovlis/NurPhoto via Getty Images)

Concerns grow over Grok’s safety and government use

The incident raises serious concerns about online privacy, platform security and the safeguards designed to protect minors.

Elon Musk, the owner of X and founder of xAI, had not offered a public response at the time of publication. That silence comes at a sensitive time. Grok has been authorized for official government use under an 18-month federal contract. This approval was granted despite objections from more than 30 consumer advocacy groups that warned the system lacked proper safety testing.

Over the past year, critics have accused Grok of spreading misinformation about major news events, promoting antisemitic rhetoric and sharing misleading health information. It has also competed directly with tools like ChatGPT and Gemini while operating with fewer visible safety restrictions. Each controversy raises the same question: can a powerful AI tool be deployed responsibly without strong oversight and enforcement?

What parents and users should know

If you encounter sexualized images of minors or other abusive material online, report it immediately. In the United States, you can contact the FBI tip line or seek help from the National Center for Missing & Exploited Children.

Do not download, share, screenshot or interact with the content in any way. Even viewing or forwarding illegal material can expose you to serious legal risk.

Parents should also talk with children and teens about AI image tools and social media prompts. Many of these images are created through casual requests that do not feel dangerous at first. Teaching kids to report content, close the app and tell a trusted adult can stop harm from spreading further.

Platforms may fail. Safeguards may lag. But early reporting and clear conversations at home remain among the most effective ways to protect children online.

Kurt’s key takeaways

The Grok scandal highlights a dangerous reality. As AI spreads faster, these systems amplify harm at an unprecedented scale. When safeguards fail, real people suffer, and children face serious risk. At the same time, trust cannot depend on apologies issued after harm occurs. Instead, companies must earn trust through strong safety design, constant monitoring and real accountability when problems emerge.

Should any AI system be approved for government or mass public use before it proves it can reliably protect children and prevent abuse? Let us know by writing to us at Cyberguy.com.

Google pulls AI overviews for some medical searches

In one case that experts described as “really dangerous,” Google wrongly advised people with pancreatic cancer to avoid high-fat foods. Experts said this was the exact opposite of what should be recommended and may increase the risk of patients dying from the disease.

In another “alarming” example, the company provided bogus information about crucial liver function tests, which could leave people with serious liver disease wrongly thinking they are healthy.
