Technology
Inside Netflix’s bet on advanced video encoding
Anne Aaron just can’t help herself.
Aaron, Netflix’s senior encoding technology director, was watching the company’s livestream of the Screen Actors Guild Awards earlier this year. And while the rest of the world marveled at all those celebrities and their glitzy outfits sparkling in a sea of flashing cameras, Aaron’s mind immediately started to analyze all the associated visual challenges Netflix’s encoding tech would have to tackle. “Oh my gosh, this content is going to be so hard to encode,” she recalled thinking when I recently interviewed her in Netflix’s office in Los Gatos, California.
Aaron has spent the past 13 years optimizing the way Netflix encodes its movies and TV shows. The work she and her team have done allows the company to deliver better-looking streams over slower connections and has resulted in 50 percent bandwidth savings for 4K streams alone, according to Aaron. Netflix’s encoding team has also contributed to industrywide efforts to improve streaming, including the development of the AV1 video codec and its eventual successor.
Now, Aaron is getting ready to tackle what’s next for Netflix: not content with just being a service for binge-watching, the company ventured into cloud gaming and livestreaming last year. So far, Netflix has primarily dabbled in one-off live events like the SAG Awards. But starting next year, the company will stream WWE RAW live every Monday. The streamer nabbed the wrestling franchise from Comcast’s USA Network, where it has long been the No. 1 rated show, regularly drawing audiences of around 1.7 million viewers. Satisfying that audience week after week poses some novel challenges.
“It’s a completely different encoding pipeline than what we’ve had for VOD,” Aaron said, using industry shorthand for on-demand video streaming. “My challenge to [my] team is to get to the same bandwidth requirements as VOD but do it in a faster, real-time way.”
To achieve that, Aaron and her team have to basically start all over and disregard almost everything they’ve learned during more than a decade of optimizing Netflix’s streams — a decade during which Netflix’s video engineers re-encoded the company’s entire catalog multiple times, began using machine learning to make sure Netflix’s streams look good, and were forced to tweak their approach when a show like Barbie Dreamhouse Adventures tripped up the company’s encoders.
When Aaron joined Netflix in 2011, the company was approaching streaming much like everyone else in the online video industry. “We have to support a huge variety of devices,” said Aaron. “Really old TVs, new TVs, mobile devices, set-top boxes: each of those devices can have different bandwidth requirements.”
To address those needs, Netflix encoded each video with a bunch of different bitrates and resolutions according to a predefined list of encoding parameters, or recipes, as Aaron and her colleagues like to call them. Back in those days, a viewer on a very slow connection would automatically get a 240p stream with a bitrate of 235 kbps. Faster connections would receive a 1750 kbps 720p video; Netflix’s streaming quality topped out at 1080p with a 5800 kbps bitrate.
The company’s content delivery servers would automatically choose the best version for each viewer based on their device and broadband speeds and adjust the streaming quality on the fly to account for network slowdowns.
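To make the mechanics concrete, here is a minimal sketch of fixed-ladder stream selection in Python. The rungs match the figures above, but the data structure and threshold logic are illustrative assumptions, not Netflix’s actual server code.

    # Fixed encoding ladder, using the rungs cited above. Illustrative only.
    LADDER = [  # (bitrate in kbps, resolution)
        (235, "240p"),
        (1750, "720p"),
        (5800, "1080p"),
    ]

    def pick_rung(measured_kbps: float) -> tuple:
        """Return the highest rung whose bitrate fits the measured bandwidth."""
        best = LADDER[0]  # fall back to the lowest rung on very slow links
        for bitrate, resolution in LADDER:
            if bitrate <= measured_kbps:
                best = (bitrate, resolution)
        return best

    print(pick_rung(3000))  # -> (1750, '720p')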
To Aaron, with her eagle-eyed awareness of encoding challenges, that approach seemed inadequate. Why spend the same bandwidth on much simpler visual fare as on something as visually complex as an action movie with car chases (lots of motion) and explosions (flashing lights and all that noisy smoke)? “You need less bits for animation,” explained Aaron.
My Little Pony, which was a hit on the service at the time, simply didn’t have the same visual complexity as live-action titles. It didn’t make sense to use the same encoding recipes for both. That’s why, in 2015, Netflix began re-encoding its entire catalog with settings fine-tuned per title. With this new, title-specific approach, animated fare could be streamed in 1080p with as little as 1.5 Mbps.
Switching to per-title encoding resulted in bandwidth savings of around 20 percent on average — enough to make a notable difference for consumers in North America and Europe, but even more important as Netflix was eyeing its next chapter: in January of 2016, then-CEO Reed Hastings announced that the company was expanding into almost every country around the world — including markets with subpar broadband infrastructure and consumers who primarily accessed the internet from their mobile phones.
Per-title encoding has since been adopted by most commercial video technology vendors, including Amazon’s AWS, which used the approach to optimize PBS’s video library last year. But while the company’s encoding strategy has been wholeheartedly endorsed by streaming tech experts, it has been largely met with silence by Hollywood’s creative class.
Directors and actors like Judd Apatow and Aaron Paul were up in arms when Netflix began to let people change the playback speed of its videos in 2019. Changes to the way it encodes videos, on the other hand, never made the same kinds of headlines. That may be because encoding algorithms are a bit too geeky for that crowd, but there’s also a simpler explanation: the new encoding scheme was so successful at saving bandwidth without compromising on visual fidelity that no one noticed the difference.
Make that almost no one: Aaron quickly realized that the company’s per-title-based encoding approach wasn’t without faults. One problem became apparent to her while watching Barbie Dreamhouse Adventures, exactly the kind of animated Netflix show that was supposed to benefit the most from a per-title approach.
However, Netflix’s new encoding struggled with one particular scene. “There’s this guy with a very sparkly suit and a sparkly water fountain behind him,” said Aaron. The scene looked pretty terrible with the new encoding rules, which made her realize that they needed to be more flexible. “At [other] parts of the title, you need less bits,” Aaron said. “But for this, you need to increase it.”
The solution to this problem was to get a lot more granular during the encoding process. Netflix began to break down videos by shots and apply different encoding settings to each individual segment in 2018. Two people talking in front of a plain white wall were encoded with lower bitrates than the same two people taking part in a car chase; Barbie hanging out with her friends at home required less data than the scene in which Mr. Sparklesuit shows up.
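As a toy illustration of the per-shot idea, here is a sketch in which the shot boundaries, the complexity score, and the bitrate mapping are all hypothetical stand-ins for Netflix’s internal pipeline.

    # Toy per-shot encoding: give visually complex shots more bits.
    from dataclasses import dataclass

    @dataclass
    class Shot:
        start_s: float
        end_s: float
        complexity: float  # 0.0 (plain white wall) to 1.0 (sparkly suits)

    def bitrate_for(shot, floor_kbps=1500, ceiling_kbps=5800):
        """Interpolate a target bitrate between a quality floor and ceiling."""
        return round(floor_kbps + shot.complexity * (ceiling_kbps - floor_kbps))

    shots = [Shot(0.0, 12.5, 0.1), Shot(12.5, 15.0, 0.9)]  # chat, then fountain
    for shot in shots:
        print(f"{shot.start_s}s to {shot.end_s}s: {bitrate_for(shot)} kbps")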
As Netflix adopted 4K and HDR, those differences became even more stark. “[In] The Crown, there’s an episode where it’s very smoky,” said Aaron. “There’s a lot of pollution. Those scenes are really hard to encode.” In other words: they require more data to look good, especially when shown on a big 4K TV in HDR, than less visually complex fare.
Aaron’s mind never stops looking for those kinds of visual challenges, no matter whether she watches Netflix after work or goes outside to take a walk. This has even caught on with her kids, with Aaron telling me that they occasionally point at things in the real world and shout: “Look, it’s a blur!”
It’s a habit that comes with the job and a bit of a curse, too — one of those things you just can’t turn off. During our conversation, she picked up her phone, only to pause and point at the rhinestone-bedazzled phone case. It reminded her of that hard-to-encode scene from Barbie Dreamhouse Adventures. Another visual challenge!
Still, even an obsessive mind can only get you so far. For one thing, Aaron can’t possibly watch thousands of Netflix videos and decide which encoding settings to apply to every single shot. Instead, her team compiled a few dozen short clips sourced from a variety of shows and movies on Netflix and encoded each clip with a range of different settings. They then let test subjects watch those clips and grade the visual imperfections from not noticeable to very annoying. “You have to do subjective testing,” Aaron said. “It’s all based on ground truth, subjective testing.”
The insights gained this way have been used by Netflix to train a machine learning model that can analyze the video quality of different encoding settings across the company’s entire catalog, which helps to figure out the optimal settings for each and every little slice of a show or movie. The company collaborated with the University of Southern California on developing these video quality assessment algorithms and open-sourced them in 2016. Since then, the metric has been adopted by much of the industry as a way to analyze streaming video quality and even earned Netflix an Emmy Award. All the while, Aaron and her team have worked to keep pace with Netflix’s evolving needs — like HDR.
“We had to develop yet another metric to measure the video quality for HDR,” Aaron said. “We had to run subjective tests and redo that work specifically for HDR.” This eventually allowed Netflix to encode HDR titles with per-shot-specific settings as well, which the company finally did last year. Now, her team is working on open-sourcing HDR-based video quality assessment.
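The metric Netflix open-sourced in 2016 is known as VMAF, and it now ships with FFmpeg as the libvmaf filter, so anyone can score an encode against its source. A minimal sketch, assuming an FFmpeg build compiled with libvmaf and hypothetical file names:

    # Score an encode against its pristine source with FFmpeg's libvmaf
    # filter. Assumes FFmpeg was built with libvmaf; file names are
    # hypothetical.
    import subprocess

    subprocess.run(
        ["ffmpeg",
         "-i", "distorted.mp4",   # the encode under test
         "-i", "reference.mp4",   # the untouched source
         "-lavfi", "libvmaf",     # compute VMAF between the two inputs
         "-f", "null", "-"],      # decode and score, write no output file
        check=True,
    )
    # FFmpeg logs an aggregate VMAF score (0 to 100) at the end of the run.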
Slicing up a movie by shot and then encoding every slice individually to make sure it looks great while also saving as much bandwidth as possible: all of this work happens independently of the video codecs Netflix uses to encode and compress these files. It’s kind of like how you might change the resolution or colors of a picture in Photoshop before deciding whether to save it as a JPEG or a PNG. However, Netflix’s video engineers have also actively been working on advancing video codecs to further optimize the company’s streams.
Netflix is a founding member of the Alliance for Open Media, whose other members include companies like Google, Intel, and Microsoft. Aaron sits on the board of the nonprofit, which has spearheaded the development of the open, royalty-free AV1 video codec. Netflix began streaming some videos in AV1 to Android phones in early 2020 and has since expanded to select smart TVs and streaming devices as well as iPhones. “We’ve encoded about two-thirds of our catalog in AV1,” Aaron said. The percentage of streaming hours transmitted in AV1 is “in the double digits,” she added.
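Netflix encodes with its own pipeline, but the codec itself is open. As a hedged illustration, here is how anyone can produce an AV1 stream with FFmpeg and the libaom reference encoder; the file names and quality setting are arbitrary assumptions.

    # Illustrative AV1 encode with FFmpeg's libaom-av1 encoder. This is
    # the open reference encoder, not Netflix's pipeline.
    import subprocess

    subprocess.run(
        ["ffmpeg", "-i", "input.mp4",
         "-c:v", "libaom-av1",   # AV1 via the libaom reference encoder
         "-crf", "30",           # constant-quality mode (lower looks better)
         "-b:v", "0",            # lets CRF alone govern quality
         "output.mkv"],
        check=True,
    )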
And while the roll-out of AV1 continues, work is already underway on its successor. It might take a few more years before devices actually support that next-gen codec, but early results suggest that it will make a difference. “At this point, we see close to 30 percent bit rate reduction with the same quality compared to AV1,” Aaron explained. “I think that’s very, very promising.”
While contributing to the development of new video codecs, Aaron and her team stumbled across another pitfall: video engineers across the industry have been relying on a relatively small corpus of freely available video clips to train and test their codecs and algorithms, and most of those clips didn’t look at all like your typical Netflix show. “The content that they were using that was open was not really tailored to the type of content we were streaming,” recalled Aaron. “So, we created content specifically for testing in the industry.”
In 2016, Netflix released a 12-minute 4K HDR short film called Meridian that was supposed to remedy this. Meridian looks like a film noir crime story, complete with shots in a dusty office with a fan in the background, a cloudy beach scene with glistening water, and a dark dream sequence that’s full of contrasts. Each of these shots was crafted to pose specific video encoding challenges, and the entire film has been released under a Creative Commons license. The film has since been used by the Fraunhofer Institute and others to evaluate codecs, and its release has been hailed by Creative Commons as a prime example of “a spirit of cooperation that creates better technical standards.”
Cutting-edge encoding strategies, novel quality metrics, custom-produced video assets, and advanced codecs: in many ways, Netflix has been leading the industry when it comes to delivering the best-looking streams in the most efficient ways to consumers. That’s why the past 14 months have been especially humbling.
Netflix launched its very first livestream in March of 2023, successfully broadcasting a Chris Rock comedy special to its subscribers. A month later, it tried again with a live reunion event for its reality show Love Is Blind — and failed miserably, with viewers waiting for over an hour for the show to start.
The failed livestream was especially embarrassing because it tarnished the image of Netflix as a technology powerhouse that is lightyears ahead of its competition. Netflix co-CEO Greg Peters issued a rare mea culpa later that month. “We’re really sorry to have disappointed so many people,” Peters told investors. “We didn’t meet the standard that we expect of ourselves to serve our members.”
Netflix wants to avoid further such failures, which is why the company is playing it safe and moving slowly to optimize encoding for live content. “We’re quite early into livestreaming,” Aaron said. “For now, the main goals are stability, resilience of the system, and being able to handle the scale of Netflix.” In practice, this means that Aaron’s team isn’t really tweaking encoding settings for those livestreams at all for the time being, even if it forces her to sit through the livestream of the SAG Awards show without being able to improve anything. “We’re starting with a bit more industry-standard ways to do it,” she told me. “And then from there, we’ll optimize.”
The same is true in many ways for cloud gaming. Netflix began to test games on TVs and desktop computers last summer and has since slowly expanded those efforts to include additional markets and titles. With games being rendered in the cloud as opposed to on-device, cloud gaming is essentially a specialized form of livestreaming, apart from one crucial distinction. “They’re quite different,” said Aaron. “[With] cloud gaming, your latency is even more stringent than live.”
Aaron’s team is currently puzzling over different approaches to both problems, which requires them to ignore much of what they’ve learned over the past decade. “The lesson is not to think about it like VOD,” Aaron said. One example: slicing and dicing a video by shot and then applying the optimal encoding setting for every shot is a lot more difficult when you don’t know what happens next. “With live, it’s even harder to anticipate complex scenes,” she said.
Live is unpredictable: that’s not just true for encoding but also for Netflix’s business. The company just inked a deal to show two NFL games on Christmas Day and will begin streaming weekly WWE matches in January. This comes as sports, long the last bastion of cable TV, transitions to streaming. Apple is showing MLS games, Amazon is throwing tons of money at sports, and ESPN, Fox, and Warner Bros. are banding together to launch their own sports streaming service. Keeping up with these competitors doesn’t just require Netflix to spend heavily on sports rights but also to actually get good at livestreaming.
All of this means that Aaron and her team won’t be out of work any time soon — especially since the next challenge is always just around the corner. “There’s going to be more live events. There’s going to be, maybe, 8K, at some point,” she said. “There’s all these other experiences that would need more bandwidth.”
In light of all of those challenges, does Aaron ever fear running out of ways to optimize videos? In other words: how many times can Netflix re-encode its entire catalog with yet another novel encoding strategy, or new codec, before those efforts hit a wall and stop making much of a difference?
“In the codec space, people were saying that 20 years ago,” Aaron said. “In spite of that, we still find areas for improvement. So, I’m hopeful.”
And always eagle-eyed, ready to spot the next visual challenge, whether it’s a sea of camera flashes or a surprise appearance by Mr. Sparklesuit.

Technology
SpaceX Starship explodes again, this time on the ground

Late Wednesday night at about 11PM CT, SpaceX was about to perform a static fire test of Ship 36, ahead of a planned 10th flight test for its Starship, when there was suddenly a massive explosion at the Massey’s Testing Center site. SpaceX says, “A safety clear area around the site was maintained throughout the operation and all personnel are safe and accounted for,” and that there are no hazards to residents in the area of its recently incorporated town of Starbase, Texas.
“After completing a single-engine static fire earlier this week, the vehicle was in the process of loading cryogenic propellant for a six-engine static fire when a sudden energetic event resulted in the complete loss of Starship and damage to the immediate area surrounding the stand,” according to an update on SpaceX’s website. “The explosion ignited several fires at the test site which remains clear of personnel and will be assessed once it has been determined to be safe to approach. Individuals should not attempt to approach the area while safing operations continue.”
The explosion follows others during the seventh, eighth, and ninth Starship flight tests earlier this year. “Initial analysis indicates the potential failure of a pressurized tank known as a COPV, or composite overwrapped pressure vessel, containing gaseous nitrogen in Starship’s nosecone area, but the full data review is ongoing,” SpaceX says. On X, the company called the explosion a “major anomaly.”
Fox 26 Houston says that, according to authorities, there have been no injuries reported. SpaceX also says no injuries have been reported.
This flight test would’ve continued using SpaceX’s “V2” Starship design, which, Musk said in 2023, “holds more propellant, reduces dry mass and improves reliability.” SpaceX is also preparing a new V3 design that, according to Musk, was tracking toward a launch rate of about once a week within roughly 12 months.
Update, June 19th: Added information from SpaceX.
Technology
Quadruped robot plays badminton with you using AI

At ETH Zurich’s Robotic Systems Lab, engineers have created ANYmal-D, a four-legged robot that can play badminton with people.
This project brings together robotics, artificial intelligence and sports, showing how advanced robots can take on dynamic, fast-paced games.
ANYmal-D’s design and abilities are opening up new possibilities for human-robot collaboration in sports and beyond.
ANYmal-D, a four-legged robot that can play badminton with people (ETH Zurich)
How does ANYmal-D play badminton with humans?
Badminton is a game that requires quick footwork, fast reactions, and precise hand-eye coordination. To give a robot a chance on the court, the ETH Zurich team equipped ANYmal-D with four legs for stability and agility, a dynamic arm to swing the racket, and a stereo camera to track the shuttlecock. The robot uses a reinforcement learning-based controller, which allows it to predict and react to the shuttlecock’s movement in real time. ANYmal-D can move around the court, adjust its posture, and time its swings, keeping rallies going with human players for up to 10 shots.
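For a sense of what predicting the shuttlecock’s movement involves, here is a back-of-the-envelope sketch that integrates a shuttlecock’s flight under gravity and aerodynamic drag to estimate where it will land. The drag constant and the hand-coded integration are illustrative assumptions; ETH Zurich’s actual controller is learned, not written like this.

    # Estimate a shuttlecock's landing point under gravity plus quadratic
    # drag. Illustrative physics, not ETH Zurich's learned controller.
    import math

    G = 9.81           # gravity, m/s^2
    V_TERMINAL = 6.8   # rough shuttlecock terminal velocity, m/s (assumption)
    K = G / V_TERMINAL**2  # quadratic drag coefficient per unit mass

    def predict_landing(x, z, vx, vz, dt=0.002):
        """Step a 2D point mass with quadratic drag until it hits the floor."""
        while z > 0.0:
            speed = math.hypot(vx, vz)
            vx -= K * speed * vx * dt
            vz -= (G + K * speed * vz) * dt
            x += vx * dt
            z += vz * dt
        return x

    # Struck at 1.8 m height, moving 15 m/s forward and 8 m/s upward:
    print(f"predicted landing: {predict_landing(0.0, 1.8, 15.0, 8.0):.2f} m away")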
The technology behind ANYmal-D’s badminton skills
ANYmal-D’s stereo camera serves as its eyes, constantly monitoring the shuttlecock. The robot uses a “perception noise model” to compare what it sees with data from its training, helping it track the shuttlecock even when it moves unpredictably. The robot can pitch its body to keep the shuttlecock in view, mimicking how a human player might lean in for a tricky shot.

Unified reinforcement learning for whole-body control
Coordinating legs and an arm is tough for any robot. The ETH Zurich team developed a unified control policy using reinforcement learning, allowing ANYmal-D to move and swing as a coordinated whole. This system was trained in simulation, so the robot learned how to handle a wide range of shots and situations before stepping onto a real court.

Diagram of the ANYmal-D, a four-legged robot that can play badminton with people (ETH Zurich)
Hardware integration: What’s inside the robot?
ANYmal-D combines a sturdy quadrupedal base with the DynaArm, and its racket is set at a 45-degree angle for effective striking. The robot’s state estimation runs at 400 Hz, the control policy updates at 100 Hz, and the perception system operates at 60 Hz. All of this runs on a Jetson AGX Orin module, making the robot responsive and ready for action.
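Those mismatched rates are worth dwelling on: each subsystem ticks on its own clock. Below is a simplified single-loop sketch of that multi-rate scheduling; the task bodies are placeholders, and the real system runs these as concurrent processes rather than one loop.

    # Simplified multi-rate scheduler: 400 Hz estimation, 100 Hz control,
    # 60 Hz perception. Placeholder tasks; the real robot runs these
    # concurrently on its Jetson AGX Orin.
    import time

    RATES_HZ = {"state_estimation": 400, "control_policy": 100, "perception": 60}

    def run_for(seconds):
        next_due = {name: 0.0 for name in RATES_HZ}
        counts = {name: 0 for name in RATES_HZ}
        start = time.perf_counter()
        while (now := time.perf_counter() - start) < seconds:
            for name, hz in RATES_HZ.items():
                if now >= next_due[name]:
                    counts[name] += 1          # placeholder for the real task
                    next_due[name] += 1.0 / hz
            time.sleep(0.0005)  # yield; a hard real-time loop would not sleep
        return counts

    print(run_for(1.0))  # roughly {'state_estimation': 400, 'control_policy': 100, 'perception': 60}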
Challenges of playing badminton with a robot
Getting the robot’s legs and arm to work together smoothly is a major challenge. Most robots handle these tasks separately, but this limits agility. By combining locomotion and arm control into a single system, ANYmal-D can adjust its posture and gait based on the shuttlecock’s path, moving more like a human player.
Active perception: How ANYmal-D sees the game
Robots don’t have human eyes, so their cameras can struggle with frame rates and field of view. ANYmal-D’s perception-aware controller keeps its camera moving smoothly, always tracking the shuttlecock. The perception noise model helps bridge the gap between simulation and real matches, making the robot more reliable during games.
Real-world deployment: Bringing the robot to the court
Bringing ANYmal-D from the lab to the badminton court meant dealing with practical issues like power limits and communication delays. Despite these challenges, the robot managed to keep up with human players, responding to different shot speeds and landing positions, and maintaining rallies that showcased its adaptability and skill.

ANYmal-D’s badminton performance: What did the tests show?
In collaborative games with amateur players, ANYmal-D tracked, intercepted, and returned shuttlecocks with impressive consistency. On average, it took about 0.357 seconds to process the shuttlecock’s trajectory after a human hit, leaving just over half a second to get into position and make the shot. While it didn’t return every shot, the robot’s ability to maintain rallies and adjust to the pace of the game highlights how far robotics has come in dynamic sports scenarios.
Kurt’s key takeaways
ANYmal-D really shows how far robotics has come, especially when it comes to working alongside people in fast-paced activities like badminton. It’s interesting to see a robot not just keeping up on the court, but actually rallying with human players and adapting to the game as it unfolds. As these technologies keep improving, it’s easy to picture more robots joining us in all sorts of sports and activities, making play and teamwork even more fun for everyone.
Technology
FBC: Firebreak is missing Control’s weird charm

With FBC: Firebreak, Remedy Entertainment has entered the world of the first-person co-op shooter. Set in its Control universe — specifically the site of the first game, the brutalist nightmare office called the Oldest House — players control a member of the titular three-person team of the Federal Bureau of Control (FBC), tasked with addressing various containment breaches. Unfortunately, all the aspects that make Remedy’s worlds so intriguing are completely absent in this bare-bones co-op shooter, which offers nothing for either longtime fans or those invested in existing shooters.
Players in Firebreak are like firefighters or disaster responders, with each member occupying a different role: mechanic, water carrier, electrician. Across five recurring levels, teams must work to stop the spread of corruption, called the Hiss (a mysterious red entity that turns people into raging zombies and other types of creatures). Objectives vary from destroying Post-it notes to fixing fans, all while being assailed by swarms of various nightmare monsters.
Control, the central foundation of Remedy’s wider connected universe that also includes Alan Wake, is at its core weird. It’s how Remedy developers have described it — to me and others — allowing for fluctuations between the terrifying, the quirky, the odd, and the hilarious. The Bureau itself is a government agency tasked with containing bizarre items and reacting to huge and strange world events: for example, a traffic light that, when it flashes red, sends people to different locations, or a fridge that eats people if you stop looking at it.
In Remedy’s universe, FBC workers document, monitor, and research these sorts of items with the gray-faced enthusiasm of every bored researcher. The number of times the toy duck teleports needs to be logged just as diligently as the number of coffee filters that need replacing in the break room.
That stone-faced reaction to the weird is only mildly present in Firebreak, with brief interactions with mission provider Hank Wilder, the security chief, detailing bizarre tasks in a slight monotone. Even player character barks demonstrate this. One of the player voice options is called “Pencil Pusher,” who, when receiving friendly fire, screams that such actions “violate office policy.” Health restoration involves characters huddling in a shower together; you can fix equipment by hitting it with a wrench.
As someone obsessed with Control, I was eagerly anticipating a return — particularly in the shoes of ordinary personnel, rather than the almost godlike head of the agency, Jesse Faden (who you play in Control). But that sense of unease that plays off the quirkiness is not here. The Oldest House and its enemies feel like little more than an aesthetic, or even a kind of mod, for a generic co-op shooter. There is no sense of progression, no overarching goal to which you are working. Levels and tasks repeat. There aren’t even creepy big-level bosses, like the terrors in Control, except in one area.
You will have seen all the game has to offer within a few hours, since each level has only three or four stages (with each successive stage in the same level taking you further in), and some stages can be completed within three to four minutes. As an example, one stage involves destroying replicating Post-it notes. Once you have destroyed a sufficient number, you rush back to the elevator as a horde descends. The second stage requires the same objective, only this time you gain access to a second area to destroy more notes. The third stage repeats this, only you go further in and face a boss. All end with rushing back.
While the game offers modifiers — such as harder enemies and corrupting anomalies that can slightly keep you on your toes — the core loop wears thin quickly. I never feel like I’m making any headway in clearing out an entire level, since once it’s cleared, there’s no indication our team made any difference. The only incentive is to obtain better gear. At least the game doesn’t push microtransactions and is quite generous in its rewards, especially on harder difficulties.
Image: Remedy Entertainment
You also level up various roles independently: playing mainly as the mechanic, you will have to start from scratch if you switch to, for example, the electrician role. These roles do feel distinct, as you are given different gear and abilities. The mechanic can almost instantly repair broken equipment, a very useful skill given how many broken machines there are. But the game is filled with various hazards, such as fire and gunk, which the water soaker character — with their water cannon — can negate.
Shooting feels good, but guns are standard: shotguns, machine guns, pistols. Don’t expect weird weapons like the Service Weapon from Control. This is meat-and-potatoes destruction.
That’s precisely what disappointed me: the idea of ordinary workers in a world where fridges eat people is what made me love Control, and the chance to play one of those lowly workers was exciting. Yet that charm is largely absent. I barely felt like part of the FBC, and it never seemed like I was containing anything.
In Control, you would clear rooms and see the game world change permanently. Obviously a co-op shooter can’t do things in the exact same way. But why not tie something like this to the host player? If I have to see the same level three times, progressing further each time, why not show some permanent change from a previous run? There’s no indication the world is reacting to the Firebreak team’s efforts.
In reality, Firebreak feels like one of the multiplayer modes that used to be tacked on to big-budget single-player games (think Mass Effect 3, for example). If players don’t feel like they’re making a difference as part of a team trying to stop an outbreak, why should we bother? The levels are akin to hero-shooter arenas, devoid of the deep lore of a Remedy game. At least with hero shooters, playing against other people keeps play constantly fresh. This felt like it was stale within a few hours, an avocado of a game.
I genuinely don’t know who Firebreak is for. Longtime fans of Control won’t find collectibles, environmental storytelling, or anything to even read. And those looking for meaningful multiplayer shooters have plenty of options already. This is a strange dim light for a studio that usually produces brilliance.
FBC: Firebreak is available now on the PC, PS5, and Xbox Series X / S. It’s also available for Game Pass and PlayStation Plus subscribers.