Technology
Inside Netflix’s bet on advanced video encoding
Anne Aaron just can’t help herself.
Aaron, Netflix’s senior encoding technology director, was watching the company’s livestream of the Screen Actors Guild Awards earlier this year. And while the rest of the world marveled at all those celebrities and their glitzy outfits sparkling in a sea of flashing cameras, Aaron’s mind immediately started to analyze all the associated visual challenges Netflix’s encoding tech would have to tackle. “Oh my gosh, this content is going to be so hard to encode,” she recalled thinking when I recently interviewed her in Netflix’s office in Los Gatos, California.
Aaron has spent the past 13 years optimizing the way Netflix encodes its movies and TV shows. The work she and her team have done allows the company to deliver better-looking streams over slower connections and has resulted in 50 percent bandwidth savings for 4K streams alone, according to Aaron. Netflix’s encoding team has also contributed to industrywide efforts to improve streaming, including the development of the AV1 video codec and its eventual successor.
Now, Aaron is getting ready to tackle what’s next for Netflix: Not content with just being a service for binge-watching, the company ventured into cloud gaming and livestreaming last year. So far, Netflix has primarily dabbled in one-off live events like the SAG Awards. But starting next year, the company will stream WWE RAW live every Monday. The streamer nabbed the wrestling franchise from Comcast’s USA Network, where it has long been the No. 1 rated show, regularly drawing audiences of around 1.7 million viewers. Satisfying that audience week after week poses novel challenges.
“It’s a completely different encoding pipeline than what we’ve had for VOD,” Aaron said, using industry shorthand for on-demand video streaming. “My challenge to [my] team is to get to the same bandwidth requirements as VOD but do it in a faster, real-time way.”
To achieve that, Aaron and her team have to basically start all over and disregard almost everything they’ve learned during more than a decade of optimizing Netflix’s streams — a decade during which Netflix’s video engineers re-encoded the company’s entire catalog multiple times, began using machine learning to make sure Netflix’s streams look good, and were forced to tweak their approach when a show like Barbie Dreamhouse Adventures tripped up the company’s encoders.
When Aaron joined Netflix in 2011, the company was approaching streaming much like everyone else in the online video industry. “We have to support a huge variety of devices,” said Aaron. “Really old TVs, new TVs, mobile devices, set-top boxes: each of those devices can have different bandwidth requirements.”
To address those needs, Netflix encoded each video with a bunch of different bitrates and resolutions according to a predefined list of encoding parameters, or recipes, as Aaron and her colleagues like to call them. Back in those days, a viewer on a very slow connection would automatically get a 240p stream with a bitrate of 235 kbps. Faster connections would receive a 1750 kbps 720p video; Netflix’s streaming quality topped out at 1080p with a 5800 kbps bitrate.
The company’s content delivery servers would automatically choose the best version for each viewer based on their device and broadband speeds and adjust the streaming quality on the fly to account for network slowdowns.
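To make this concrete, here is a minimal sketch of that fixed bitrate ladder and the kind of selection logic an adaptive streaming client applies. The ladder entries are the figures from the article; the selection rule (pick the highest-bitrate rendition that fits the measured bandwidth) is a simplification of real adaptive bitrate algorithms, which also weigh buffer levels and bandwidth variance.

```python
# Early Netflix-style fixed encoding ladder, per the article's figures.
# Each entry is (bitrate in kbps, resolution label).
LADDER = [
    (235, "240p"),    # very slow connections
    (1750, "720p"),   # mid-range connections
    (5800, "1080p"),  # top rung of the early catalog
]

def pick_rendition(bandwidth_kbps):
    """Return the highest-bitrate rendition that fits the measured bandwidth,
    falling back to the lowest rung if nothing fits."""
    best = LADDER[0]
    for bitrate, resolution in LADDER:
        if bitrate <= bandwidth_kbps:
            best = (bitrate, resolution)
    return best

print(pick_rendition(3000))  # (1750, '720p')
```

A client would rerun this selection as its bandwidth estimate changes, which is the “adjust on the fly” behavior described above.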
To Aaron and her eagle-eyed awareness of encoding challenges, that approach seemed inadequate. Why spend the same bandwidth to stream something as visually complex as an action movie with car chases (lots of motion) and explosions (flashing lights and all that noisy smoke) as much simpler visual fare? “You need less bits for animation,” explained Aaron.
My Little Pony, which was a hit on the service at the time, simply didn’t have the same visual complexity as live-action titles. It didn’t make sense to use the same encoding recipes for both. That’s why, in 2015, Netflix began re-encoding its entire catalog with settings fine-tuned per title. With this new, title-specific approach, animated fare could be streamed in 1080p with as little as 1.5 Mbps.
Switching to per-title encoding resulted in bandwidth savings of around 20 percent on average — enough to make a notable difference for consumers in North America and Europe, but even more important as Netflix was eyeing its next chapter: in January of 2016, then-CEO Reed Hastings announced that the company was expanding into almost every country around the world — including markets with subpar broadband infrastructure and consumers who primarily accessed the internet from their mobile phones.
Per-title encoding has since been adopted by most commercial video technology vendors, including Amazon’s AWS, which used the approach to optimize PBS’s video library last year. But while Netflix’s encoding strategy has been wholeheartedly endorsed by streaming tech experts, it has been largely met with silence by Hollywood’s creative class.
Directors and actors like Judd Apatow and Aaron Paul were up in arms when Netflix began to let people change the playback speed of its videos in 2019. Changes to the way it encodes videos, on the other hand, never made the same kinds of headlines. That may be because encoding algorithms are a bit too geeky for that crowd, but there’s also a simpler explanation: the new encoding scheme was so successful at saving bandwidth without compromising on visual fidelity that no one noticed the difference.
Make that almost no one: Aaron quickly realized that the company’s per-title-based encoding approach wasn’t without faults. One problem became apparent to her while watching Barbie Dreamhouse Adventures. It’s one of those animated Netflix shows that was supposed to benefit the most from a per-title approach.
However, Netflix’s new encoding struggled with one particular scene. “There’s this guy with a very sparkly suit and a sparkly water fountain behind him,” said Aaron. The scene looked pretty terrible with the new encoding rules, which made her realize that they needed to be more flexible. “At [other] parts of the title, you need less bits,” Aaron said. “But for this, you need to increase it.”
The solution to this problem was to get a lot more granular during the encoding process. Netflix began to break down videos by shots and apply different encoding settings to each individual segment in 2018. Two people talking in front of a plain white wall were encoded with lower bitrates than the same two people taking part in a car chase; Barbie hanging out with her friends at home required less data than the scene in which Mr. Sparklesuit shows up.
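The per-shot idea can be sketched as follows: once a title is segmented by shot, each shot’s bitrate is scaled to its visual complexity. This is an illustration, not Netflix’s actual pipeline — the complexity scores and the base/ceiling numbers below are invented, and real systems derive complexity from trial encodes rather than hand-assigned scores.

```python
def bitrate_for_shot(complexity, base_kbps=1500, max_kbps=5800):
    """Scale bitrate with shot complexity.

    complexity: 0.0 for a static, simple scene up to 1.0 for a very busy one.
    Values above 1.0 are clamped to the ceiling.
    """
    return min(max_kbps, int(base_kbps + complexity * (max_kbps - base_kbps)))

# Hypothetical shots with hand-assigned complexity scores, for illustration.
shots = [
    ("dialogue against a plain wall", 0.1),
    ("car chase", 0.9),
    ("sparkly suit and fountain", 1.0),
]

plan = {name: bitrate_for_shot(score) for name, score in shots}
```

The payoff is the asymmetry the article describes: quiet shots give bits back, and the savings can be spent on the handful of shots, like Mr. Sparklesuit’s, that genuinely need them.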
As Netflix adopted 4K and HDR, those differences became even more stark. “[In] The Crown, there’s an episode where it’s very smoky,” said Aaron. “There’s a lot of pollution. Those scenes are really hard to encode.” In other words: they require more data to look good, especially when shown on a big 4K TV in HDR, than less visually complex fare.
Aaron’s mind never stops looking for those kinds of visual challenges, no matter whether she watches Netflix after work or goes outside to take a walk. This has even caught on with her kids, with Aaron telling me that they occasionally point at things in the real world and shout: “Look, it’s a blur!”
It’s a habit that comes with the job and a bit of a curse, too — one of those things you just can’t turn off. During our conversation, she picked up her phone, only to pause and point at the rhinestone-bedazzled phone case. It reminded her of that hard-to-encode scene from Barbie Dreamhouse Adventures. Another visual challenge!
Still, even an obsessive mind can only get you so far. For one thing, Aaron can’t possibly watch thousands of Netflix videos and decide which encoding settings to apply to every single shot. Instead, her team compiled a few dozen short clips sourced from a variety of shows and movies on Netflix and encoded each clip with a range of different settings. They then let test subjects watch those clips and grade the visual imperfections from not noticeable to very annoying. “You have to do subjective testing,” Aaron said. “It’s all based on ground truth, subjective testing.”
The insights gained this way have been used by Netflix to train a machine learning model that can analyze the video quality of different encoding settings across the company’s entire catalog, which helps to figure out the optimal settings for each and every little slice of a show or movie. The company collaborated with the University of Southern California on developing these video quality assessment algorithms and open-sourced them in 2016 as the VMAF metric. Since then, VMAF has been adopted by much of the industry as a way to analyze streaming video quality and has even earned Netflix an Emmy Award. All the while, Aaron and her team have worked to keep pace with Netflix’s evolving needs — like HDR.
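The “ground truth” approach boils down to regression: measure something objective about an encode, collect the mean opinion scores human viewers assigned, and fit a model mapping one to the other. The toy below fits a one-feature linear model with ordinary least squares; the data points are invented for illustration, and real metrics like Netflix’s fuse several features with machine learning rather than a single line.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b with a single feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical (objective measurement, mean opinion score) pairs
# from a panel study; both columns are made up for illustration.
psnr = [30.0, 35.0, 40.0, 45.0]
mos = [2.0, 3.0, 4.0, 5.0]

a, b = fit_linear(psnr, mos)

def predict_quality(metric_value):
    """Predict a subjective score for an encode we never showed to humans."""
    return a * metric_value + b
```

Once fitted, the model can grade every shot of every encode in the catalog — something no panel of human viewers could do — which is exactly why the subjective tests only need to cover a few dozen representative clips.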
“We had to develop yet another metric to measure the video quality for HDR,” Aaron said. “We had to run subjective tests and redo that work specifically for HDR.” This eventually allowed Netflix to encode HDR titles with per-shot-specific settings as well, which the company finally did last year. Now, her team is working on open-sourcing HDR-based video quality assessment.
Slicing up a movie by shot and then encoding every slice individually to make sure it looks great while also saving as much bandwidth as possible: all of this work happens independently of the video codecs Netflix uses to encode and compress these files. It’s kind of like how you might change the resolution or colors of a picture in Photoshop before deciding whether to save it as a JPEG or a PNG. However, Netflix’s video engineers have also actively been working on advancing video codecs to further optimize the company’s streams.
Netflix is a founding member of the Alliance for Open Media, whose other members include companies like Google, Intel, and Microsoft. Aaron sits on the board of the nonprofit, which has spearheaded the development of the open, royalty-free AV1 video codec. Netflix began streaming some videos in AV1 to Android phones in early 2020 and has since expanded to select smart TVs and streaming devices as well as iPhones. “We’ve encoded about two-thirds of our catalog in AV1,” Aaron said. The percentage of streaming hours transmitted in AV1 is “in the double digits,” she added.
And while the roll-out of AV1 continues, work is already underway on its successor. It might take a few more years before devices actually support that next-gen codec, but early results suggest that it will make a difference. “At this point, we see close to 30 percent bit rate reduction with the same quality compared to AV1,” Aaron explained. “I think that’s very, very promising.”
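As back-of-the-envelope arithmetic, a 30 percent bitrate reduction at equal quality compounds directly into bandwidth savings. The starting figure below reuses the article’s old 5800 kbps 1080p top rung purely as a stand-in; the 30 percent number is the reported early result for the next-gen codec relative to AV1.

```python
baseline_bitrate_kbps = 5800   # stand-in figure from the article's old ladder
reduction = 0.30               # reported next-gen improvement vs. AV1

# Same perceived quality at 30 percent fewer bits.
next_gen_bitrate_kbps = baseline_bitrate_kbps * (1 - reduction)
```

At catalog scale, that multiplier applies to every stream served, which is why even a single-digit codec improvement is worth years of standardization work — and 30 percent is far from single-digit.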
While contributing to the development of new video codecs, Aaron and her team stumbled across another pitfall: video engineers across the industry have been relying on a relatively small corpus of freely available video clips to train and test their codecs and algorithms, and most of those clips didn’t look at all like your typical Netflix show. “The content that they were using that was open was not really tailored to the type of content we were streaming,” recalled Aaron. “So, we created content specifically for testing in the industry.”
In 2016, Netflix released a 12-minute 4K HDR short film called Meridian that was supposed to remedy this. Meridian looks like a film noir crime story, complete with shots in a dusty office with a fan in the background, a cloudy beach scene with glistening water, and a dark dream sequence that’s full of contrasts. Each of these shots was crafted to pose specific encoding challenges, and the entire film has been released under a Creative Commons license. The film has since been used by the Fraunhofer Institute and others to evaluate codecs, and its release has been hailed by the Creative Commons foundation as a prime example of “a spirit of cooperation that creates better technical standards.”
Cutting-edge encoding strategies, novel quality metrics, custom-produced video assets, and advanced codecs: in many ways, Netflix has been leading the industry when it comes to delivering the best-looking streams in the most efficient ways to consumers. That’s why the past 14 months have been especially humbling.
Netflix launched its very first livestream in March of 2023, successfully broadcasting a Chris Rock comedy special to its subscribers. A month later, it tried again with a live reunion event for its reality show Love Is Blind — and failed miserably, with viewers waiting for over an hour for the show to start.
The failed livestream was especially embarrassing because it tarnished the image of Netflix as a technology powerhouse that is light-years ahead of its competition. Netflix co-CEO Greg Peters issued a rare mea culpa later that month. “We’re really sorry to have disappointed so many people,” Peters told investors. “We didn’t meet the standard that we expect of ourselves to serve our members.”
Netflix wants to avoid further such failures, which is why the company is playing it safe and moving slowly to optimize encoding for live content. “We’re quite early into livestreaming,” Aaron said. “For now, the main goals are stability, resilience of the system, and being able to handle the scale of Netflix.” In practice, this means that Aaron’s team isn’t really tweaking encoding settings for those livestreams at all for the time being, even if it forces her to sit through the livestream of the SAG Awards show without being able to improve anything. “We’re starting with a bit more industry-standard ways to do it,” she told me. “And then from there, we’ll optimize.”
The same is true in many ways for cloud gaming. Netflix began to test games on TVs and desktop computers last summer and has since slowly expanded those efforts to include additional markets and titles. With games being rendered in the cloud as opposed to on-device, cloud gaming is essentially a specialized form of livestreaming, with one crucial distinction. “They’re quite different,” said Aaron. “[With] cloud gaming, your latency is even more stringent than live.”
Aaron’s team is currently puzzling over different approaches to both problems, which requires them to ignore much of what they’ve learned over the past decade. “The lesson is not to think about it like VOD,” Aaron said. One example: slicing and dicing a video by shot and then applying the optimal encoding setting for every shot is a lot more difficult when you don’t know what happens next. “With live, it’s even harder to anticipate complex scenes,” she said.
Live is unpredictable: that’s not just true for encoding but also for Netflix’s business. The company just inked a deal to show two NFL games on Christmas Day and will begin streaming weekly WWE matches in January. This happens as sports as a whole, which has long been the last bastion of cable TV, is transitioning to streaming. Apple is showing MLS games, Amazon is throwing tons of money at sports, and ESPN, Fox, and Warner Bros. are banding together to launch their own sports streaming service. Keeping up with these competitors doesn’t just require Netflix to spend heavily on sports rights but also actually get good at livestreaming.
All of this means that Aaron and her team won’t be out of work any time soon — especially since the next challenge is always just around the corner. “There’s going to be more live events. There’s going to be, maybe, 8K, at some point,” she said. “There’s all these other experiences that would need more bandwidth.”
In light of all of those challenges, does Aaron ever fear running out of ways to optimize videos? In other words: how many times can Netflix re-encode its entire catalog with yet another novel encoding strategy, or new codec, before those efforts are poised to hit a wall and won’t make much of a difference anymore?
“In the codec space, people were saying that 20 years ago,” Aaron said. “In spite of that, we still find areas for improvement. So, I’m hopeful.”
And always eagle-eyed to spot the next visual challenge, whether it’s a sea of camera flashes or a surprise appearance by Mr. Sparklesuit.
US arrests soldier who allegedly made $400k on Maduro Polymarket bets
On or about January 6, 2026, for example, VAN DYKE asked Polymarket to delete his Polymarket account, falsely claiming that he had lost access to the email address to which the account had been associated. That same day, VAN DYKE changed the email registered to his cryptocurrency exchange account to an email address that was not subscribed to in his name, which email address was created on or about December 14, 2025.
How Florida retiree lost $200K in fake PayPal refund scam
Brian Oliver is retired, sharp and financially savvy enough to have a stock-and-bond portfolio worth hundreds of thousands of dollars. He is not the type of person you picture getting scammed. That is exactly why scammers picked him.
What happened to Oliver, 85, is the kind of story that makes your jaw drop, and your stomach turn at the same time. It started with a routine-looking email and ended with a box of gold coins rolling away in the back of a black Mustang. In between, Oliver lost $200,000, nearly half of his retirement savings.
He told his story on my Beyond Connected podcast at getbeyondconnected.com, along with Detective Justin Torres of the Gainesville Police Department in Florida. What they shared together is equal parts chilling and clarifying.
Sign up for my FREE CyberGuy Report
- Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox.
- For simple, real-world ways to spot scams early and stay protected, visit CyberGuy.com – trusted by millions who watch CyberGuy on TV daily.
- Plus, you’ll get instant access to my Ultimate Scam Survival Guide free when you join.
Brian Oliver shares how a routine-looking email pulled him into a sophisticated refund scam that cost him $200,000. (Sebastian Gollnow/picture alliance)
It all started with a PayPal refund scam email
Brian got an email that said PayPal owed him money. It was not a wild claim. He had dealt with PayPal before and figured, “Maybe they found some money for me.” So he responded. The email included a phone number, and that number connected him to a man who called himself Andrew Johnson.
“Yeah, we have $450 for you. Type in the number 100 on your computer and we’ll get it started.”
Brian typed 100. Andrew immediately said he had made a mistake: “Oh no, you put in 10,000.”
Brian pushed back. He said he did not type 10,000. Andrew told him to check his Bank of America account. Brian opened it, and there it was: $10,000 sitting in his checking account.
Except it was not real. The scammers had somehow mirrored his bank’s website. What Brian saw looked exactly like his actual Bank of America page, complete with a new balance and a phone number embedded in the “Contact Us” section. That number was fake, too.
Brian called it. A man named Josh answered, identifying himself as a Bank of America representative. He told Brian that the only way to return the money without triggering a $3,500 tax penalty was to withdraw $10,000 in cash and feed it into a crypto ATM.
How the PayPal refund scam tricked Brian
Oliver had never heard of a crypto ATM before that day. Josh helpfully told him exactly where to find one. It was in a sketchy part of town, and Oliver walked in carrying $10,000 in his pocket.
“I’m on my knees, on a cement floor, and I’m 85,” Oliver said.
He fed one hundred $100 bills into the machine, bill by bill, watching over his shoulder the entire time. Some bills got kicked back out. He fed them in again. When the machine finally accepted all of them, he photographed the receipt and sent it to Andrew Johnson, just as he had been instructed.
Then Oliver went home and told Andrew it was done. Andrew told him they still had to take care of his refund. He told Oliver to type in the number 200.
Oliver typed it. Andrew’s response came fast: “Oh my God, my boss is going to kill me. It’s $200,000 we’ve transferred to your account.”
This type of scam is becoming more common, and it often involves criminals impersonating trusted platforms like PayPal.
“PayPal does not tolerate fraudulent activity, and we work hard to protect our customers from evolving phishing scams,” a spokesperson for PayPal told CyberGuy. “We always encourage consumers to learn how to spot the warning signs of common fraud, including our tips on the PayPal Newsroom for identifying phishing emails that attempt to impersonate trusted brands. We further recommend contacting Customer Support for assistance through official channels such as the PayPal app and our Contact Us webpage, and never responding to suspicious, unexpected emails.”
How the scam escalated to $200,000 in gold
Oliver opened his bank account again. The fake mirrored site showed $200,000 sitting there. Josh Wilson was back on the phone with a new plan. This time, the crypto ATM would not work because the amount was too large. Oliver needed to liquidate $200,000 from his stock and bond portfolio, convert it to cash and use it to buy gold coins.
Oliver protested. He told them to just reverse the transfer. They said it was impossible.
“This is my retirement money. 50% of my retirement money,” he said.
The scammers told him not to breathe a word to anyone. Josh specifically warned him that telling his broker the truth could trigger tax problems. So Oliver called his broker and said he had his eye on a piece of real estate he wanted to flip. The broker processed the sale without question.
Oliver went to a gold coin store, wrote a check for $198,560 and waited two to three days for it to clear. Andrew Johnson stayed in regular contact the entire time.
When the gold was ready, Johnson gave Oliver one final instruction. A courier would come to his door to pick up the box. Before handing it over, Oliver should ask the courier for a password. The password was “blue.”
The courier arrived. He was driving a black Mustang. He said the word blue. Oliver handed over the box.
“He told me the password,” Oliver said. “I handed the box, and off went my $200,000.”
The moment Brian Oliver realized it was all a scam
The day after the courier left, Andrew Johnson called back with urgency. He told Brian Oliver another $200,000 had landed in his account, and they needed to do the whole thing over again. That was the moment it broke.
“That’s when I came out from under the ether of this scam,” Oliver said. “And I said, this cannot be right.”
He immediately called the Gainesville Police Department.
The high-stakes sting that brought down a scam courier
Detective Justin Torres of the Gainesville Police Department took the call and started working the case immediately. The scammers had asked Oliver for photos of the gold and the purchase receipt, which gave law enforcement about a day and a half to set up an operation before the courier was scheduled to return.
Detective Torres pulled in four officers from the department’s Gun Violence Initiative unit, a team of intermediate detectives trained for exactly this kind of boots-on-ground work. They set up covert and marked vehicles around Oliver’s residence at a careful distance.
“It was pretty high intensity because I’m listening to Mr. Oliver’s conversation with Andrew,” Torres said. “And I’m also trying to be a good distance away to listen to my radio and be able to broadcast what I need to, to the other officers on the outside.”
The scammers were suspicious. They kept pushing Oliver to be more compliant. Oliver pushed back. The goal was to keep them on the line long enough for the courier to show up. The courier, a man named Seth Wayne, drove in from Tampa. The officers waited. When he arrived, they arrested him. The case went to trial. Seth Wayne received an 18-year prison sentence.
A federal jury has since convicted a second courier in the same scheme. Atharva Shailesh Sathawane, 22, an undocumented immigrant from India, was found guilty of conspiracy to commit wire fraud and money laundering, with Brian Oliver among his victims.
Sathawane was arrested after the Gainesville Police Department set up a second sting operation at Brian’s home. Court documents showed Sathawane was involved in more than 30 transactions across multiple states, contributing to nearly $8 million stolen from elderly victims. He faces up to 20 years on each count, with sentencing scheduled for Dec. 16 in Gainesville, though he is appealing his conviction.
How refund scams are hitting multiple victims
The scam began with a convincing message and quickly escalated as criminals guided Brian Oliver step by step through fake account activity. (Halfpoint/iStock/Getty Images)
Ten other victims testified at Seth Wayne’s trial. They had come from all over the state of Florida, and their stories made Oliver furious.
Some had received fake arrest warrants, official-looking documents claiming their identities had been tied to gun running. They were told the only way to clear their names was to pull their savings and buy gold, which would be placed in a special locker in Washington, D.C., until their names were cleared.
One victim lost $1.8 million. Another lost $4.9 million. A third woman lost over $1 million across two separate pickups by the same courier. Her husband was in hospice care in Florida while all of this was happening. She drained her entire life savings, sold her condo and had to move in with her daughter and son-in-law in Alabama, leaving her dying husband behind.
Where the money from refund scams actually goes
Once the gold or cash leaves a victim’s hands, recovery is nearly impossible. Most of Seth Wayne’s deliveries went to parking lots at McDonald’s or shopping centers, where he handed the money directly to a controller. One pickup went to a jewelry store, where an employee came outside to collect it. That connection is still under active investigation by the IRS and FBI.
The call centers running these operations are overseas. Higher-level couriers in the United States are still being investigated. The full network is, as Detective Torres put it, “very intricate” and “very complicated.”
Seth Wayne himself was a mid-to-upper-level courier. He was also paying other couriers and compensating his handler. When investigators downloaded his cell phone after a judge-approved search warrant, they found evidence that he had researched exactly what he was doing before deciding the money was worth the risk.
The defense of “willful blindness,” the idea that a courier can claim ignorance and escape responsibility, no longer holds up in Florida courts. Seth Wayne found that out the hard way.
For a deeper look at what Oliver went through, you can hear the full story on my Beyond Connected podcast at getbeyondconnected.com.
How to stay safe from refund scams
Detective Torres laid out the most important red flags clearly, and Oliver added a few from painful personal experience. Here is what both of them want you to know.
1) Hang up on urgency
Scammers manufacture pressure because it works. If someone on the phone is telling you that you must act right now, that is not a real emergency. That is a tactic. Torres put it directly: “They want to make you believe that you have to do all this right now.”
2) Never call the number they give you
If someone calls claiming to be from PayPal, your bank or a law enforcement agency, hang up and find the real number yourself. The number embedded in Oliver’s fake bank website looked completely legitimate. It was not.
3) Pause for ten seconds
Literally ten seconds. Detective Torres confirmed what many security experts say: “If you pause these scams for just 10 seconds, many of them will just fall apart.” A scammer who is pushed back even slightly will often overreact, and that reaction will feel wrong.
4) Isolation is the biggest red flag
The moment someone on the phone tells you not to tell a family member, friend or neighbor what is happening, stop. That instruction exists for one reason: to prevent you from getting help before they get your money. “Once you start hearing that isolation conversation, that is the biggest red flag,” Torres said. “You need to hang up the phone.”
5) Gold is always a scam signal
Oliver made this one simple: “If you’re told to go buy gold, the only reason they tell you to buy gold is because it can never be traced. It’s a scam.” No legitimate company, government agency or financial institution will ever ask you to buy gold coins and hand them to a stranger.
6) The courier at your door means stop
If you have already bought gold and someone is coming to your home to pick it up in a box, Oliver’s advice is direct: “Stop right there. It’s a scam.”
7) Never move money to fix a ‘mistake’
If someone claims they accidentally sent you money and asks you to return it, stop right there. Real companies fix errors on their own systems. They will not ask you to withdraw cash, buy crypto or purchase gold to correct a transaction.
8) Verify your account on your own device
If you need to check your bank account, use your official banking app or type the website yourself. Do not trust links, screens or phone numbers provided during a call. In many cases, scammers create fake sites that look identical to the real thing.
9) Be wary of step-by-step instructions
Scammers often stay on the phone and guide you through every move. That level of control should raise concern. Legitimate companies do not walk you through withdrawing cash, using crypto ATMs or buying gold to solve a problem.
10) Bring in a second person
Before moving a large amount of money, pause and call someone you trust. A quick conversation with a family member or friend can shift your perspective. In many cases, that outside voice is enough to stop a scam in progress.
11) Limit how much of your information is online
Scammers build convincing stories using real details they find online. This can include your phone number, home address or financial history. To reduce that risk, consider removing your information from data broker and people-search sites. While you can do this manually, it often takes time, which is why some people use a data removal service such as Incogni to help automate the process and keep their information from resurfacing.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
Scammers often operate behind the scenes, using technology and social engineering to manipulate victims into handing over cash or valuables. (Paul Chinn/The San Francisco Chronicle/Getty Images)
Kurt’s key takeaways
Brian Oliver lost $200,000, leaving him with only half of his retirement savings. Today, he says he is slowly sinking toward bankruptcy, and the odds of getting that money back are slim. Even so, he chose to go public so others could hear his story before it happens to them. What makes this case different is that it led to real consequences. Detective Torres and his team moved quickly and set up a sting operation. As a result, they arrested a courier who later received an 18-year prison sentence. Meanwhile, the IRS and FBI are still investigating the larger network. However, this kind of outcome is rare. In most cases, victims lose everything and never see justice. These scams are complex, often run from overseas, and are designed to move money fast. Because of that, law enforcement usually focuses on the people closest to the victim and works backward. In the end, Oliver’s turning point came during a second demand for money. At that moment, something felt off, so he paused. Then he said, “This cannot be right.” That instinct matters. In many cases, that brief pause is enough to break the scam.
If you were in Oliver’s position, at what exact moment do you think you would have stopped, and what would it have taken for you to make that call? Let us know by writing to us at Cyberguy.com.
Copyright 2026 CyberGuy.com. All rights reserved.
Technology
BEWARE SOFTWARE BRAIN
Today on Decoder, I want to lay out an idea that’s been banging around my head for weeks now as we’ve been reporting on AI and having conversations here on this show. I’ve been calling it software brain, and it’s a particular way of seeing the world that fits everything into algorithms, databases and loops — software.
Software brain is powerful stuff. It’s a way of thinking that basically created our modern world. Marc Andreessen, the literal embodiment of software brain, called it in 2011 when he wrote the piece “Why software is eating the world” as an op-ed in The Wall Street Journal. But software thinking has been turbocharged by AI in a way that I think helps explain the enormous gap between how excited the tech industry is about the technology and how regular people are growing to dislike it more and more over time.
In fact, the polling on this is so strong, I think it’s fair to say that a lot of people hate AI. And Gen Z in particular seems to hate AI more and more as they encounter it. There’s that NBC News poll showing AI with worse favorability than ICE and only a little bit above the war in Iran and the Democrats generally. That’s with nearly two thirds of respondents saying they used ChatGPT or Copilot in the last month. Quinnipiac just found that over half of Americans think AI will do more harm than good, while more than 80 percent of people were either very concerned or somewhat concerned about the technology. Only 35 percent of people were excited about it.
Poll after poll shows that Gen Z uses AI the most and has the most negative feelings about it. A recent Gallup poll found that only 18 percent of Gen Z was hopeful about AI, down from an already-bad 27 percent last year. At the same time, anger is growing: 31 percent of those Gen Z respondents said they feel angry about AI, up from 22 percent last year.
Now, I obviously talk to a lot of tech executives and policy people here on Decoder, and I will tell you, they all know AI isn’t popular, and they can all see how that’s playing out in real life. Here’s Microsoft CEO Satya Nadella talking about how the tech industry needs to make the case for the investments it’s making in AI:
Satya Nadella: At the end of the day, I think this industry, to which I belong, needs to earn the social permission to consume energy because we’re doing good in the world.
I think it’s safe to say that the tech industry and AI have not earned any of that social permission yet. Politicians from both sides of the aisle are opposing data center buildouts. Politicians in local communities that support data centers are getting voted out of office. And in the most depressing reminder of how much political violence has become a part of everyday American life, politicians who’ve supported data centers have had their houses shot at. OpenAI CEO Sam Altman has had Molotov cocktails thrown at his house.
It’s sad that I’m going to have to say this again on the show, and it’s sad that we’re going to have commenters who disagree, but this violence is unacceptable. If you want to meaningfully oppose AI in a way that lasts, you should speak loudly with your dollars in the market and your attention online, and you should speak loudly with your votes. You should participate in a democratic regulatory and political process. Anything else will get dismissed and perpetuate the cycle. That dismissal is already happening.
I also think it’s incredibly important for our politicians and tech executives to make sure our political process makes people feel empowered, not helpless, which is a specific kind of nihilism they have all greatly contributed to. The violence is a result of that helplessness and nihilism. And the most powerful people in our society ought to reckon with that, especially as they run around saying AI will wipe out all the jobs. I’m not even exaggerating this. Here’s Anthropic CEO Dario Amodei saying he thinks AI will wipe out all the jobs:
Dario Amodei: Entry-level jobs in areas like finance, consulting, tech and many other areas like that — entry-level white-collar work — I worry that those things are going to be first augmented, but before long replaced by AI systems. We may indeed — it’s hard to predict the future — but we may indeed have a serious employment crisis on our hands as the pipeline for this early-stage, white-collar work starts to contract and dry up.
What I see when I encounter clips like this is the true gap between the tech industry and regular people when it comes to AI — and also the limit of software brain. Like I said, everyone in tech understands how much regular people dislike AI. What I think they’re missing is why. They think this is a marketing problem. OpenAI just spent $200 million on the TBPN podcast because the company thinks it will help make people like AI more. Sam Altman has said so explicitly:
Sam Altman: Oh, they are genius marketers and I would love to have better marketing. Somebody said to me recently that if AI were a political candidate, it would be the least popular political candidate in history. And given the amazing things AI can do, I think there’s got to be better marketing for AI.
It feels like someone just needs to say this clearly, so I’m just going to do it. AI doesn’t have a marketing problem. People experience these tools every single day. ChatGPT has 900 million weekly users, trending to a billion, and everyone has seen AI Overviews in Google Search and massive amounts of slop on their feeds. You can’t advertise people out of reacting to their own experiences. This is a fundamental disconnect between how tech people with software brains see the world and how regular people are living their lives.
So what is software brain? The simplest definition I’ve come up with is that it’s when you see the whole world as a series of databases that can be controlled with structured language and software code. Like I said, this is a powerful way of seeing things. So much of our lives run through databases, and a bunch of important companies have been built around maintaining those databases and providing access to them.
Zillow is a database of houses. Uber is a database of cars and riders. YouTube is a database of videos. The Verge’s website is a database of stories. You can go on and on and on. Once you start seeing the world as a bunch of databases, it’s a small jump to feeling like you can control everything if you can just control the data.
But that doesn’t always work. Here’s an example: Elon Musk and DOGE showed up in the government, and the first thing they did was take control of a bunch of databases. And they ran into the undeniable fact that the databases aren’t reality, and DOGE ended in hilarious failure. It turns out software brain has a limit, and the government isn’t software. People aren’t computers, and they don’t live in automatable loops that can be neatly captured in databases.
Anyone who’s actually ever run a database knows this. At some point, the database stops matching reality. And at that point, we usually end up tweaking the database, not the world. The AI industry has fully lost sight of this. AI thrives on data. It’s just software. And so the ask is for more and more of us to conform our lives to the database, not the other way around.
Let me offer you another example that I think about all the time, especially as AI finds real fit as a business tool. It’s the idea that AI is coming for lawyers and the legal system. The AI industry loves to talk about not needing lawyers anymore, which is already getting all kinds of people into all kinds of trouble. But I get it. I’ve spent a lot of time with lawyers. I used to be a lawyer. My wife is still a lawyer. Some of my best friends are lawyers.

I also spend all of my time at work talking to tech people. And so over time, I’ve learned that the overlap between software brain and lawyer brain is very, very deep. Alluringly deep. If the heart of software brain is the idea that thinking in the structured language of code can make things happen in the real world, well, the heart of lawyer brain is that thinking in the structured legal language of statutes and citations can also make things happen. Hell, it can give you power over society.
There are other commonalities. Both software development and the law depend heavily on precedent. We have a body of case law in this country, and we use it over and over again to help us resolve disputes, much like software engineers have libraries of code that they turn to repeatedly to build the foundations of their products. I can go on.
At the end of the day, both lawyers and engineers do their best to use formal, structured language to guide the behavior of complicated systems in predictable and potentially profitable ways. I am far from the first person with this idea. Larry Lessig wrote a book called Code and Other Laws of Cyberspace in 2000. It’s just as relevant today as it was a quarter century ago.
And so you have this intoxicating similarity between law and code, and it trips people up all the time. People are constantly trying to issue commands to society at large like it’s a computer that will obey instructions. There are examples of this big and small. My favorite are those Facebook forwards insisting Mark Zuckerberg does not have the right to publish people’s photos. Honestly, I look at these, and I think it would be great if the law was actually code. Maybe things would be more predictable. Maybe we’d feel more in control.
But law isn’t actually code, and society and courts aren’t computers. I have to remind our fairly technical audience on Decoder and at The Verge all the time that the law is not deterministic. You simply cannot take the facts of a case, the law as written, and predict the outcome of that case with any real certainty, even though the formality of the legal system makes people think it works like a computer, that it’s predictable.
Because at the end of the day, it’s actually ambiguity that’s at the very heart of our legal system. It’s ambiguity that makes lawyers lawyers. Honestly, it’s ambiguity that makes people hate lawyers because it’s always possible to argue the other side, and it’s always possible to find the gray area in the law. That’s why prosecutors end up working as defense attorneys and why our regulators tend to end up working for big corporations.
So you can see the obvious collision between software brain and lawyer brain. This thing that looks like a computer isn’t actually anything at all like a computer. A lot of people even argue that the law should be more like a computer, that the system should be verifiable and consistent, and that merely issuing the right commands at the right times should lead to objectively correct outcomes.
Bridget McCormack, who used to be the chief justice of the Michigan Supreme Court, was on Decoder a few months ago pitching a fully automated AI arbitration system. Her argument to me was that people perceive the traditional legal system to be so unfair, they will accept a worse outcome from an automated system as more fair as long as they feel heard. And if there’s one thing AI can do, it’s sit there and listen all day and night. I don’t know if any of that is correct or even workable, but I do know software brain, and that is pure software brain. The idea that we can force the real world to act like a computer and then have AI issue that computer instructions.
You can see the same thing happening in every other kind of industry. You don’t hire a big consulting firm to actually come in and study your business and make it more efficient. You hire them to make slide decks that justify layoffs to your board and shareholders. Big consulting firms are great at this, and now they’re just going to generate those decks with AI. They are already doing this and the layoffs have already begun.
Any business process that looks like code talking to a database in a repetitive way is up for grabs. That’s why Anthropic has been so relentlessly focused on enterprise customers, and it’s why OpenAI is now pivoting to business use. There’s real value in introducing AI to business because so much of modern business is already software, collecting data, analyzing it, and taking action on it over and over again in a loop. Businesses also control their data, and they can demand that all their databases work together. In this way, software brain has ruled the business world for a long time. And AI has made it easier than ever for more people to make more software than ever before, for every kind of business to automate big chunks of itself with software. The absolute cutting edge of advertising and marketing is automation with AI. It’s not being in creative.
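That collect-analyze-act loop is easy to caricature in a few lines of Python. This is purely an illustrative sketch of the pattern the passage describes; every name, number and threshold here is hypothetical, not any real system’s API:

```python
# A caricature of the "collect, analyze, act" loop: read data from a
# store, reduce it to a decision, and act on that decision, repeatedly.
# All names (fetch_metrics, decide, apply_action) are made up.

def fetch_metrics(store):
    """Collect: read the latest records from some datastore."""
    return store[-10:]  # last ten load samples

def decide(rows):
    """Analyze: reduce the data to a single decision."""
    avg = sum(rows) / len(rows)
    return "scale_up" if avg > 0.8 else "hold"

def apply_action(action, log):
    """Act: here we only record the action; a real system would
    call out to some service instead."""
    log.append(action)

def run_once(store, log):
    apply_action(decide(fetch_metrics(store)), log)

# Usage: three iterations of the loop over synthetic load numbers.
store, log = [0.9] * 10, []
for _ in range(3):
    run_once(store, log)
# log now holds three "scale_up" decisions
```

Anything shaped like this loop — data in a store, a rule over the data, an action fed back into the world — is exactly what software brain sees everywhere, and exactly what AI is being sold to automate.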
But not everything is a business, not everything is a loop, and the entire human experience cannot be captured in a database. That’s the limit of software brain. That’s why people hate AI. It flattens them. Regular people don’t see the opportunity to write code as an opportunity at all. The people do not yearn for automation. I’m a full-on smart home sicko; the lights and shades and climate controls of this house are automated in dozens of ways. But huge companies like Apple, Google and Amazon have struggled for over a decade now to make regular people care about smart home automation at all. And they just don’t.
AI isn’t going to fix that. Most people are not collecting data about every single thing that they do. And if they’re collecting any at all, it’s stored across lots of different systems — your email in Gmail, your messages in iMessage, your work schedule in Outlook, your workouts in Peloton. Those systems don’t talk to each other and maybe they never will, because there’s no reason for them to. And asking people to connect them all freaks them out.
Even taking the time to consider how much of your life is captured in databases makes people unhappy. No one wants to be surveilled constantly, and especially not in a way that makes tech companies even more powerful. But getting everything in a database so software can see it is a preoccupation of the AI industry. It’s why all the meeting systems have AI note takers in them now. It’s why Canva, which is design software, now connects to corporate email systems. My friend Ezra Klein just went to Silicon Valley, and he described the people that are actively trying to flatten themselves into a database:
Ezra Klein: You might think that A.I. types in Silicon Valley, flush with cash, are on top of the world right now. I found them notably insecure. They think the A.I. age has arrived and its winners and losers will be determined, in part, by speed of adoption. The argument is simple enough: The advantages of working atop an army of A.I. assistants and coders will compound over time, and to begin that process now is to launch yourself far ahead of your competition later. And so they are racing one another to fully integrate A.I. into their lives and into their companies. But that doesn’t just mean using A.I. It means making themselves legible to the A.I.
You can give it access to everything that’s there: your files, your email, your calendar, your messages. It operates continuously in the background, building a persistent memory of your preferences and patterns so it can better act on your behalf. The cybersecurity risks are glaring, but there’s a reason millions of people are using it: The more of your life you open to A.I., the more valuable the A.I. becomes.
I’ve reviewed a lot of tech products over the past decade and a half, and all I can tell you is that it is a failure when you ask people to adapt to computers. Computers should adapt to people. And asking people to make themselves more legible to software, to turn themselves into a database, is a doomed idea. It’s an ask so big, I can’t imagine a reward that would make it worth it for anyone, even if the tech industry wasn’t constantly talking about how AI will eliminate all the jobs, require a wholesale rethinking of the social contract and — oops — also the latest models might cause catastrophic cybersecurity problems that might lead to the end of the world.
Does this sound like a good deal to you? Can you market your way out of this? This only makes sense if you have software brain, if your operative framework is to flatten everything into databases that you can control with structured language. The people paying thousands of dollars a month to set up swarms of OpenClaw agents and write thousands of lines of code, they’re people who look at the world and see opportunities for automation, to repeat tasks, to collect data, to build software. AI is great for them. It’s even exciting in ways that I think are important and will probably change our relationship to computers forever.
For everyone else, AI is just a demanding slop monster. It’s a threat. I’m not saying regular people don’t use Excel or Airtable to plan their weddings or have fun throwing PowerPoint parties, or even that AI won’t be useful to regular people over time. I think a lot of people enjoy data and tracking different parts of their lives. There’s my WHOOP band. I’m just saying these things aren’t everything. Not everything about our lives can be measured and automated and optimized. It shouldn’t be.
And so the tech industry is rushing forward to put AI everywhere at enormous cost — energy, emissions, manufacturing capacity, the ability to buy RAM — while locked into the narrow framework of software brain, without realizing it is also asking people to be fundamentally less human. Then the industry sits around wondering why everyone hates it. I don’t think a couple haircuts are going to fix it.
Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!
Decoder with Nilay Patel
A podcast from The Verge about big ideas and other problems.