Today on Decoder, I want to lay out an idea that’s been banging around my head for weeks now as we’ve been reporting on AI and having conversations here on this show. I’ve been calling it software brain, and it’s a particular way of seeing the world that fits everything into algorithms, databases and loops — software.
How to Watch a Baby
Parenthood is abrupt and total.
When I went to the hospital, I understood that I’d be sent home with a vulnerable being who would require constant care, but it was impossible to prepare for what that actually felt like.
I’d loved being in the maternity ward, a leisurely four nights thanks to a C-section and a few complications, where I was surrounded by perky and competent nurses who took care of me and my baby, checking my bandages and bringing me ice and answering my questions.
(I had a lot of questions.)
“If she doesn’t want to eat, is that okay?”
“What does that raspy noise mean?”
“Her lower lip keeps quivering, is that okay?”
“Does she need to keep the hat on all the time?”
“How often should I change her diaper?”
When we were discharged, my husband and I secured our newborn into a car seat on the checkered linoleum floor. The strap tightening system was confusing, and there were warning labels explaining the baby might become airborne or get strangled.
I asked a nurse on the way to the elevator if she could take a quick look to see if we’d strapped the baby in properly.
“Oh, I’m actually not legally allowed to help with that,” she said. “Sorry!”
The moment we stepped out of my hospital room, we were on our own.
We arrived home to an apartment that had rendered itself strange and irrelevant in its structure: it had belonged to different, childless people. We spent hundreds of dollars over the next two days overnighting bottles and breast pumps and swaddles: we needed diaper cream, and we needed it right now.
Somewhere within those bleary first days, I downloaded an app on my phone that promised to help me keep track of everything.
There are dozens of them, where caregivers can log how many ounces of milk their baby drank or how long they breastfed, how many minutes or hours a child slept, when they last had a bath or their diaper changed.
The reasoning behind this cataloging is pretty simple. A baby’s health is often determined by its regularity: how much the baby consumes, how much the baby excretes, how much the baby sleeps.
When things deviate from the norm, it can be a sign that something is changing or that something is wrong: the baby is sick, the baby has an allergy, the baby is not getting what she needs.
When a child is cared for by more than one person, she can be handed back and forth between two or three tired people without a lengthy explanation of how much she’s slept or eaten: we can just check the app.
I was a woman of advanced maternal age, which means I’d taken a very long time to decide that I wanted to be a mother, and now that I was one, I wanted the data.
And the data was adorable: when I logged my baby’s diapers, the app said: “Eloise had a little poo and a little pee.”
I opened the app dozens of times throughout the dreamy yet punishing expanse of a day, the tracker neatly converting our care back into minutes and hours, which had otherwise lost all meaning.
There were so many mistakes that I could make, but the data was unimpeachable.
She was safe, she was loved, she was cared for: here was the proof.
But a lot of my friends didn’t feel like they needed an app to keep track of their babies.
Tara said: “Proud to say I avoided these! I’m too lazy to track my baby’s every poop and nap, plus it just seems absurd, and I know it would exacerbate my already-spiraling postpartum anxiety.”
Whit said: “I was so tired and overwhelmed, I wouldn’t have been able to keep on top of tracking, and the last thing I’d have wanted is to be obsessing over what some metric means.”
And some of my friends tracked far more aggressively than I ever did.
Leah is a project manager at an education and social impact firm who spent 10 years working in operations at elementary schools, experience she calls “a Venn diagram of thinking about kids and data.”
So when she became pregnant with her son, she approached the pregnancy with the same tools she used at work, creating spreadsheets to track her progress preparing for the baby’s arrival.
She describes her baby’s data as a well of private joy.
Tracking was a way to feel in control during a period when new parents — especially those who just gave birth — can feel powerless.
For me, the exhaustion of early parenthood felt enhanced by the fact that my love for my daughter was imbued with responsibility: since the moment I became pregnant, that obligation was relentless.
I could marvel at how sweet she was or how cute her sounds were, but I couldn’t totally relax into that feeling because I had to simultaneously remain vigilant in keeping her alive.
But at night, as she rocked peacefully in a $2,000 SIDS-risk-reduction self-soothing robotic bassinet, I could watch videos of her and sink unambiguously into my delight in her, scroll through the week’s data and bask in the ounces she consumed with the certainty that they were making her stronger and less vulnerable every day.
When she outgrew her bassinet and moved into her own room, we propped a Nest Camera up on the bookshelf overlooking her crib.
Now, I didn’t even need to be home to see her.
The Nest provided a strange, sweet record of us together, in moments that would otherwise be invisible: in a way, it allowed me to experience her twice.
But sometimes the freedom that the monitor promised also felt like a liability. No matter where I was, I could open an app and see if my baby was asleep. Sometimes, I realized I wasn’t checking to see if she was asleep so much as if she was still alive.
I’d be sitting at dinner with friends, or on the subway, zooming in on my spookily night-visioned baby, looking for confirmation that I could see the folds in her rainbow-speckled pajamas rise and fall with her breathing.
I have access to a space parents before me never got to see, and that is both a comfort and a burden.
When the first baby monitor was invented in 1937, 6% of babies died of illness or accident before their first birthday.
But the impetus for developing the technology had nothing to do with those very real threats.
Instead, the baby monitor rose from an event so sensational that it was constantly in headlines: the abduction of the Lindbergh baby in 1932.
The president of the Zenith Radio Corporation was terrified that his daughter might also be snatched from her crib, so he started rewiring some radios at home before assigning the task of concocting a one-way monitor to his employees.
The model was designed by the not-yet-famous Isamu Noguchi, who’d go on to popularize mid-century modern home decor.
But the Radio Nurse was expensive, and the unit didn’t take off.
The whole concept didn’t gain real traction until the 1980s, when Fisher-Price released the baby monitor that my parents bought when they had me.
Once, they left it too close to the oven and the plastic warped vaguely in a Dr. Seuss sort of way, and sometimes at naptime they’d hear the muffled sounds of a neighbor chatting on their cordless phone over the crackle of the monitor’s static.
I couldn’t relate to the inventor’s fear of child abduction, but there were so many things to be scared of. The possibilities swirled around me: SIDS, mass shootings, political instability, gas leaks, rising sea levels, button batteries, war, food allergies, drowning, RSV, the hottest year on record, fascism, bulletproof nap mats, fascism, sleepovers, car accidents, nuclear weapons, and the vague threat of ultraprocessed foods.
The companies that push ads to my Instagram while I’m rocking my baby to sleep know this. They capitalize on the fact that there is no greater loss than that of a child, that even imagining it is, for most parents, utterly unbearable, and that we’ll often shell out as much money as we can to give ourselves some semblance of hope that we can control the untamable world into which we’ve borne our children.
When Chloe* [name has been changed] and her partner had their first child, they bought a monitor that promised peace of mind.
The Miku Smart Baby Monitor provides baby sleep analytics, tracks respirations per minute, and “analyzes and stores data to build a bigger picture of your child’s behavior over time.”
She found most of the Miku’s features unhelpful — it constantly gave off false alarms that their son had stopped breathing — but she became fixated on its motion detection.
“If my mom or my partner would do his routine, I could see how they were doing it — and I could critique it.”
Sometimes, when her husband put their baby down at night, she’d watch on the monitor and see him take a phone call or respond to an email while he stood next to the baby’s crib, and it enraged her.
He’d gone back to work much earlier than she had, so she’d created all the systems that maintained their son’s daily rhythms. “There was a specific way I wanted things done, and the only way I knew he was deviating from it was because I could see and hear it on the monitor.”
Her husband wasn’t putting their son in danger when he looked at his phone, but it was still painful for her to witness. “I would be holding him to standards that I didn’t keep myself. I remember being glad that there was no one monitoring me.”
Chloe’s desire to surveil her baby only increased after she returned to work. She bought cheap, low-res security cameras and hid them under the living room bookshelves so she could observe her baby’s nanny.
“Then my husband confiscated them,” she said.
Once, she hid an Apple AirTag in her baby’s diaper bag. When the nanny took her son out for a walk, Chloe followed in her car.
“I was driving by the bench where the nanny was sitting with my baby, and my heart rate kind of rose up and I got that feeling in my stomach like, ‘I’m about to find something out that I want to know, but it’s going to change something.’”
“You’re seeing something that you’re not supposed to be seeing.”
“What sort of bad things might I uncover if I looked? The baby trusts me to be looking after him.”
Nanny cams and GPS tracking of childcare workers raise all kinds of ethical questions, but Meg Leta Jones, a policy and privacy scholar (and mom of three), says, “The high-level takeaway is that it feels bad to be far away from your kid.”
The ways in which technology complicates this distance form a common scholarly argument against tools like video monitors: they keep us both too far from and too close to our children.
In the book Supervision: On Motherhood and Surveillance, Sophie Hamacher says, “All of these baby monitors create a distance that seems unhealthy. If you closely observe and are caring for your child you don’t need all of this technology. Doesn’t care also have to do with proximity of the body to another body? With all this technology there is no proximity.”
Conversely, in the same book, Laëtitia Badaut Haussmann says, “I think there is a forced, even unhealthy, proximity through surveillance tools. Let’s say you are in a different room from your child. You are going to have the monitor and you will be regularly checking while you read a book or whatever. So your screen will be lighting up every minute — it’s automatically and regularly updating. You cannot get a proper distance because you are constantly tethered to it. It’s actually terrifying.”
But figuring out the right distance from which to parent is a problem that existed long before pregnant people added video monitors to their digital gift registries.
In 2001, novelist Rachel Cusk published A Life’s Work, her first memoir, about becoming a mother. It investigates the ambivalence of parenthood so honestly that one critic called for the removal of her children from her care. It’s also the book in which I’ve seen my experience more clearly than in any other I’ve ever read.
Cusk writes, “It is as difficult to leave your children as it is to stay with them. To discover this is to feel that your life had become irretrievably mired in conflict, or caught in some mythic snare in which you will perpetually, vainly struggle.”
I’ve felt this struggle since the beginning of my pregnancy, when I couldn’t rationalize my inability to walk away from my role as incubator, even for a moment, to pop off my belly for a quick breath of relief, or a bloody steak, or a martini.
I understood then and now as a parent that it is my consummate duty to keep my child safe, but I remain suspicious of the narrative that my biologically imbued motherly intuition is always and only the strongest force in ensuring her care.
What if surveillance can provide relief from the demands of parenthood that are otherwise so mind-bendingly total?
Ten months after my daughter was born and I’d undergone the categorical shift from woman to mother, I stood at a backyard party a few miles from our apartment, where her father had just put her to bed.
I’d spent the day with her; she’d eaten watermelon and gotten magnificently sticky and coated in its juice, and now I was out, on a perfect New York night, without her.
At some point in the evening, I reflexively slipped my phone from my pocket, opened the Nest app, and propped it up next to me so I could occasionally glance over and see her, asleep in her crib.
It wasn’t as if I thought I needed to watch my daughter on camera to ensure that she was safe and happy. I knew, rationally, that she was fine.
But witnessing the contented curl of her tiny body took away any vague guilt I had about being present somewhere without her. The presence of that shame was perhaps a bigger problem than whether I had a video monitor or not.
Some of my watching is tinged with terror, but most of it is more banal: she’s going to continue to grow and change, and I’m going to miss parts of it.
Surveillance sometimes feels like a way for me to try to hold onto the parts of her that I know I cannot keep.
US arrests soldier who allegedly made $400k on Maduro Polymarket bets
On or about January 6, 2026, for example, VAN DYKE asked Polymarket to delete his Polymarket account, falsely claiming that he had lost access to the email address to which the account had been associated. That same day, VAN DYKE changed the email registered to his cryptocurrency exchange account to an email address that was not subscribed to in his name, which email address was created on or about December 14, 2025.
How Florida retiree lost $200K in fake PayPal refund scam
Brian Oliver is retired, sharp and financially savvy enough to have a stock-and-bond portfolio worth hundreds of thousands of dollars. He is not the type of person you picture getting scammed. That is exactly why scammers picked him.
What happened to Oliver, 85, is the kind of story that makes your jaw drop and your stomach turn at the same time. It started with a routine-looking email and ended with a box of gold coins rolling away in the back of a black Mustang. In between, Oliver lost $200,000, nearly half of his retirement savings.
He told his story on my Beyond Connected podcast at getbeyondconnected.com, along with Detective Justin Torres of the Gainesville Police Department in Florida. What they shared together is equal parts chilling and clarifying.
Brian Oliver shares how a routine-looking email pulled him into a sophisticated refund scam that cost him $200,000. (Sebastian Gollnow/picture alliance)
It all started with a PayPal refund scam email
Brian got an email that said PayPal owed him money. It was not a wild claim. He had dealt with PayPal before and figured, “Maybe they found some money for me.” So he responded. The email included a phone number, and that number connected him to a man who called himself Andrew Johnson.
“Yeah, we have $450 for you. Type in the number 100 on your computer and we’ll get it started.”
Brian typed 100. Andrew immediately said he had made a mistake: “Oh no, you put in 10,000.”
Brian pushed back. He said he did not type 10,000. Andrew told him to check his Bank of America account. Brian opened it, and there it was: $10,000 sitting in his checking account.
Except it was not real. The scammers had somehow mirrored his bank’s website. What Brian saw looked exactly like his actual Bank of America page, complete with a new balance and a phone number embedded in the “Contact Us” section. That number was fake, too.
Brian called it. A man named Josh answered, identifying himself as a Bank of America representative. He told Brian that the only way to return the money without triggering a $3,500 tax penalty was to withdraw $10,000 in cash and feed it into a crypto ATM.
How the PayPal refund scam tricked Brian
Oliver had never heard of a crypto ATM before that day. Josh helpfully told him exactly where to find one. It was in a sketchy part of town, and Oliver walked in carrying $10,000 in his pocket.
“I’m on my knees, on a cement floor, and I’m 85,” Oliver said.
He fed one hundred $100 bills into the machine, bill by bill, watching over his shoulder the entire time. Some bills got kicked back out. He fed them in again. When the machine finally accepted all of them, he photographed the receipt and sent it to Andrew Johnson, just as he had been instructed.
Then Oliver went home and told Andrew it was done. Andrew told him they still had to take care of his refund. He told Oliver to type in the number 200.
Oliver typed it. Andrew’s response came fast: “Oh my God, my boss is going to kill me. It’s $200,000 we’ve transferred to your account.”
This type of scam is becoming more common, and it often involves criminals impersonating trusted platforms like PayPal.
“PayPal does not tolerate fraudulent activity, and we work hard to protect our customers from evolving phishing scams,” a spokesperson for PayPal told CyberGuy. “We always encourage consumers to learn how to spot the warning signs of common fraud, including our tips on the PayPal Newsroom for identifying phishing emails that attempt to impersonate trusted brands. We further recommend contacting Customer Support for assistance through official channels such as the PayPal app and our Contact Us webpage, and never responding to suspicious, unexpected emails.”
How the scam escalated to $200,000 in gold
Oliver opened his bank account again. The fake mirrored site showed $200,000 sitting there. Josh Wilson was back on the phone with a new plan. This time, the crypto ATM would not work because the amount was too large. Oliver needed to liquidate $200,000 from his stock and bond portfolio, convert it to cash and use it to buy gold coins.
Oliver protested. He told them to just reverse the transfer. They said it was impossible.
“This is my retirement money. 50% of my retirement money,” he said.
The scammers told him not to breathe a word to anyone. Josh specifically warned him that telling his broker the truth could trigger tax problems. So Oliver called his broker and said he had his eye on a piece of real estate he wanted to flip. The broker processed the sale without question.
Oliver went to a gold coin store, wrote a check for $198,560 and waited two to three days for it to clear. Andrew Johnson stayed in regular contact the entire time.
When the gold was ready, Johnson gave Oliver one final instruction. A courier would come to his door to pick up the box. Before handing it over, Oliver should ask the courier for a password. The password was “blue.”
The courier arrived. He was driving a black Mustang. He said the word blue. Oliver handed over the box.
“He told me the password,” Oliver said. “I handed the box, and off went my $200,000.”
The moment Brian Oliver realized it was all a scam
The day after the courier left, Andrew Johnson called back with urgency. He told Brian Oliver another $200,000 had landed in his account, and they needed to do the whole thing over again. That was the moment it broke.
“That’s when I came out from under the ether of this scam,” Oliver said. “And I said, this cannot be right.”
He immediately called the Gainesville Police Department.
The high-stakes sting that brought down a scam courier
Detective Justin Torres of the Gainesville Police Department took the call and started working the case immediately. The scammers had asked Oliver for photos of the gold and the purchase receipt, which gave law enforcement about a day and a half to set up an operation before the courier was scheduled to return.
Detective Torres pulled in four officers from the department’s Gun Violence Initiative unit, a team of intermediate detectives trained for exactly this kind of boots-on-the-ground work. They set up covert and marked vehicles around Oliver’s residence at a careful distance.
“It was pretty high intensity because I’m listening to Mr. Oliver’s conversation with Andrew,” Torres said. “And I’m also trying to be a good distance away to listen to my radio and be able to broadcast what I need to, to the other officers on the outside.”
The scammers were suspicious. They kept pushing Oliver to be more compliant. Oliver pushed back. The goal was to keep them on the line long enough for the courier to show up. The courier, a man named Seth Wayne, drove in from Tampa. The officers waited. When he arrived, they arrested him. The case went to trial. Seth Wayne received an 18-year prison sentence.
A federal jury has since convicted a second courier in the same scheme. Atharva Shailesh Sathawane, 22, an undocumented immigrant from India, was found guilty of conspiracy to commit wire fraud and money laundering, with Brian Oliver among his victims.
Sathawane was arrested after the Gainesville Police Department set up a second sting operation at Brian’s home. Court documents showed Sathawane was involved in more than 30 transactions across multiple states, contributing to nearly $8 million stolen from elderly victims. He faces up to 20 years on each count, with sentencing scheduled for Dec. 16 in Gainesville, though he is appealing his conviction.
How refund scams are hitting multiple victims
The scam began with a convincing message and quickly escalated as criminals guided Brian Oliver step by step through fake account activity. (Halfpoint/iStock/Getty Images)
Ten other victims testified at Seth Wayne’s trial. They had come from all over the state of Florida, and their stories made Oliver furious.
Some had received fake arrest warrants, official-looking documents claiming their identities had been tied to gun running. They were told the only way to clear their names was to pull their savings and buy gold, which would be placed in a special locker in Washington, D.C., until their names were cleared.
One victim lost $1.8 million. Another lost $4.9 million. A third woman lost over $1 million across two separate pickups by the same courier. Her husband was in hospice care in Florida while all of this was happening. She drained her entire life savings, sold her condo and had to move in with her daughter and son-in-law in Alabama, leaving her dying husband behind.
Where the money from refund scams actually goes
Once the gold or cash leaves a victim’s hands, recovery is nearly impossible. Most of Seth Wayne’s deliveries went to parking lots at McDonald’s or shopping centers, where he handed the money directly to a controller. One pickup went to a jewelry store, where an employee came outside to collect it. That connection is still under active investigation by the IRS and FBI.
The call centers running these operations are overseas. Higher-level couriers in the United States are still being investigated. The full network is, as Detective Torres put it, “very intricate” and “very complicated.”
Seth Wayne himself was a mid-to-upper-level courier. He was also paying other couriers and compensating his handler. When investigators downloaded his cell phone after a judge-approved search warrant, they found evidence that he had researched exactly what he was doing before deciding the money was worth the risk.
The defense of “willful blindness,” the idea that a courier can claim ignorance and escape responsibility, no longer holds up in Florida courts. Seth Wayne found that out the hard way.
For a deeper look at what Oliver went through, you can hear the full story on my Beyond Connected podcast at getbeyondconnected.com.
How to stay safe from refund scams
Detective Torres laid out the most important red flags clearly, and Oliver added a few from painful personal experience. Here is what both of them want you to know.
1) Hang up on urgency
Scammers manufacture pressure because it works. If someone on the phone is telling you that you must act right now, that is not a real emergency. That is a tactic. Torres put it directly: “They want to make you believe that you have to do all this right now.”
2) Never call the number they give you
If someone calls claiming to be from PayPal, your bank or a law enforcement agency, hang up and find the real number yourself. The number embedded in Oliver’s fake bank website looked completely legitimate. It was not.
3) Pause for ten seconds
Literally ten seconds. Detective Torres confirmed what many security experts say: “If you pause these scams for just 10 seconds, many of them will just fall apart.” A scammer who is pushed back even slightly will often overreact, and that reaction will feel wrong.
4) Isolation is the biggest red flag
The moment someone on the phone tells you not to tell a family member, friend or neighbor what is happening, stop. That instruction exists for one reason: to prevent you from getting help before they get your money. “Once you start hearing that isolation conversation, that is the biggest red flag,” Torres said. “You need to hang up the phone.”
5) Gold is always a scam signal
Oliver made this one simple: “If you’re told to go buy gold, the only reason they tell you to buy gold is because it can never be traced. It’s a scam.” No legitimate company, government agency or financial institution will ever ask you to buy gold coins and hand them to a stranger.
6) The courier at your door means stop
If you have already bought gold and someone is coming to your home to pick it up in a box, Oliver’s advice is direct: “Stop right there. It’s a scam.”
7) Never move money to fix a ‘mistake’
If someone claims they accidentally sent you money and asks you to return it, stop right there. Real companies fix errors on their own systems. They will not ask you to withdraw cash, buy crypto or purchase gold to correct a transaction.
8) Verify your account on your own device
If you need to check your bank account, use your official banking app or type the website yourself. Do not trust links, screens or phone numbers provided during a call. In many cases, scammers create fake sites that look identical to the real thing.
9) Be wary of step-by-step instructions
Scammers often stay on the phone and guide you through every move. That level of control should raise concern. Legitimate companies do not walk you through withdrawing cash, using crypto ATMs or buying gold to solve a problem.
10) Bring in a second person
Before moving a large amount of money, pause and call someone you trust. A quick conversation with a family member or friend can shift your perspective. In many cases, that outside voice is enough to stop a scam in progress.
11) Limit how much of your information is online
Scammers build convincing stories using real details they find online. This can include your phone number, home address or financial history. To reduce that risk, consider removing your information from data broker and people-search sites. While you can do this manually, it often takes time, which is why some people use a data removal service such as Incogni to help automate the process and keep their information from resurfacing.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
Scammers often operate behind the scenes, using technology and social engineering to manipulate victims into handing over cash or valuables. (Paul Chinn/The San Francisco Chronicle/Getty Images)
Kurt’s key takeaways
Brian Oliver lost $200,000, leaving him with only half of his retirement savings. Today, he says he is slowly sinking toward bankruptcy, and the odds of getting that money back are slim. Even so, he chose to go public so others could hear his story before it happens to them.

What makes this case different is that it led to real consequences. Detective Torres and his team moved quickly and set up a sting operation. As a result, they arrested a courier who later received an 18-year prison sentence. Meanwhile, the IRS and FBI are still investigating the larger network. However, this kind of outcome is rare. In most cases, victims lose everything and never see justice. These scams are complex, often run from overseas, and are designed to move money fast. Because of that, law enforcement usually focuses on the people closest to the victim and works backward.

In the end, Oliver’s turning point came during a second demand for money. At that moment, something felt off, so he paused. Then he said, “This cannot be right.” That instinct matters. In many cases, that brief pause is enough to break the scam.
If you were in Oliver’s position, at what exact moment do you think you would have stopped, and what would it have taken for you to make that call? Let us know by writing to us at Cyberguy.com.
Beware software brain
Software brain is powerful stuff. It’s a way of thinking that basically created our modern world. Marc Andreessen, the literal embodiment of software brain, called it in 2011 when he wrote the piece “Why software is eating the world” as an op-ed in The Wall Street Journal. But software thinking has been turbocharged by AI in a way that I think helps explain the enormous gap between how excited the tech industry is about the technology and how regular people are growing to dislike it more and more over time.
In fact, the polling on this is so strong, I think it’s fair to say that a lot of people hate AI. And Gen Z in particular seems to hate AI more and more as they encounter it. There’s that NBC News poll showing AI with worse favorability than ICE and only a little bit above the war in Iran and the Democrats generally. That’s with nearly two thirds of respondents saying they used ChatGPT or Copilot in the last month. Quinnipiac just found that over half of Americans think AI will do more harm than good, while more than 80 percent of people were either very concerned or somewhat concerned about the technology. Only 35 percent of people were excited about it.
Poll after poll shows that Gen Z uses AI the most and has the most negative feelings about it. A recent Gallup poll found that only 18 percent of Gen Z was hopeful about AI, down from an already-bad 27 percent last year. At the same time, anger is growing: 31 percent of those Gen Z respondents said they feel angry about AI, up from 22 percent last year.
Now, I obviously talk to a lot of tech executives and policy people here on Decoder, and I will tell you, they all know AI isn’t popular, and they can all see how that’s playing out in real life. Here’s Microsoft CEO Satya Nadella talking about how the tech industry needs to make the case for the investments it’s making in AI:
Satya Nadella: At the end of the day, I think this industry, to which I belong, needs to earn the social permission to consume energy because we’re doing good in the world.
I think it’s safe to say that the tech industry and AI have not earned any of that social permission yet. Politicians from both sides of the aisle are opposing data center buildouts. Politicians in local communities that support data centers are getting voted out of office. And in the most depressing reminder of how much political violence has become a part of everyday American life, politicians who’ve supported data centers have had their houses shot at. OpenAI CEO Sam Altman has had Molotov cocktails thrown at his house.
It’s sad that I’m going to have to say this again on the show, and it’s sad that we’re going to have commenters who disagree, but this violence is unacceptable. If you want to meaningfully oppose AI in a way that lasts, you should speak loudly with your dollars in the market and your attention online, and you should speak loudly with your votes. You should participate in a democratic regulatory and political process. Anything else will get dismissed and perpetuate the cycle. That dismissal is already happening.
I also think it’s incredibly important for our politicians and tech executives to make sure our political process makes people feel empowered, not helpless; that helplessness is a specific kind of nihilism they have all greatly contributed to. The violence is a result of that helplessness and nihilism. And the most powerful people in our society ought to reckon with that, especially as they run around saying AI will wipe out all the jobs. I’m not even exaggerating. Here’s Anthropic CEO Dario Amodei saying he thinks AI will wipe out all the jobs:
Dario Amodei: Entry-level jobs in areas like finance, consulting, tech and many other areas like that — entry-level white-collar work — I worry that those things are going to be first augmented, but before long replaced by AI systems. We may indeed — it’s hard to predict the future — but we may indeed have a serious employment crisis on our hands as the pipeline for this early-stage, white-collar work starts to contract and dry up.
What I see when I encounter clips like this is the true gap between the tech industry and regular people when it comes to AI — and also the limit of software brain. Like I said, everyone in tech understands how much regular people dislike AI. What I think they’re missing is why. They think this is a marketing problem. OpenAI just spent $200 million on the TBPN podcast because the company thinks it will help make people like AI more. Sam Altman has said so explicitly:
Sam Altman: Oh, they are genius marketers and I would love to have better marketing. Somebody said to me recently that if AI were a political candidate, it would be the least popular political candidate in history. And given the amazing things AI can do, I think there’s got to be better marketing for AI.
It feels like someone just needs to say this clearly, so I’m just going to do it. AI doesn’t have a marketing problem. People experience these tools every single day. ChatGPT has 900 million weekly users, trending to a billion, and everyone has seen AI Overviews in Google Search and massive amounts of slop on their feeds. You can’t advertise people out of reacting to their own experiences. This is a fundamental disconnect between how tech people with software brains see the world and how regular people are living their lives.
So what is software brain? The simplest definition I’ve come up with is that it’s when you see the whole world as a series of databases that can be controlled with structured language and software code. Like I said, this is a powerful way of seeing things. So much of our lives run through databases, and a bunch of important companies have been built around maintaining those databases and providing access to them.
Zillow is a database of houses. Uber is a database of cars and riders. YouTube is a database of videos. The Verge’s website is a database of stories. You can go on and on and on. Once you start seeing the world as a bunch of databases, it’s a small jump to feeling like you can control everything if you can just control the data.
But that doesn’t always work. Here’s an example: Elon Musk and DOGE showed up in the government, and the first thing they did was take control of a bunch of databases. And they ran into the undeniable fact that the databases aren’t reality, and DOGE ended in hilarious failure. It turns out software brain has a limit, and the government isn’t software. People aren’t computers, and they don’t live in automatable loops that can be neatly captured in databases.
Anyone who’s actually ever run a database knows this. At some point, the database stops matching reality. And at that point, we usually end up tweaking the database, not the world. The AI industry has fully lost sight of this. AI thrives on data. It’s just software. And so the ask is for more and more of us to conform our lives to the database, not the other way around.
Let me offer you another example that I think about all the time, especially as AI finds real fit as a business tool. It’s the idea that AI is coming for lawyers and the legal system. The AI industry loves to talk about not needing lawyers anymore, which is already getting all kinds of people into all kinds of trouble. But I get it. I’ve spent a lot of time with lawyers. I used to be a lawyer. My wife is still a lawyer. Some of my best friends are lawyers.

I also spend all of my time at work talking to tech people. And so over time, I’ve learned that the overlap between software brain and lawyer brain is very, very deep. Alluringly deep. If the heart of software brain is the idea that thinking in the structured language of code can make things happen in the real world, well, the heart of lawyer brain is that thinking in the structured legal language of statutes and citations can also make things happen. Hell, it can give you power over society.
There are other commonalities. Both software development and the law depend heavily on precedent. We have a body of case law in this country, and we use it over and over again to help us resolve disputes, much like software engineers have libraries of code that they turn to repeatedly to build the foundations of their products. I can go on.
At the end of the day, both lawyers and engineers do their best to use formal, structured language to guide the behavior of complicated systems in predictable and potentially profitable ways. I am far from the first person with this idea. Larry Lessig wrote a book called Code and Other Laws of Cyberspace in 1999. It’s just as relevant today as it was a quarter century ago.
And so you have this intoxicating similarity between law and code, and it trips people up all the time. People are constantly trying to issue commands to society at large like it’s a computer that will obey instructions. There are examples of this big and small. My favorites are those Facebook forwards insisting Mark Zuckerberg does not have the right to publish people’s photos. Honestly, I look at these, and I think it would be great if the law was actually code. Maybe things would be more predictable. Maybe we’d feel more in control.
But law isn’t actually code, and society and courts aren’t computers. I have to remind our fairly technical audience on Decoder and at The Verge all the time that the law is not deterministic. You simply cannot take the facts of a case, the law as written, and predict the outcome of that case with any real certainty, even though the formality of the legal system makes people think it works like a computer, that it’s predictable.
Because at the end of the day, it’s actually ambiguity that’s at the very heart of our legal system. It’s ambiguity that makes lawyers lawyers. Honestly, it’s ambiguity that makes people hate lawyers because it’s always possible to argue the other side, and it’s always possible to find the gray area in the law. That’s why prosecutors end up working as defense attorneys and why our regulators tend to end up working for big corporations.
So you can see the obvious collision between software brain and lawyer brain. This thing that looks like a computer isn’t actually anything at all like a computer. A lot of people even argue that the law should be more like a computer, that the system should be verifiable and consistent, and that merely issuing the right commands at the right times should lead to objectively correct outcomes.
Bridget McCormack, who used to be the chief justice of the Michigan Supreme Court, was on Decoder a few months ago pitching a fully automated AI arbitration system. Her argument to me was that people perceive the traditional legal system to be so unfair, they will accept a worse outcome from an automated system as more fair as long as they feel heard. And if there’s one thing AI can do, it’s sit there and listen all day and night. I don’t know if any of that is correct or even workable, but I do know software brain, and that is pure software brain. The idea that we can force the real world to act like a computer and then have AI issue that computer instructions.
You can see the same thing happening in every other kind of industry. You don’t hire a big consulting firm to actually come in and study your business and make it more efficient. You hire them to make slide decks that justify layoffs to your board and shareholders. Big consulting firms are great at this, and now they’re just going to generate those decks with AI. They are already doing this and the layoffs have already begun.
Any business process that looks like code talking to a database in a repetitive way is up for grabs. That’s why Anthropic has been so relentlessly focused on enterprise customers, and it’s why OpenAI is now pivoting to business use. There’s real value in introducing AI to business because so much of modern business is already software, collecting data, analyzing it, and taking action on it over and over again in a loop. Businesses also control their data, and they can demand that all their databases work together. In this way, software brain has ruled the business world for a long time. And AI has made it easier than ever for more people to make more software than ever before, for every kind of business to automate big chunks of itself with software. The absolute cutting edge of advertising and marketing is automation with AI. It’s not being in creative.
But not everything is a business, not everything is a loop, and the entire human experience cannot be captured in a database. That’s the limit of software brain. That’s why people hate AI. It flattens them. Regular people don’t see the opportunity to write code as an opportunity at all. The people do not yearn for automation. I’m a full-on smart home sicko; the lights and shades and climate controls of this house are automated in dozens of ways. But huge companies like Apple, Google and Amazon have struggled for over a decade now to make regular people care about smart home automation at all. And they just don’t.
AI isn’t going to fix that. Most people are not collecting data about every single thing that they do. And if they’re collecting any at all, it’s stored across lots of different systems — your email in Gmail, your messages in iMessage, your work schedule in Outlook, your workouts in Peloton. Those systems don’t talk to each other and maybe they never will, because there’s no reason for them to. And asking people to connect them all freaks them out.
Even taking the time to consider how much of your life is captured in databases makes people unhappy. No one wants to be surveilled constantly, and especially not in a way that makes tech companies even more powerful. But getting everything in a database so software can see it is a preoccupation of the AI industry. It’s why all the meeting systems have AI note takers in them now. It’s why Canva, which is design software, now connects to corporate email systems. My friend Ezra Klein just went to Silicon Valley, and he described the people that are actively trying to flatten themselves into a database:
Ezra Klein: You might think that A.I. types in Silicon Valley, flush with cash, are on top of the world right now. I found them notably insecure. They think the A.I. age has arrived and its winners and losers will be determined, in part, by speed of adoption. The argument is simple enough: The advantages of working atop an army of A.I. assistants and coders will compound over time, and to begin that process now is to launch yourself far ahead of your competition later. And so they are racing one another to fully integrate A.I. into their lives and into their companies. But that doesn’t just mean using A.I. It means making themselves legible to the A.I.
You can give it access to everything that’s there: your files, your email, your calendar, your messages. It operates continuously in the background, building a persistent memory of your preferences and patterns so it can better act on your behalf. The cybersecurity risks are glaring, but there’s a reason millions of people are using it: The more of your life you open to A.I., the more valuable the A.I. becomes.
I’ve reviewed a lot of tech products over the past decade and a half, and all I can tell you is that it is a failure when you ask people to adapt to computers. Computers should adapt to people. And asking people to make themselves more legible to software, to turn themselves into a database, is a doomed idea. It’s an ask so big, I can’t imagine a reward that would make it worth it for anyone, even if the tech industry wasn’t constantly talking about how AI will eliminate all the jobs, require a wholesale rethinking of the social contract and — oops — also the latest models might cause catastrophic cybersecurity problems that might lead to the end of the world.
Does this sound like a good deal to you? Can you market your way out of this? This only makes sense if you have software brain, if your operative framework is to flatten everything into databases that you can control with structured language. The people paying thousands of dollars a month to set up swarms of OpenClaw agents and write thousands of lines of code, they’re people who look at the world and see opportunities for automation, to repeat tasks, to collect data, to build software. AI is great for them. It’s even exciting in ways that I think are important and will probably change our relationship to computers forever.
For everyone else, AI is just a demanding slop monster. It’s a threat. I’m not saying regular people don’t use Excel or Airtable to plan their weddings or have fun throwing PowerPoint parties, or even that AI won’t be useful to regular people over time. I think a lot of people enjoy data and tracking different parts of their lives. There’s my WHOOP band. I’m just saying these things aren’t everything. Not everything about our lives can be measured and automated and optimized. It shouldn’t be.
And so the tech industry is rushing forward to put AI everywhere at enormous cost — energy, emissions, manufacturing capacity, the ability to buy RAM — while locked into the narrow framework of software brain, without realizing they are also asking people to be fundamentally less human. Then they sit around wondering why everyone hates them. I don’t think a couple of haircuts are going to fix it.
Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!