Technology

BEWARE SOFTWARE BRAIN


Today on Decoder, I want to lay out an idea that’s been banging around my head for weeks now as we’ve been reporting on AI and having conversations here on this show. I’ve been calling it software brain, and it’s a particular way of seeing the world that fits everything into algorithms, databases and loops — software.

Software brain is powerful stuff. It’s a way of thinking that basically created our modern world. Marc Andreessen, the literal embodiment of software brain, called it in 2011 when he wrote the piece “Why software is eating the world” as an op-ed in The Wall Street Journal. But software thinking has been turbocharged by AI in a way that I think helps explain the enormous gap between how excited the tech industry is about the technology and how regular people are growing to dislike it more and more over time.

In fact, the polling on this is so strong, I think it’s fair to say that a lot of people hate AI. And Gen Z in particular seems to hate AI more and more as they encounter it. There’s that NBC News poll showing AI with worse favorability than ICE and only a little bit above the war in Iran and the Democrats generally. That’s with nearly two thirds of respondents saying they used ChatGPT or Copilot in the last month. Quinnipiac just found that over half of Americans think AI will do more harm than good, while more than 80 percent of people were either very concerned or somewhat concerned about the technology. Only 35 percent of people were excited about it.

Poll after poll shows that Gen Z uses AI the most and has the most negative feelings about it. A recent Gallup poll found that only 18 percent of Gen Z was hopeful about AI, down from an already-bad 27 percent last year. At the same time, anger is growing: 31 percent of those Gen Z respondents said they feel angry about AI, up from 22 percent last year.

Now, I obviously talk to a lot of tech executives and policy people here on Decoder, and I will tell you, they all know AI isn’t popular, and they can all see how that’s playing out in real life. Here’s Microsoft CEO Satya Nadella talking about how the tech industry needs to make the case for the investments it’s making in AI:

Satya Nadella: At the end of the day, I think this industry, to which I belong, needs to earn the social permission to consume energy because we’re doing good in the world.

I think it’s safe to say that the tech industry and AI have not earned any of that social permission yet. Politicians from both sides of the aisle are opposing data center buildouts. Politicians in local communities that support data centers are getting voted out of office. And in the most depressing reminder of how much political violence has become a part of everyday American life, politicians who’ve supported data centers have had their houses shot at. OpenAI CEO Sam Altman has had Molotov cocktails thrown at his house.

It’s sad that I’m going to have to say this again on the show, and it’s sad that we’re going to have commenters who disagree, but this violence is unacceptable. If you want to meaningfully oppose AI in a way that lasts, you should speak loudly with your dollars in the market and your attention online, and you should speak loudly with your votes. You should participate in a democratic regulatory and political process. Anything else will get dismissed and perpetuate the cycle. That dismissal is already happening.

I also think it’s incredibly important for our politicians and tech executives to make sure our political process makes people feel empowered, not helpless, which is a specific kind of nihilism they have all greatly contributed to. The violence is a result of that helplessness and nihilism. And the most powerful people in our society ought to reckon with that, especially as they run around saying AI will wipe out all the jobs. I’m not even exaggerating this. Here’s Anthropic CEO Dario Amodei saying he thinks AI will wipe out all the jobs:

Dario Amodei: Entry-level jobs in areas like finance, consulting, tech and many other areas like that — entry-level white-collar work — I worry that those things are going to be first augmented, but before long replaced by AI systems. We may indeed — it’s hard to predict the future — but we may indeed have a serious employment crisis on our hands as the pipeline for this early-stage, white-collar work starts to contract and dry up.

What I see when I encounter clips like this is the true gap between the tech industry and regular people when it comes to AI — and also the limit of software brain. Like I said, everyone in tech understands how much regular people dislike AI. What I think they’re missing is why. They think this is a marketing problem. OpenAI just spent $200 million on the TBPN podcast because the company thinks it will help make people like AI more. Sam Altman has said so explicitly:

Sam Altman: Oh, they are genius marketers and I would love to have better marketing. Somebody said to me recently that if AI were a political candidate, it would be the least popular political candidate in history. And given the amazing things AI can do, I think there’s got to be better marketing for AI.

It feels like someone just needs to say this clearly, so I’m just going to do it. AI doesn’t have a marketing problem. People experience these tools every single day. ChatGPT has 900 million weekly users, trending to a billion, and everyone has seen AI Overviews in Google Search and massive amounts of slop on their feeds. You can’t advertise people out of reacting to their own experiences. This is a fundamental disconnect between how tech people with software brains see the world and how regular people are living their lives.


So what is software brain? The simplest definition I’ve come up with is that it’s when you see the whole world as a series of databases that can be controlled with structured language and software code. Like I said, this is a powerful way of seeing things. So much of our lives run through databases, and a bunch of important companies have been built around maintaining those databases and providing access to them.

Zillow is a database of houses. Uber is a database of cars and riders. YouTube is a database of videos. The Verge’s website is a database of stories. You can go on and on and on. Once you start seeing the world as a bunch of databases, it’s a small jump to feeling like you can control everything if you can just control the data.
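
That framing is easy enough to demonstrate. Here is a toy sketch in Python (every name and number is invented, not any company’s actual schema) of what software brain sees: a housing market reduced to rows, and control reduced to a query.

```python
# Toy illustration of "the world as a database": model houses as rows,
# and suddenly acting on the market looks like writing a one-line query.
from dataclasses import dataclass


@dataclass
class House:
    address: str
    price: int       # dollars
    bedrooms: int


# Hypothetical listings, standing in for the messy real-world market.
listings = [
    House("12 Oak St", 450_000, 3),
    House("98 Elm Ave", 725_000, 4),
    House("7 Pine Rd", 310_000, 2),
]

# The software-brain move: a complicated human question ("what can I
# afford?") becomes a filter over structured data.
affordable = [h.address for h in listings if h.price < 500_000]
print(affordable)  # ['12 Oak St', '7 Pine Rd']
```

The seduction is that the query always works on the database, whether or not the database still matches the world.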

But that doesn’t always work. Here’s an example: Elon Musk and DOGE showed up in the government, and the first thing they did was take control of a bunch of databases. And they ran into the undeniable fact that the databases aren’t reality, and DOGE ended in hilarious failure. It turns out software brain has a limit, and the government isn’t software. People aren’t computers, and they don’t live in automatable loops that can be neatly captured in databases.

Anyone who’s actually ever run a database knows this. At some point, the database stops matching reality. And at that point, we usually end up tweaking the database, not the world. The AI industry has fully lost sight of this. AI thrives on data. It’s just software. And so the ask is for more and more of us to conform our lives to the database, not the other way around.

Let me offer you another example that I think about all the time, especially as AI finds real fit as a business tool. It’s the idea that AI is coming for lawyers and the legal system. The AI industry loves to talk about not needing lawyers anymore, which is already getting all kinds of people into all kinds of trouble. But I get it. I’ve spent a lot of time with lawyers. I used to be a lawyer. My wife is still a lawyer. Some of my best friends are lawyers.

I also spend all of my time at work talking to tech people. And so over time, I’ve learned that the overlap between software brain and lawyer brain is very, very deep. Alluringly deep. If the heart of software brain is the idea that thinking in the structured language of code can make things happen in the real world, well, the heart of lawyer brain is that thinking in the structured legal language of statutes and citations can also make things happen. Hell, it can give you power over society.

There are other commonalities. Both software development and the law depend heavily on precedent. We have a body of case law in this country, and we use it over and over again to help us resolve disputes, much like software engineers have libraries of code that they turn to repeatedly to build the foundations of their products. I can go on.

At the end of the day, both lawyers and engineers do their best to use formal, structured language to guide the behavior of complicated systems in predictable and potentially profitable ways. I am far from the first person with this idea. Larry Lessig wrote a book called Code and Other Laws of Cyberspace in 1999. It’s just as relevant today as it was a quarter century ago.

And so you have this intoxicating similarity between law and code, and it trips people up all the time. People are constantly trying to issue commands to society at large like it’s a computer that will obey instructions. There are examples of this big and small. My favorite are those Facebook forwards insisting Mark Zuckerberg does not have the right to publish people’s photos. Honestly, I look at these, and I think it would be great if the law was actually code. Maybe things would be more predictable. Maybe we’d feel more in control.

But law isn’t actually code, and society and courts aren’t computers. I have to remind our fairly technical audience on Decoder and at The Verge all the time that the law is not deterministic. You simply cannot take the facts of a case, the law as written, and predict the outcome of that case with any real certainty, even though the formality of the legal system makes people think it works like a computer, that it’s predictable.

Because at the end of the day, it’s actually ambiguity that’s at the very heart of our legal system. It’s ambiguity that makes lawyers lawyers. Honestly, it’s ambiguity that makes people hate lawyers because it’s always possible to argue the other side, and it’s always possible to find the gray area in the law. That’s why prosecutors end up working as defense attorneys and why our regulators tend to end up working for big corporations.

So you can see the obvious collision between software brain and lawyer brain. This thing that looks like a computer isn’t actually anything at all like a computer. A lot of people even argue that the law should be more like a computer, that the system should be verifiable and consistent, and that merely issuing the right commands at the right times should lead to objectively correct outcomes.

Bridget McCormack, who used to be the chief justice of the Michigan Supreme Court, was on Decoder a few months ago pitching a fully automated AI arbitration system. Her argument to me was that people perceive the traditional legal system to be so unfair, they will accept a worse outcome from an automated system as more fair as long as they feel heard. And if there’s one thing AI can do, it’s sit there and listen all day and night. I don’t know if any of that is correct or even workable, but I do know software brain, and that is pure software brain. The idea that we can force the real world to act like a computer and then have AI issue that computer instructions.

You can see the same thing happening in every other kind of industry. You don’t hire a big consulting firm to actually come in and study your business and make it more efficient. You hire them to make slide decks that justify layoffs to your board and shareholders. Big consulting firms are great at this, and now they’re just going to generate those decks with AI. They are already doing this and the layoffs have already begun.

Any business process that looks like code talking to a database in a repetitive way is up for grabs. That’s why Anthropic has been so relentlessly focused on enterprise customers, and it’s why OpenAI is now pivoting to business use. There’s real value in introducing AI to business because so much of modern business is already software: collecting data, analyzing it, and taking action on it over and over again in a loop. Businesses also control their data, and they can demand that all their databases work together. In this way, software brain has ruled the business world for a long time. And AI has made it easier than ever for more people to make more software than ever before, for every kind of business to automate big chunks of itself with software. The absolute cutting edge of advertising and marketing is automation with AI, not the creative work itself.
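
That collect, analyze, act loop is simple enough to sketch. Here is a hypothetical Python illustration of the repetitive shape described above; the function names and numbers are made up, not any real company’s pipeline.

```python
# A deliberately tiny sketch of the business loop software brain sees
# everywhere: collect data, reduce it to a metric, act on a rule, repeat.

def collect():
    # Stand-in for pulling metrics from sales, support tickets, sensors, etc.
    return [120, 95, 143]


def analyze(metrics):
    # Reduce the raw data to a single number a rule can act on.
    return sum(metrics) / len(metrics)


def act(average):
    # The automated decision at the end of the loop.
    return "flag for review" if average < 100 else "all clear"


def run_once():
    # In production this would run on a schedule, forever.
    return act(analyze(collect()))


print(run_once())  # all clear
```

Everything a business does that fits this shape is a candidate for automation; the essay’s point is that most of life does not fit this shape.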

But not everything is a business, not everything is a loop, and the entire human experience cannot be captured in a database. That’s the limit of software brain. That’s why people hate AI. It flattens them. Regular people don’t see the opportunity to write code as an opportunity at all. The people do not yearn for automation. I’m a full-on smart home sicko; the lights and shades and climate controls of this house are automated in dozens of ways. But huge companies like Apple, Google and Amazon have struggled for over a decade now to make regular people care about smart home automation at all. And they just don’t.

AI isn’t going to fix that. Most people are not collecting data about every single thing that they do. And if they’re collecting any at all, it’s stored across lots of different systems — your email in Gmail, your messages in iMessage, your work schedule in Outlook, your workouts in Peloton. Those systems don’t talk to each other and maybe they never will, because there’s no reason for them to. And asking people to connect them all freaks them out.

Even taking the time to consider how much of your life is captured in databases makes people unhappy. No one wants to be surveilled constantly, and especially not in a way that makes tech companies even more powerful. But getting everything in a database so software can see it is a preoccupation of the AI industry. It’s why all the meeting systems have AI note takers in them now. It’s why Canva, which is design software, now connects to corporate email systems. My friend Ezra Klein just went to Silicon Valley, and he described the people that are actively trying to flatten themselves into a database:

Ezra Klein: You might think that A.I. types in Silicon Valley, flush with cash, are on top of the world right now. I found them notably insecure. They think the A.I. age has arrived and its winners and losers will be determined, in part, by speed of adoption. The argument is simple enough: The advantages of working atop an army of A.I. assistants and coders will compound over time, and to begin that process now is to launch yourself far ahead of your competition later. And so they are racing one another to fully integrate A.I. into their lives and into their companies. But that doesn’t just mean using A.I. It means making themselves legible to the A.I.

You can give it access to everything that’s there: your files, your email, your calendar, your messages. It operates continuously in the background, building a persistent memory of your preferences and patterns so it can better act on your behalf. The cybersecurity risks are glaring, but there’s a reason millions of people are using it: The more of your life you open to A.I., the more valuable the A.I. becomes.

I’ve reviewed a lot of tech products over the past decade and a half, and all I can tell you is that it is a failure when you ask people to adapt to computers. Computers should adapt to people. And asking people to make themselves more legible to software, to turn themselves into a database, is a doomed idea. It’s an ask so big, I can’t imagine a reward that would make it worth it for anyone, even if the tech industry wasn’t constantly talking about how AI will eliminate all the jobs, require a wholesale rethinking of the social contract and — oops — also the latest models might cause catastrophic cybersecurity problems that might lead to the end of the world.

Does this sound like a good deal to you? Can you market your way out of this? This only makes sense if you have software brain, if your operative framework is to flatten everything into databases that you can control with structured language. The people paying thousands of dollars a month to set up swarms of OpenClaw agents and write thousands of lines of code are people who look at the world and see opportunities for automation, to repeat tasks, to collect data, to build software. AI is great for them. It’s even exciting in ways that I think are important and will probably change our relationship to computers forever.

For everyone else, AI is just a demanding slop monster. It’s a threat. I’m not saying regular people don’t use Excel or Airtable to plan their weddings or have fun throwing PowerPoint parties, or even that AI won’t be useful to regular people over time. I think a lot of people enjoy data and tracking different parts of their lives. There’s my WHOOP band. I’m just saying these things aren’t everything. Not everything about our lives can be measured and automated and optimized. It shouldn’t be.

And so the tech industry is rushing forward to put AI everywhere at enormous cost — energy, emissions, manufacturing capacity, the ability to buy RAM — while locked into the narrow framework of software brain, without realizing they are also asking people to be fundamentally less human. They then sit around wondering why everyone hates them. I don’t think a couple of haircuts are going to fix it.

Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!

Decoder with Nilay Patel

A podcast from The Verge about big ideas and other problems.


Technology

Microsoft’s Edge Copilot update uses AI to pull information from across your tabs


Microsoft Edge is adding a new feature that will allow its Copilot AI chatbot to gather information from all of your open tabs. When you start a conversation with Copilot, you can ask the chatbot questions about what’s in your tabs, compare the products you’re looking at, summarize your open articles, and more.

In its announcement, Microsoft says you can “select which experiences you want or leave off the ones you don’t.” The company is retiring Copilot Mode as well, which could similarly draw information from your tabs but also offered some agentic features, like the ability to book a reservation on your behalf. Microsoft has since folded these agentic capabilities into its “Browse with Copilot” tool.

Several other AI features are coming to Edge, including an AI-powered “Study and Learn” mode that can turn the article you’re looking at into a study session or interactive quiz. There’s a new tool that turns your tabs into AI-powered podcasts as well, similar to what you’d find on NotebookLM, and an AI writing assistant that will pop up when you start entering text on a webpage.

You can also give Copilot permission to access your browsing history to provide more “relevant, high-quality answers,” according to Microsoft. Copilot in Edge on desktop and mobile will come with “long-term memory” as well, which can tailor its responses based on your previous conversations. And, when you open up a new tab, you’ll see a redesigned page that combines chat, search, and web navigation, along with the Journeys feature, which uses AI to organize your browsing history into categories that you can revisit.

Meanwhile, an update to Edge’s mobile app will allow you to share your screen with Copilot and talk through questions about what you’re seeing. Microsoft says you’ll see “clear visual cues” when Copilot is active, “so you know when it’s taking an action, helping, listening, or viewing.”

Technology

Apple’s $250M Siri settlement: Are you owed cash?


If you bought a newer iPhone because Apple made Siri sound like it was about to become your personal artificial intelligence sidekick, you may want to pay attention.

Apple has agreed to pay $250 million to settle a class-action lawsuit over claims that it misled customers about new Apple Intelligence and Siri features. The case centers on the iPhone 16 launch and certain iPhone 15 models that were marketed as ready for Apple’s next wave of AI. The settlement still needs court approval, and Apple denies wrongdoing.

The lawsuit argues that Apple promoted a smarter, more personal Siri before those features were actually available. For some buyers, that was a big deal. A new iPhone can cost hundreds of dollars, and many people upgrade only when they think they are getting something meaningfully new.


U.S. buyers of certain iPhone 16 and iPhone 15 Pro models may qualify for payments if a judge approves Apple’s proposed settlement. (Getty Images)

What Apple is accused of promising

Apple introduced Apple Intelligence in June 2024 and promoted it as a major step forward for iPhone, iPad and Mac. A key part of that pitch was a more personalized Siri that could understand context, work across apps and help with everyday tasks in a more useful way.

The lawsuit claims Apple’s marketing made consumers believe those advanced Siri features would arrive with the iPhone 16 or soon after. Instead, buyers received phones that had some Apple Intelligence tools, but not the full Siri overhaul that many expected.

That gap is the heart of the case. Plaintiffs say customers bought or upgraded devices based on AI features that were not ready. Apple says it has rolled out many Apple Intelligence features and settled the case so it can stay focused on its products.

How much money could iPhone owners get?

The proposed settlement creates a $250 million fund. Eligible customers who file approved claims are expected to receive at least $25 per eligible device. That amount could rise to as much as $95 per device, depending on how many people file claims and other settlement factors.

That means this will not be a huge payday for most people. Still, if you bought one of the covered phones, it may be worth watching for a claim notice. A few minutes of paperwork could put some money back in your pocket.

Which iPhones may qualify?

The proposed settlement covers U.S. buyers who purchased any iPhone 16 model, iPhone 15 Pro or iPhone 15 Pro Max between June 10, 2024, and March 29, 2025.

Covered iPhone 16 models include the iPhone 16, iPhone 16 Plus, iPhone 16 Pro, iPhone 16 Pro Max and iPhone 16e. The settlement also includes the iPhone 15 Pro and iPhone 15 Pro Max, but not every iPhone 15 model.

The key details are the device model, the purchase date and whether the phone was bought in the United States.


Apple has agreed to pay $250 million to settle claims it misled customers about Apple Intelligence and Siri features on newer iPhones. (Michael Nagle/Bloomberg)

How will you file a claim?

You do not need to do anything immediately. The settlement still needs a judge’s approval. Once the claims process opens, eligible customers are expected to receive a notice by email or mail with instructions on how to file through a settlement website.

That notice matters because scammers love moments like this. A real settlement notice should not ask for your Apple ID password, bank login or payment to claim your money. If you receive a message about this settlement, do not click blindly. Go slowly, check the sender and look for the official settlement administrator details once they are available.

Why this case matters beyond one Siri feature

This case hits a bigger nerve. Tech companies are racing to sell AI as the next must-have feature. That creates a problem for shoppers. You are often asked to buy now based on what a company says will arrive later.

That can be frustrating when the feature is the reason you upgraded. A smarter Siri sounds useful. A phone that can understand your personal context, search across apps and help with daily tasks could save time. But if those tools are delayed, limited or missing, the value of the upgrade changes.

This settlement also sends a message about AI marketing. Companies can talk about future features, but consumers need clear timing and plain explanations. “Coming soon” can mean very different things when you are spending $800, $1,000 or more.

We reached out to Apple for comment, but did not hear back before our deadline.

Apple denies wrongdoing but agreed to settle claims tied to its marketing of Apple Intelligence and Siri features. (Qilai Shen/Bloomberg)

What this means to you

If you bought a covered iPhone during the settlement period, keep an eye on your email and regular mail. You may qualify for a payment if the court approves the deal.

You should also keep your receipt or proof of purchase if you have it. Your Apple purchase history, carrier account or retailer receipt may help if the claim process asks for details.

More broadly, this is a reminder to treat AI features like any other big tech promise. Before you upgrade, ask one simple question: Can the feature do what is being advertised today, or is the company asking me to wait?

That question can save you from buying a device for a future feature that may arrive much later than expected.


Kurt’s key takeaways

Apple has built its brand on making technology feel polished, personal and easy to use. That is why this Siri settlement hits a nerve. People were buying phones they use every day for texts, photos, directions, reminders and everything in between. Many expected AI to make those everyday tasks easier, which is why the delay felt frustrating. The proposed payout may be modest, but the bigger issue is trust. When a company sells AI as a reason to upgrade, customers deserve to know what actually works now and what is still coming later.

Would you still buy a new phone for promised AI features, or would you wait until they actually show up? Let us know by writing to us at CyberGuy.com.


Copyright 2026 CyberGuy.com. All rights reserved.

Technology

Instagram hits the copy button again with new disappearing Instants photos


Instagram is once again cribbing from competitors like Snapchat and BeReal with a new photo-sharing format it calls “Instants”: ephemeral photos that you can’t edit and can only share with your close friends or followers who follow you back. Instants are available globally beginning on Wednesday, both as a feature in the inbox of the Instagram app and as a separate app that’s now in testing in select countries.

To access Instants from the Instagram app, go to your DM inbox and look in the bottom-right corner for an icon of a stack of photos. After you post a photo, your friends can react with an emoji and send a reply to your DMs, but after they see it, the photo disappears for them. Instants also disappear after 24 hours, and they can’t be captured in screenshots or screen recordings.

However, your Instants will remain in an archive for you for up to a year, and you can reshare them as a recap to your Instagram Stories if you’d like. You can also undo sending an Instant right after you post it or delete it from your archive.

The Instants mobile app, which popped up in Italy and Spain in April, gives you “immediate access to the camera” and only requires an Instagram account, Instagram says. “Instants you share on the separate app will show up for friends on Instagram and vice versa. We’re trying this separate app out to see how our community uses it, and we’ll continue to evolve it as we learn more.”

Instagram, in its testing, has seen that people “tend to use Instants to share much more casual, much more authentic moments about their day,” according to Instagram boss Adam Mosseri. “And we know that this type of sharing of personal moments with friends is a core part of what makes Instagram Instagram, but we also know that a lot of people don’t really share a lot to their profile grids anymore.”
