We can’t wait around for AI safety, Biden’s top tech adviser says


Today, I’m talking with Arati Prabhakar, the director of the White House Office of Science and Technology Policy. That’s a cabinet-level position, where she works as the chief science and tech adviser to President Joe Biden. She’s also the first woman to hold the position, which she took on in 2022. 

Arati has a long history of working in government: she was the director of the National Institute of Standards and Technology, and she headed up the Defense Advanced Research Projects Agency (DARPA) for five years during the Obama administration. In between, she spent more than a decade working at several Silicon Valley companies and as a venture capitalist, so she has extensive experience in both the public and private sectors. 

Arati and her team of about 140 people at the OSTP are responsible for advising the president on big developments in science as well as major innovations in tech, much of which comes from the private sector. That means guiding regulatory efforts and government investment, and setting priorities around big-picture projects like Biden’s cancer moonshot and combating climate change. 

You’ll hear Arati and me talk about that pendulum swing between public- and private-sector R&D — how it affects what gets funded and what doesn’t, and how she manages the tension between the hyper-capitalist needs of industry and the public interest of the federal government. 


We also talked a lot about AI, of course. Arati was notably the first person to show ChatGPT to President Biden; she has a funny story about how they had it write song lyrics in the style of Bruce Springsteen. But the OSTP is also now helping guide the White House’s approach to AI safety and regulation, including Biden’s AI executive order last fall. Arati and I talked at length about how she personally assesses the risks posed by AI, in particular around deepfakes, and what effect big tech’s often self-serving relationship to regulation might have on the current AI landscape. 

Another big interest area for Arati is semiconductors. She got her PhD in applied physics, with a thesis on semiconductor materials, and when she arrived on the job in 2022, Biden had just signed the CHIPS Act. I wanted to know whether the $52 billion in government subsidies to bring chip manufacturing back to America is starting to show results, and Arati had a lot to say on the strength of this kind of legislation. 

One note before we start: I sat down with Arati last month, just a couple of days before the first presidential debate and its aftermath, which swallowed the entire news cycle. So you’re going to hear us talk a lot about President Biden’s agenda and the White House’s policy record on AI, among other topics, but you’re not going to hear anything about the president, his age, or the presidential campaign.

Okay, OSTP Director Arati Prabhakar. Here we go.

This transcript has been lightly edited for length and clarity. 


Arati Prabhakar. You are the director of the White House’s Office of Science and Technology Policy and the science and technology adviser to the president. Welcome to Decoder.

It’s great to be with you.

I am really excited to talk to you. There’s a lot of science and technology policy to talk about right now. We’re also entering what promises to be a very contentious election season where I think some of these ideas are going to be up for grabs, so I want to talk about what is politicized, what is not, and where we might be going. But just let’s start at the start. For the listener, what is the Office of Science and Technology Policy?

We’re a White House office with two roles. One is whatever the president needs advice or help on that relates to science and technology, which is in everything. That’s part one. Part two is thinking about working on nurturing the entire innovation system in the country, especially the federal component, which is the R&D that’s done across literally dozens of federal agencies. Some of it’s for public missions. A lot of it forms the foundation for everything else in the innovation ecology across this country. That’s a huge part of our daily work. And as we do that, of course what we’re working on is how do we solve the big problems of our time, how do we make sure we’re using technology in ways that build our values. 

That’s a big remit. When people think about policymaking right now, I think there’s a lot of focus on Congress or maybe state-level legislatures. Which piece of the policy puzzle do you have? What are you able to most directly affect?


I’ll tell you how I think about it. The reason I was so excited when the president asked if I would do this job a couple of years ago is because my personal experience has been working in R&D and in technology and innovation from lots of different vantage points. I ran two very different parts of federal R&D. In between, I spent 15 years in Silicon Valley at a couple of companies, but most of that was early-stage venture capital. I started a nonprofit. 

What I learned from all of that is that we do huge things in this country, but it takes all of us doing them together — the huge advances that we’ve made in the information revolution and now in fighting climate change and advancing American health. We know how amazing R&D was for everything that we did in the last century, but this century’s got some different challenges. Even what national security looks like is different today because the geopolitics is different. What it means to create opportunity in every part of the country is different today, and we have challenges like climate change that people weren’t focused on last century, even though we now wish they had been.

How do you aim innovation at the great aspirations of today? That’s the organizing principle, and that’s how we set priorities for where we focus our attention and where we work to get innovation aimed in the right direction and then cranking.

Is that the lens: innovation and forward-thinking? That you need to make some science and technology policy, and all that policy should be directed at what’s to come? Or do you think about what’s happening right now?

In my view, the purpose of R&D is to help create options so that we can choose the future that we really want and to make that possible. I think that has to be the ultimate objective. The work gets done today, and it gets done in the context of what’s happening today. It’s in the context of today’s geopolitics. It’s in the context of today’s powerful technologies, AI among them.


When I think about the federal government, it’s this large complicated bureaucracy. What buttons do you get to push? Do you just get to spend money on research projects? Do you get to tell people to stop things?

No, I don’t do that. When I ran DARPA [Defense Advanced Research Projects Agency] or when I ran the National Institute of Standards and Technology (NIST) over in the Commerce Department, I ran an agency, and so I had a line position, I had a budget, I had a bunch of responsibilities, and I had a blast working with great people and getting big things done. This is a different job. This is a staff job to the president first and foremost, and so this is a job about looking across the entire system. 

We actually have a very tiny budget, but we worry about the entire picture. So, what does that actually mean? It means, for example, helping the president find great people to lead federal R&D organizations across government. It means keeping an eye on where shifts are happening that need to inform how we do research. Research security is a challenge today that, because of geopolitics and some of the issues with countries of concern, is going to have an impact on how universities conduct research. That’s something that we will take on working with all the agencies that work with universities.

It’s those kinds of cross-cutting issues. And then when there are strategic imperatives — whether it’s wrangling AI to make sure we get it right for the American people, figuring out if we’re doing the work we need to decarbonize the economy fast enough to meet the climate crisis, or doing everything it takes to cut the cancer death rate in half as fast as the president is pounding the table for with his cancer moonshot — we sit in a place where we can look at all the puzzle pieces, make sure that they’re working together, and make sure that the gaps are getting addressed, either by the president or by Congress.

I want to draw a line here because I think most people assume that the people working on tech in the government are affecting the functions of the government itself, like how the government might use technology. Your role seems a little more external: it’s about the policy of how technology will be developed and deployed across private industry and government over time.


I would call it integrative because we’re very lucky to have great technologists who are building and using technology inside the government. That’s something we want to support and make sure is happening. Just as an example, one of our responsibilities for the AI work has been an AI talent surge to get the right kind of AI talent into government, which is now happening. Super exciting to see. But our day job is not that. It’s actually making sure that the innovation enterprise is robust and doing what it really needs to do.

How is your team structured? You’re not out there spending a bunch of money, but you have different focus areas. How do you think about structuring those focus areas, and what do they deliver?

Policy teams, and they’re organized specifically around these great aspirations that are the purpose of R&D and innovation. We have a team focused on health outcomes, among other things, that runs the president’s Cancer Moonshot. We have a team called Industrial Innovation that is about the fact that we now have, with this president, a very powerful industrial strategy that is revitalizing manufacturing in the United States, building our clean energy technologies and systems, and bringing leading-edge semiconductor manufacturing back to the United States. So, that’s an office that focuses on the R&D and the whole big picture of industrial revitalization that’s going on.

We have another team that focuses on climate and the environment, and that one is about things like making sure we can measure greenhouse gases appropriately. How do we use nature to fight climate change? And then we have a team that’s focused on national security just as you would expect, and each of those is a policy team. In each one of those, the leader of that organization is typically an extremely experienced person who has often worked inside and outside of government. They know how the government works, but they also really understand what it is the country’s trying to achieve, and they’re knitting together all the pieces. And then again, where there are gaps, where there are new policies that need to be advanced, that’s the work that our teams do.

Are you making direct policy recommendations? So, the environment team is saying, “Alright, every company in the country has promised a million trees. That’s great. We should incentivize some other behavior as well, and then here’s a plan to do that.” Or is it broader than that?


The way policies get implemented can be everything from agencies taking action within the laws that they live under, within their existing resources. It can be an executive order where a president says, “This is an urgent matter. We need to take action.” Again, it’s under existing law, but it’s the chief executive, the president, saying, “We need to take action.” Policy can be advanced through legislative proposals where we work with Congress to make something move forward. It’s a matter of what it takes to get what we really need, and often we start with actions within the executive branch, and then it expands from there.

How big is your office right now?

We’re about 140 people. Almost all of our team is people who are here on detail from other parts of government, sometimes from nonprofits outside of government or universities. The organization was designed that way because, again, it’s integrative. You have to have all of these different perspectives to be able to do this work effectively.

You’ve had a lot of roles. You led DARPA. That’s a very executive role within the government. You get to make decisions. You’ve been a VC. What’s your framework now for making decisions? How do you think about it?

The first question is what does the country need and what does the president care about? Again, a lot of the reason I was so excited to have this opportunity… by the time I came in, President Biden was well underway. I had my interview with him almost exactly two years ago — the summer of 2022. By then, it was already really clear, number one, that he really values science and technology because he’s all about how we build the future of this country. He understands that science and technology is a key ingredient to doing big things. Number two, he was really changing infrastructure: clean energy, meeting the climate crisis, dealing with semiconductor manufacturing. That was so exciting to see after so many decades. I’ve been waiting to see those things happen. It really gave me a lot of hope.


Across the board, I just saw that his priorities really reflected what I deeply and passionately thought was so important for our country to meet the future effectively. That’s what drives the prioritization. Within that, I mean, it’s like any other job where you’re leading people to try to get big, hard things done. Not surprisingly, every year, I make a list of the things we want to get done, and through the year, we work to see what kind of progress we’re making. We succeed wildly on some things, but sometimes we fail, or the world changes, or we have to take another run at it. But overall, I think we’re making huge progress, and that’s why I’m still running to work.

When you think about places you’ve succeeded wildly, what are the biggest wins you think you’ve had in your tenure?

In this role, I’ll tell you what happened. I showed up in October of 2022 for this job, and ChatGPT showed up in November of 2022. Not surprisingly, I would say my first year largely got hijacked by AI, but in the best possible way. First, because I think it’s an important moment for society to contend with all the implications of AI, and second, because, as I’ve been doing this work, I think a lot of the reason AI is such an important technology in our lives today is its breadth. Part of what that means is that it is definitely a disruptor for every other major national ambition that we have. If we get it right, I think it can be a huge accelerator for better health outcomes, for meeting the climate crisis, for everything that we really have to get done.

In that sense, a lot of my personal focus was on AI matters, and still is; that continues. But while that was going on, my great team and I continued to make good progress on all the other things that we really care about.

Don’t worry, I’m going to ask a lot of AI questions. They’re coming, but I just want to get a sense of the office because you talked about coming in ’22. That office was in a little bit of turmoil, right? Trump had underfunded it. It had gone without any leadership for a minute. The person who preceded you left because they had contributed to a toxic workplace culture. You had a chance to reset it, to reboot it. The office wasn’t the way anybody wanted it to be, and hadn’t been for some time. How did you think about making changes to the organization at that moment?


Between the time my predecessor left and the time I arrived, many months had passed. What was so fortunate for OSTP and the White House and for me is that Alondra Nelson stepped in during that time, and she just poured love on this organization. By the time I showed up, it had become — again, I would tell you — a very healthy organization. She gave me the great gift of a huge number of really smart, committed people who were coming to work with real passion about what they were doing. From there, we were able to build. We can talk about technology all day long, but when I think about the most meaningful work I’ve ever done in my professional life, it’s always about doing big things that change the future and improve people’s lives.

The satisfaction comes from working with great people to do that. For me, that is about infusing people with this passion for serving the country. That’s why they’re all here. But there’s a live conversation in our hallways about what we feel when we walk outside the White House gates and see people from around the country and around the world looking at the White House, and about the sense we all share that we’re there to serve them. Those things are why people work here, and making that a live part of the culture is important for making this a rich and meaningful experience for people; that’s when they bring their best. I feel like we’ve really been able to do that here.

You might describe that feeling, and I’ve felt it, too, as patriotism. You look at the monuments in DC, and you feel something. One thing that I’ve been paying attention to a lot recently is the back-and-forth between federal government spending on research and private company spending on research. There’s a pretty enormous delta between the sums. And then I see the tech companies, particularly in AI, holding themselves out as national champions. Or you see a VC firm like Andreessen Horowitz, which did not care about the government at all, saying that its policy is America’s policy.

Is that part of your remit to balance out how much these companies are saying, “Look, we are the national champions of AI or chip manufacturing,” or whatever it might be, “and we can plug into a policy”?

Well, I think you’re talking about something that is very much my day job, which is understanding innovation in America. The federal component of it is integral, of course, but we have to look at the whole because that’s the ecosystem the country needs to move forward.


Let’s zoom back for a minute. The pattern that you’re describing is something that has happened in every industrializing economy. If you go back in history, it starts with public investment in R&D. When a country is wealthy enough to put some resources into R&D, it starts doing that because it knows that’s where its growth and its prosperity can come from. But the point of doing that is actually to seed private activity. In our country, as in many other developed economies, the moment came when public funding of R&D, which continued to grow, was surpassed by private investment in R&D. Then, with the intensification of the innovation economy around the information technology industries, private investment just took off, and it’s been amazing and really great to see.

The most recent numbers — I believe these are from 2021 — are something like $800 billion a year that the United States spends on R&D. Overwhelmingly, that is from private industry. The fastest growth has come from industry and specifically from the information technology industries. Other industries like pharmaceuticals and manufacturing are R&D-intensive, but their pace of growth has been just… the IT industries are wiping out everyone else’s growth [by comparison]. That’s huge. One aspect of that is that’s where we’re seeing these big tech companies plowing billions of dollars into AI. If that’s happening in the world, I’m glad it’s happening in America, and I’m glad that they’ve been able to build on what has been decades now of federal research and development that laid the groundwork for it.

Now, it does then create a whole new set of issues. That really, I think, comes to where you were going, so let’s back up. What is the role of federal R&D? Number one, it is the R&D you need to achieve public missions. It’s the “R” and the “D,” product development, that you need for national security. It’s the R&D that you need for health, for meeting the climate crisis. It’s all the things that we’ve been talking about. It’s also that, in the process of doing that work, federal R&D lays a very broad foundation of basic research, which is important not just for public missions but also because we know it supports economic growth. It’s where students get educated. It’s where the fundamental research that’s broadly shared through publications gets done, the foundation that industry counts on. Economics has told us forever that those aren’t returns that can be appropriated by companies, and so it’s so important for the public sector to do that.

The question really becomes, when you step back and look at this huge growth in private sector R&D, how do we keep federal R&D strong? It doesn’t have to be the biggest, for sure, but it certainly has to be able to continue to support the growth and the progress that we want in our economy and, more broadly, these public missions. That’s why it was a priority for the president from the beginning, and he made really good progress in the first couple of years of his administration on building federal R&D. It grew fairly substantially in the first couple of budget cycles. Then, with the Republican budget caps from Capitol Hill in the last cycle, R&D took a hit, and that’s actually been a big problem that we are focused on.

The irony is that we’ve actually cut federal R&D in this last cycle at a time when our primary emerging economic and military competitor is the People’s Republic of China (PRC). They boosted R&D by 10 percent while we were cutting. And it’s a time when AI is a jump ball: a lot of AI advances came from American companies, but the advantages are not limited to America. It’s a time when we should be doubling down, and we’re doing the work to get back on track.


That is the national champion’s argument, right? I listen to OpenAI, Google, or Microsoft, and they say, “We’re American companies. We’re doing this here. Don’t regulate us so much. Don’t make us think about compliance costs or safety or anything else. We’ve got to go win this fight with China, which is unconstrained and spending more money. Let us just do this. Let us get this done.” Does that work with you? Is that argument effective?

First of all, that’s not really what I would say we’re hearing. We hear a lot of things. I mean, astonishingly, this is an industry that spends a lot of time saying, “Please do regulate us.” That’s an interesting situation, and there’s a lot to sort out. But look, I think this is really the point about all the work we’ve been doing on AI. It really started with the president and the vice president recognizing it as such a consequential technology, recognizing promise and peril, and they were very clear from the beginning about what the government’s role is and what governance really looks like here.

Number one is managing its risks. And the reason for that is number two, which is to harness its benefits. The government has, I think, two very important roles there. That was visible and obvious even before generative AI happened, and it’s even more so now that we can see the breadth of applications, each of which comes with a bright side and a dark side. So, of course, there are issues of embedded bias and privacy exposure, issues of safety and security, and issues about the deterioration of our information environment. We know that impacts on work have started and that they will continue. 

Those are all issues that require the government to play its role. It requires companies, it requires everyone to step up, and that’s a lot of the work that we have been doing. We can talk more about that, but again, in my mind, and I think for the president as well, the reason to do that work is so that we can use AI to do big things. Some of those big things are being done by industry, in the new markets that people are creating and the investment that comes in for that, and as long as it’s done responsibly, we want to see that happen. That’s good for the country, and it can be good for the world as well. 

But there are public missions that are not going to be addressed just by this private investment and that are ultimately still our responsibility. When I look at what AI can bring to each of the public missions that we’ve talked about, it’s everything from weather forecasting to [whether] we finally realize the promise of education tech for changing outcomes for our kids. I think there are ways that AI opens paths that weren’t available before, so I think it’s incredibly important that we also do the public sector work. By the way, it’s not all just using an LLM that someone’s been developing commercially; there’s a very different array of technologies within AI. But that work has to get done as well if we’re really going to succeed and thrive in this AI era.


When you say these companies want to be regulated, I’ve definitely heard that before, and one of the arguments they make is if you don’t regulate us and we just let market forces push us forward, we might kill everyone, which is a really incredible argument all the way through: “If we’re not regulated, we won’t be able to help ourselves. Pure capitalism will lead to AI doom.” Do you buy that argument that if they don’t stop it, they’re on a path toward the end of all humanity? As a policymaker, it feels like you need to have a position here.

I’ve got a position on that. First of all, I am struck by the irony of “it’s the end of the world, and therefore we have to drive.” I hear that as well. Look, here’s the thing. I think there’s a very garbled conversation about the implications, including safety implications, of AI technology. And, again, I’ll tell you how I see it, and you can tell me if it matches up to what you’re hearing. 

Number one, again, I start with the breadth of AI, and part of the cacophony in the AI conversation is that everyone is talking about the piece of it that they really care about, whether it’s bias in algorithms. If that’s what you care about, if that’s what’s killing people in your community, then, yes, that’s what you’re going to be talking about. But that’s actually a very different issue than misinformation being propagated more effectively. And all of those are different issues than what kinds of new weapons can be designed.

I find it really important to be clear about what the specific applications are and the ways that the wheels can come off. I think there’s a tendency in the AI conversation to say that, in some future, there will be these devastating harms that are possible or that will happen. The fact of the matter is that there are devastating harms that are happening today, and I think we shouldn’t pretend that it’s only a future issue. The one I will cite that’s happening right now is online degradation, especially of women and girls. The idea of using nonconsensual intimate imagery to really just ruin people’s lives was around before AI, but when you have image generators that allow you to make deepfake nudes at a tremendous rate, it looks like this is actually the first manifestation of an acceleration in harms as opposed to just risks with generative AI.

The machines don’t have to make huge advances in capability for that to happen. That’s a today problem, and we need to get after it right now. We’re not philosophers; we’re trying to make policies that get this right for the country. For our work, I think it’s really important to be clear about the specific applications, the risks, the potential, and then take actions now on things that are problems now and then lay the ground so that we can avoid problems to the greatest degree possible going forward.


I hear that. That makes sense to me. What I hear often in opposition to that is, “Well, you could do that in Photoshop before, so the rules should be the same.” And then, to me at least, the difference is, “Well, you couldn’t just open Photoshop and tell it what you wanted and get it back.” You had to know what you were doing, and there was a rate limiter or a skill limiter there that prevented these bad things from happening at scale. The problem is I don’t know where you land the policy to prevent it. Do you tell Adobe not to do it? Do you tell Nvidia not to do it? Do you tell Apple not to do it at the operating system level? Where do you think, as a policymaker, those restrictions should live?

I’ll tell you how we’re approaching that specific issue. Number one, the president has called on Congress for legislation on privacy and on protecting our kids most particularly as well as broader legislation on AI risks and harms. And so some of the answer to this question requires legislation that we need for this problem, but also for—

Right, but is the legislation aimed at just the user? Are we just going to punish the people who are using the tools, or are we going to tell the toolmakers they can’t do the thing?

I want to reframe your question as a systems question because there’s not one place where this problem gets fixed; it’s all the things that you were talking about. Some of the measures — for example, protecting kids and protecting privacy — require legislation, but they would broadly inhibit the accelerated spread of these materials. In a very different action that we took recently, working with the Gender Policy Council here at the White House, we put out a call to action to companies because we know the legislation’s not going to happen overnight. We’ve been hoping and wishing that Congress could move on it, but this is a problem that’s happening right now, and the people who can take action right now are companies. 

We put out a call to action that called on payment processors and called on the platform companies and called on the device companies because they each have specific things that they can do that don’t magically solve the problem but inhibit this and make it harder and can reduce the spread and the volume. Just as an example, payment processors can have terms of service that say [they] won’t provide payment processing for these kinds of uses. Some actually have that in their terms of service. They just need to enforce it, and I’ve been happy to see a response from the industry. I think that’s an important first step, and we’ll continue to work on the things that might be longer-term solutions. 


I think everyone looks for a silver bullet, and almost every one of these real-world issues is something where there is no one magic solution, but there are so many things you can do if you understand all the different aspects of it — think of it as a systems problem and then just start shrinking the problem until you can choke it, right?

There’s a part of me that says, in the history of computing, there are very few things the government says I cannot do with my MacBook. I buy a MacBook or I buy a Windows laptop and I put Linux on it, and now I’m just pretty much free to run whatever code I want, and there’s a very, very tiny list of things I’m not allowed to do. I’m not allowed to counterfeit money with my computer. At some layers of the application stack, that is prevented. Printer drivers won’t let you print a dollar bill. 

Once you expand that to “there’s a bunch of stuff we won’t let AI do, and there are open-source AI models that you can just go get,” the question of where you actually stop it, to me, feels like it requires both a cultural change, in that we’re going to regulate what I can do with my MacBook in a way that we’ve never done before, and possibly regulation at the hardware level, because if I can just download some open-source AI model and tell it to make me a bomb, all the rest of it might not matter.

Hold on that. I want to pull you up out of the place that you went for a minute because what you were talking about is regulating AI models at the software level or at the hardware level, but what I’ve been talking about is regulating the use of AI in systems, the use by people who are doing things that create harm. Let’s start with that. 

If you look at the applications, a lot of the things that we’re worried about with AI are already illegal. By the way, it was illegal for you to counterfeit money even if there wasn’t a hardware protection. That’s illegal, and we go after people for that. Committing fraud is illegal, and so is this kind of online degradation. So, where things are illegal, the issue is one of enforcement because it’s actually harder to keep up with the scale of acceleration with AI. But there are things that we can do about that, and our enforcement agencies are serious, and there are many examples of actions that they’re taking.


What you’re talking about is a different class of questions, and it’s one that we have been grappling with, which is what are the ways to slow and potentially control the technology itself? I think, for the reasons you mentioned and many more, that’s a very different kind of challenge because, at the end of the day, models are a collection of weights. It’s a bunch of software, and it may be computationally intensive, but it’s not like controlling nuclear materials. It’s a very different kind of situation, so I think that’s why that’s hard.

My personal view is that people would love to find a simple solution where you corral the core technology. I actually think that, in addition to being hard to do for all the reasons you mentioned, one of the persistent issues is that there’s a bright and dark side to almost every application. There’s a bright side to these image generators, which is phenomenal creativity. If you want to build biodesign tools, of course a bad actor can use them to build biological weapons. That’s going to get easier, unfortunately, unless we do the work to lock that down. But that’s actually going to have to happen if we’re going to solve vexing problems in cancer. So, I think what makes it so complex is recognizing that there’s a bright and a dark side and then finding the right way to navigate, and it’s different from one application to the next. 

You talk about the shift between public and private funding over time, and it moves back and forth. Computing is largely the same. There are open eras of computing and closed eras of computing. There are more controlled eras of computing. It feels like, with AI, we are headed toward a more controlled era of computing where we do want powerful biodesign tools, but we might only want some people to have them. As opposed to, I would say, up until now, software’s been pretty widely available, right? New software, new capabilities hit, and they get pretty broadly distributed right away. Do you feel that same shift — that we might end up in a more controlled era of computing?

I don’t know because it’s a live topic, and we’ve talked about some of the factors. One is: can you actually do it, or are you just trying to hold water in your hand while it slips out? Secondly, if you do it effectively, no action comes without a cost. So, what is the cost? Does it slow down your ability to design the breakthrough drugs that you need? Cybersecurity is the classic example: the exact same advanced capabilities that allow you to find vulnerabilities quickly are bad for the world if you’re a bad guy and good for the world if you’re finding and patching those vulnerabilities quickly, but it’s the same core capability. Again, it’s not yet clear to me how this will play out, but I think it’s a tough road that everyone’s trying to sort out right now.

One of the things about that road that is fascinating to me is there seems to be a core assumption baked into everyone’s mental models that the capability of AI, as we know it today, will continue to increase almost at a linear rate. No one is predicting a plateau anytime soon. You mentioned that last year was pretty crazy for you. That’s leveled off, and I would attribute at least part of that to the fact that the capabilities of AI systems have leveled off. As you’ve had time to look at this, and as you think about the amount of technology you’ve been involved with over your career, do you think we’re overestimating the rate of progression here? Do you think LLM systems in particular can live up to our expectations?


I have a lot to say about this. Number one, this is how we do things, right? We get very excited about some new capability, and we just go crazy about it, and people get so jazzed about what could be possible. It’s the classic hype curve, right? It’s the classic thing, so of course that’s going to happen. Of course we’re doing that in AI. When you peel the onion for really genuinely powerful technologies, when you’re through the hype curve, really big shifts have happened, and I’m quite confident that that’s what’s happening with AI broadly in this machine learning generation.

Broadly with machine learning or broadly with LLMs and with chatbots?

Machine learning. And that’s exactly where I want to go next because I think we are having a somewhat oversimplified conversation about where advances in capability come from, and capability always comes hand in hand with risks. I think about this a lot, both because of the things I want to do on the bright side and because it’s going to come with a dark side. The one dimension that we talk about a lot for all kinds of reasons is primarily about LLMs, but it’s also about very large foundation models, and it’s a dimension of increasing capability that’s defined by more data and more flops of computing. That’s what has dominated the conversation. I want to introduce two other dimensions. One is training on very different kinds of data. We’ve talked about biological data, but there are many other kinds of data: all kinds of scientific data, sensor data, administrative data about people. Those each bring different kinds of advances in capability and, with them, risks.

Then, the third dimension I want to offer is the fact that you never interact with an AI model directly; AI models live inside of a system. Even a chatbot is actually an AI model embedded in a system. But as AI models become embedded in more and more systems, including systems that take action in the online world or in the physical world, like a self-driving car or a missile, that’s a very different dimension of risk — what actions ensue from the output of a model? And unless we really understand and think about all three of those dimensions together, I think we’re going to have an oversimplified conversation about capability and risk.

But let me ask the simplest version of that question. Right now, what most Americans perceive as AI is not the cool photo processing that has been happening on an iPhone for years. They perceive the chatbots — this is the technology that’s going to do the thing. Retrieval-augmented generation inside your workplace is going to displace an entire floor of analysts who might otherwise have asked the questions for you. This is the—


That’s one thing that people are worried about.

This is the pitch that I hear. Do you think that LLM technology, specifically, can live up to the burden of the expectations that the industry is putting on it? Because I feel like whether or not you think that is true shapes how you might want to regulate it, and that’s what most people are experiencing and worried about right now.

I talk to a broader group of people who are seeing AI, I think, in different ways. What I’m hearing from you is, I think, a very good reflection of what I’m hearing in the business community. But if you talk to the broader research and technical community, I think you do get a bigger view on it because the implications are just so different in different areas, especially when you move to different data types. I don’t know if it’s going to live up to it. I think that’s an open question, and I think the answer is going to be both a technical answer and a practical one that businesses are sorting out. What are the applications in which the quality of the responses is robust and accurate enough for the work that needs to get done? I think that’s all still got to play out.

I read an interview you did with Steven Levy at Wired, who is wonderful, and you described showing ChatGPT to President Biden, and I believe you generated a Bruce Springsteen soundalike, which is fascinating. 

We had it write a Bruce Springsteen song. It was text, but yeah.


Wild all the way around. Incredible scene just to ponder in general. We’re talking just a couple of days after the music industry has sued a bunch of AI companies for training on their work. I’m a former copyright lawyer. I wasn’t any good at it, but I look at this, and I say, “Okay, there’s a legal house of cards that we’ve all built on, where everyone’s assumed they’re going to win the fair use argument the way that Google won the fair use argument 20 years ago, but the industry isn’t the same, the money isn’t the same, the politics aren’t the same, the optics aren’t the same.” Is there a chance that it’s actually copyright that ends up regulating this industry more than any sort of directed top-down policy from you?

I don’t know the answer to that. I talked about the places where AI accelerates harms or risks or things that we’re worried about, but they’re already illegal. You put your finger on what is my best example of new ground because this is a different use of intellectual property than we’ve had in the past. I mean, right now what’s happening is the courts are starting to sort it out as people bring lawsuits, and I think there’s a lot of sorting out to be done. I’m very interested in how that turns out from the perspective of LLMs and image generators, but I think it has huge implications for all the other things I care about using AI for.

I’ll give you an example. If you want to build biodesign tools that actually are great at generating good drug candidates, the most interesting data that you want in addition to everything you currently have is clinical data. What happens inside of human beings? Well, that data, there’s a lot of it, but it’s all locked up in one pharmaceutical company after another. Each one is really sure that they’ve got the crown jewels.

We’re starting to envision a path toward a future where you can build an AI model that trains across those data sets, but I don’t think we’re going to get there unless we find a way for all parties to come to an agreement about how they would be compensated for having their data trained on. It’s the same core issue that we’re dealing with for LLMs and image generators. I think there’s a lot that the courts are going to have to sort out and that businesses are going to have to sort out in terms of what they consider to be fair value.

Does the Biden administration have a position on whether training is fair use?


Because this seems like the hard problem. Apple announced Apple Intelligence a few weeks ago and then sort of in the middle of the presentation said, “We trained on the public web, but now you can block it.” And that seems like, “Well, you took it. What do you want us to do now?” If you can build the models by getting a bunch of pharma companies to pool their data and extract value together from training on that, that makes sense. There’s an exchange there that feels healthy or at least negotiated for. 

On the other hand, you have OpenAI, which is the darling of the moment, getting in trouble over and over again for being like, “Yeah, we just took a bunch of stuff. Sorry, Scarlett Johansson.” Is that part of the policy remit for you, or is that, “We’re definitely going to let the court sort that out”?

For sure, we’re watching to see what happens, but I think that is in the courts right now. There are proposals on Capitol Hill. I know people are looking at it, but it’s not sorted at all right now.

It does feel like a lot of tech policy conversations land on speech issues or copyright issues one way or another. Is it on your mind that, as you make policy about investment and research and development in these areas over time, there’s this whole other set of problems around speech and copyright law that the federal government in particular is just not suited to solve?

Yeah, I mean, freedom of speech is one of the most fundamental American values. It’s the foundation of so much that matters for our country, for our democracy, for how it works, and so it’s such a serious factor in everything. And before we get to the current generation of AI, of course that was a huge factor in how the social media story unfolded. We’re talking about a lot of things where civil society has an important role to play, but these topics, in particular, are ones where civil society… really, it rests on their shoulders, because there’s a set of things that are appropriate for the government to do, and then it really is up to the citizens.


The reason I ask is that the social media comparison comes up all the time. I spoke to President Obama when President Biden’s executive order on AI came out, and he essentially made the direct point: “We cannot screw this up the way we did with social media.” 

I put it to him, and I’ll put it to you: The First Amendment is sort of in your way. If you tell a computer there are things you don’t want it to make, you have kind of passed a speech regulation one way or the other. You’ve said, “Don’t do deepfakes, but I want to deepfake President Biden or President Trump during the election season.” That’s a hard rule to write. It’s difficult in very real ways to implement that rule in a way that comports with the First Amendment, but we all know we should stop deepfakes. How do you thread that needle?

Well, I think you should go ask Senator Amy Klobuchar, who wrote the legislation on exactly that issue, because there are people who have thought very deeply and sincerely about exactly this issue. We’ve always had limits on First Amendment rights because of the harms that can come from the abuse of the First Amendment, and so I think that will be part of the situation here.

With social media, I think there’s a lot of regret about where things ended up. But again, Congress really does need to act, and there are things that can be done to protect privacy. That’s important for directly protecting privacy, but it is also a path to changing the pace at which bad information travels through our social media environment.

I think there’s been so much focus on generative AI and its potential to create bad or incorrect or misleading information. That’s true. But there wasn’t really much constraining the spread of bad information. And I’ve been thinking a lot about the fact that there’s a different AI. It’s the AI that was behind the algorithmic drive of what ads come to you and what’s next in your feed, which is based on learning more and more and more about you and understanding what will drive engagement. That’s not generative AI. It is not LLMs, but it’s a very powerful force that has been a big factor in the information environment that we were in before chatbots hit the scene.


I want to ask just one or two more questions about AI, and then I want to end on chips, which I think is an equally important aspect of this whole puzzle. President Biden’s AI executive order came out [last fall]. It prescribed a number of things. The one that stood out to me as potentially most interesting in my role as a journalist is a requirement that AI companies would have to share their safety test results and methodologies with the government. Is that happening? Have you seen the results there? Have you seen change? Have you been able to learn anything new?

As I recall, that’s above a particular threshold of compute. Again, so much of the executive order was dealing with the applications, the use of AI. This is the part that was about AI models, the technology itself, and there was a lot of thought about what was appropriate and what made sense and what worked under existing law. The upshot was a requirement to report once a company is training above a particular compute threshold, and I am not aware that we’ve yet hit that threshold. I think we’re sort of just coming into that moment. The Department of Commerce executes that, and they’ve been putting all the guidelines in place to implement that policy, but we’re still at the beginning of it, as I understand it.

If you were to receive that data, what would you want to learn that would help you shape policy in the future?

The data about who’s training?

Not the data about who’s training. If you were to receive the safety test data from the companies as they train the next generation of models, what information is helpful for you to learn?


Let’s talk about two things. Number one, just understanding which companies are pursuing this particular dimension of advancement in capability, more compute, is helpful, just to be aware of the potential for big advances, which might carry new risks with them. That’s the role that it plays.

I want to turn to safety because I think this is a really important subject. Everything that we want from AI hinges on the idea that we can count on it: that it’s effective at what it’s supposed to do, that it’s safe, that it’s trustworthy. That’s very easy to want. It turns out, as you know, to be very hard to actually achieve, and it’s also hard to assess and measure. As for all the benchmarks that exist for AI models: it’s interesting to hear how they do on standardized tests, but they’re just benchmarks that tell you something. They don’t really tell you that much about what happens when humanity interacts with these AI models, right?

One of the limitations in the way we’re talking about this is that we talk about the technology, when all the interesting things happen when human beings interact with the technology. If you think AI models are complex and opaque, you should try human beings. I think we have to understand the scale of the challenge and the work that the AI Safety Institute here is doing. This is a NIST organization that was started under the executive order. They’re doing exactly the right first steps, which is working with industry and getting everyone to understand what current best practices are for red teaming. That’s exactly where to start. 

But I think we also just have to be clear that our current best practices for red teaming are not very good compared to the scale of the challenge. This is actually an area that’s going to require deep research and that’s ongoing in the companies and more and more with federal backing in universities, and I think it’s essential.

Let’s spend a few minutes talking about chips because that is the other piece of the puzzle. The entire tech industry right now is thinking about chips, particularly Nvidia’s chips — where they’re made, where they might be under threat quite literally because they’re made in Taiwan. There’s obviously the geopolitics of China involved there. 


There’s a lot of investment from the CHIPS Act to move chip manufacturing back to the United States. A lot of that depends on the idea that we might have some national champions once again. I think Intel would love to be the beneficiary of all that CHIPS Act funding, but they can’t operate at the same process nodes as TSMC right now. How do you think about that R&D? Is that longer range? Is it, “Well, let’s just get some TSMC fabs in Arizona and some other places and catch up”? What’s the plan?

There’s a comprehensive strategy built around the $52 billion that was funded by Congress, with President Biden pushing hard to make sure we get semiconductors back at the leading edge in the United States. But I want to step back from that and tell you that this fall marks 40 years since I finished my PhD, which was on semiconductor materials. [When] I came to Washington, my hair was still black. That was really long ago. 

I came to Washington on a congressional fellowship, and what I did was write a study on semiconductor R&D for Congress. Back then, the US semiconductor industry was extremely dominant, and at that time, they were worried that these Japanese companies were starting to gain market share. And then a few actions happened. A lot of really good R&D happened. I got to build the first semiconductor office at DARPA, and every time I look at my cell phone, I think about the three or five technologies that I got to help start that are in those chips.

So, a lot of good R&D got done and great things happened over those 40 years, but all the manufacturing at the leading edge eventually moved out of the United States, putting us in this really, really bad situation for our supply chains and for the jobs all those supply chains support. The president likes to talk about the fact that when the pandemic shut down a semiconductor fab in Asia, there were auto workers in Detroit getting laid off. Those are the implications. Then, from a national security perspective, the issues are huge and, I think, very, very obvious. What was shocking to me is that after four decades of admiring this problem, we finally did something about it, and with the president and the Congress pulling together, a really big investment is happening. So, how do we get from here to the point where our vulnerability has been significantly reduced?

Again, you don’t get to have a perfect world, but we can get to a far better future. The investments that have been made include Intel, which is fighting to get back in and drive to the leading edge. It’s also, as you noted, TSMC and Samsung and Micron, all at the leading edge. Three of those are logic. Micron has memory. And Secretary [Gina] Raimondo has just really driven this hard, and we’re on track to have leading-edge manufacturing. Not all leading-edge manufacturing — we don’t need it all in the United States — but a substantial portion here in America. We’ll still be part of global supply chains, but we’re going to reduce that really critical vulnerability.


Is there a part where you say, “We need to fund more bleeding-edge process technology in our universities so that we don’t miss a turn, like Intel missed a turn with EUV”?

Number one, part of the CHIPS Act is a substantial investment, over $10 billion, in R&D. Number two, I spent a lot of my career on semiconductor R&D — that’s not where we fell down. It’s about turning that R&D into US manufacturing capability. Once you lose the leading edge, the next generation and the generation after that are going to get driven wherever your leading edge is. So, R&D eventually moves. I think it was a well-constructed package in CHIPS that said we have to get manufacturing capacity at the leading edge back and then build the R&D to make sure that we also win in the future and are able to move out beyond that.

I always think about the fact that the entire chips supply chain is utterly dependent on ASML, the Dutch company that makes the lithography machines. Do you have a plan to make that more competitive?

That’s one of the hardest challenges, and I think we’re very fortunate that ASML is a European company with operations around the world, and that the company and its country are good partners in the ecosystem. It’s a very hard challenge, as you well know, because the cost and the complexity of those systems has just… It’s actually mind-boggling when you see what it takes to make this thing that ends up being a square centimeter; the complexity of what goes on behind that is astonishing.

We’ve talked a lot about things that are happening now but started a long time ago. The R&D investment in AI started a long time ago; the explosion is now. The investment in chips started a long time ago (that’s your career), and the explosion and the focus are now. As you think about your office and the policy recommendations you’re making, what are the small things happening now that might be big in the future?


I think about that all the time. That’s one of my favorite questions. Twenty or 30 years ago, the answer to that was biology starting to emerge. Now I think that’s a full-blown set of capabilities: not just cool science but powerful capabilities, of course for pharmaceuticals, but also for bioprocessing and biomanufacturing, to make sustainable pathways for things that we currently get through petrochemicals. I think that’s a very fertile area, and it’s one we put a lot of focus on. Now, if you ask me what’s happening in research that could have huge implications, I would tell you it’s what’s changing in the social sciences. We tend to talk about the progression of the information revolution in terms of computing and communications and the technology.

But as that technology has gotten so intimate with us, it is giving us ways to understand individual and societal behaviors and incentives and how people form opinions in ways that we’ve never had before. If you combine the classic insights of social science research with data and AI, I think it’s starting to be very, very powerful, which, as you know from everything I’ve told you, means it’s going to come with bright and dark sides. I think that’s one of the interesting and important frontiers.

Well, that’s a great place to end it, Director Prabhakar. Thank you so much for joining Decoder. This was a pleasure.

Great to talk with you. Thanks for having me. 

Decoder with Nilay Patel /

A podcast from The Verge about big ideas and other problems.


Technology

Shedding light on Iran’s longest internet blackout


After protests broke out in early January, the Iranian regime shut down the internet, starting the longest blackout in Iranian history. Despite this attempt to keep the protests from spreading, they did not stop. Still, the internet shutdown slowed the spread of information both inside and outside Iran.

Behind the heavily policed borders and the jammed signals, an unprecedented wave of state violence continues to add to a death toll somewhere between 3,000 and 30,000. Even the lowest count, which has been acknowledged by the Iranian state and is likely a wild underestimate, would make these last few weeks one of the bloodiest uprisings in modern history.

The situation in Iran can be hard to grasp. The history is complicated; the state of the technology and internet infrastructure there is constantly in flux. To get a sense of what is happening right now, I turned to an expert. Mahsa Alimardani, the associate director of the Technology Threats & Opportunities program at WITNESS, has been a researcher and advocate in the digital rights space — particularly around Iran — since 2012. I spoke with her about what is happening in Iran, and how technology both props up and threatens repressive regimes.

The Verge: What is internet access in Iran like right now?

Mahsa Alimardani: Since the weekend [of January 24], there has been some resumption of connectivity, and I’m a little bit worried that this might convince people that things are back to normal. Last I saw, traffic was at around 30 to 40 percent of normal in some of the Cloudflare network data for Iran, and connectivity is very inconsistent. Some circumvention tools have started to work.
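[Ed. note: Connectivity figures like this come from aggregate network measurements. As a rough illustration only, not Alimardani’s own tooling, here is a minimal Python sketch of pulling a country-level traffic time series from Cloudflare’s public Radar API. The endpoint path and response shape follow Radar’s documented netflows route to the best of our knowledge, and CF_API_TOKEN is a placeholder for your own API token.]

```python
# Sketch: fetch Cloudflare Radar's aggregate traffic time series for Iran.
# Assumes the documented /radar/netflows/timeseries endpoint and a
# Cloudflare API token in the CF_API_TOKEN environment variable.
import os
import requests

API = "https://api.cloudflare.com/client/v4/radar/netflows/timeseries"

def iran_traffic(date_range: str = "7d") -> list[tuple[str, float]]:
    resp = requests.get(
        API,
        headers={"Authorization": f"Bearer {os.environ['CF_API_TOKEN']}"},
        params={"location": "IR", "dateRange": date_range, "format": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    serie = resp.json()["result"]["serie_0"]
    # Values are normalized traffic volume; a sustained drop toward zero
    # is what an internet shutdown looks like in this data.
    return list(zip(serie["timestamps"], map(float, serie["values"])))

if __name__ == "__main__":
    for stamp, value in iran_traffic()[-12:]:
        print(stamp, f"{value:.2f}")
```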


Randomly, someone in Iran called me on FaceTime yesterday. They were like, “My VPN stopped working, so I just tried to call with FaceTime, and for some reason, it didn’t even need a VPN.” But it was a momentary glitch. Various things are happening across the network, and it’s not really clear why there’s this opening or what it means for long-term connectivity.

Since January 8th, when there was a surge in the protest movement in Iran, there has been an internet shutdown, the longest in Iran’s history. They broke the record in length.

They also broke the record in the number of protesters who have been massacred. It’s horrifying to think that technology helps enable such crimes.

Why does the Iranian government fear internet access?

In 1988, there was a fatwa under which the government massacred a lot of political prisoners in a short span of time. I bring this up because it happened when there was no internet, and the media was heavily controlled and centralized by the state. If you did not flee Iran, and if you were not part of the generation of prisoners and political activists who survived, it was very hard to pass on the memory of that event. Peers of mine in Iran didn’t grow up with the same information. It’s so interesting having these conversations with people and realizing they are learning this history only when they leave the country.


What’s been a real game changer is the way you can document and witness these kinds of crimes in the age of the internet. That’s obviously a big threat to the regime. It’s a massive threat to them that people can hold them accountable and can document and witness what they’re doing.

Anytime anyone sees a severe crackdown like an internet shutdown, you know that it’s going to be followed by violence. In 2019, there was a week-long internet shutdown, under the blanket of which they massacred 1,500 people. The reason is that they don’t want people to use the internet for mobilization and communication, and they don’t want there to be a way to document what’s happening.

Anytime anyone sees a severe crackdown like an internet shutdown, you know that it’s going to be followed by violence

Denial of the scale of their crimes is part of what they do in Iran. It’s very hard to assess how much legitimacy the regime has, because obviously you can’t do free polling. You don’t have free media. Even when foreign journalists go there, they’re followed by minders and the reporting is super limited. The UN hasn’t been able to have anyone do proper site visits for human rights documentation since the start of this regime in 1979.

There isn’t any real access to professional on-the-ground documentation and fact-finding. So it all really depends on the internet, on people, on citizen media: people sending things, putting them online, and then professional fact-checkers doing verification.


What was internet access in Iran like most recently? What platforms and service providers did people use before the blackout started?

Iranians are extremely tech savvy because there’s been a cat-and-mouse game across the internet for most of its existence. Since 2017 or 2018, there have been protests every two years on average. Each time, there’s been a different level of censorship and new kinds of rules and regulations.

In 2017, [messaging app] Telegram was massive. Some people were even saying Telegram was the internet for Iranians; they were doing everything across Telegram. It worked really well, especially with network bandwidth being really low. So Telegram was a place for news, chatting, socializing, everything, even online markets. But then they blocked it in 2018 when protests started, because protest mobilization on there was a threat to the regime.

There was a move toward Instagram and WhatsApp becoming the most popular applications.

They had yet to be blocked back then. Instagram was more for fun, but it became much more politicized after Telegram was blocked. Then, during the Woman Life Freedom movement in 2022, Instagram and WhatsApp got blocked.


The regime has spent a lot of effort in trying to disable VPNs

Most people are just on VPNs. The regime has spent a lot of effort trying to disable VPNs. There are a lot of different VPN projects, both for-profit and nonprofit, that work within that cat-and-mouse game, where protocols are disabled and new ones are created.

An average Iranian often has many different VPNs. When one doesn’t work, they’ll turn on another one.
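[Ed. note: That failover habit, trying one tunnel after another until something connects, is simple enough to picture in code. Here is a minimal sketch under stated assumptions: the proxy addresses are hypothetical placeholders, and SOCKS support in requests needs the optional PySocks dependency.]

```python
# Sketch: cycle through a list of proxies until one reaches the open web.
# The addresses below are hypothetical placeholders, not real services.
# SOCKS support requires the optional extra: pip install requests[socks]
import requests

PROXIES = [
    "socks5://127.0.0.1:1080",
    "socks5://127.0.0.1:1081",
    "http://127.0.0.1:8118",
]

def first_working_proxy(test_url: str = "https://www.wikipedia.org") -> str | None:
    for proxy in PROXIES:
        try:
            # If this request succeeds, the tunnel is usable.
            requests.get(test_url,
                         proxies={"http": proxy, "https": proxy},
                         timeout=10)
            return proxy
        except requests.RequestException:
            continue  # blocked or disabled; try the next one
    return None

if __name__ == "__main__":
    print(first_working_proxy() or "all tunnels blocked")
```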

We’ve talked about how technology threatens the regime and how average Iranians use it. Let’s switch over to the other side of this issue: how does technology enable repression?

There are various things the regime does, at different levels of information controls. There’s the censorship level of shutting the internet down.


Then there’s physical coercion. Like, I know people who have not reported the recent killing of their children because they were so frightened by the process they had to go through to get their loved one’s body.

They also flood the information space with a lot of misinformation. They create a lot of doubt.

They’ve been doing this information manipulation since before the internet. Iran is a very complicated information space. There are a lot of actors beyond the regime who also want to manipulate it. Even authentic dissidents and activists will get lumped in with Mossad or CIA operations.

Iran’s foreign relations muddy its information space

In 1953, the American CIA and British MI6 overthrew the democratic government of Iran, consolidating power under a monarchy that was more favorable to the US and the UK. Many believe that the political instability caused by the CIA and MI6 eventually led to the Islamic Revolution of 1979, which established the current authoritarian regime.


From 2014 to 2024, Iran and Russia joined a strategic partnership with the Syrian dictatorship as part of the Syrian civil war. The United States formed its own coalition; both coalitions purported to fight ISIS. The civil war spawned massive amounts of internet disinformation, and in 2018, Facebook and Twitter deleted hundreds of accounts originating in Russia and Iran that formed a global influence network pushing disinformation. The Syrian regime was overthrown at the end of 2024. The next year, following decades of hostilities, Israel and Iran engaged in a 12-day war.

These are some, but not all of the factors that contribute to the complicated information space in Iran that Alimardani is referring to.

The regime’s campaign existed pre-internet, but with technology, it went into overdrive. They’ve been quite clever in some of the ways they’ve covered the protests. They’ve even been able to mobilize people who are sympathetic to the Palestinian cause against the Iranian cause for liberation.

There have been a lot of documented efforts by them to manipulate protest documentation and undermine it, exploiting the concept of the liar’s dividend, which is very easy to use in the increasingly AI-driven world we’re in.

Hold on, can you go through those examples you just mentioned? About mobilizing people who are sympathetic to the Palestinian cause?


Yeah, so, Iran is quite complicated in that it’s an Islamic fascist state. They use Islam in a lot of ways to repress the people. And there is a lot of very valid rhetoric about Islamophobia in the West, from the very specific context and history of the United States, such as what happened during the War on Terror.

But in Iran, it’s quite different. And this can really be manipulated and conflated, right? Mosques in Iran are often also the headquarters for the Basij [the Iranian paramilitary corps], and people might not know this. So there will be videos like, “Look at these protesters who are setting fire to this mosque. Look at these Islamophobic rioters.”

You might see that without the context that mosques are also places where the security forces that kill people are stationed, and miss why something like that would be attacked by Iranians seeking liberation.

You mentioned the regime’s use of AI — do you want to talk a little bit more about that?

Yeah, so, we didn’t need AI for authoritarian regimes to deny evidence of their crimes. Even before AI, Bashar al-Assad [the former dictator of Syria] was saying that reliable documentation of his crimes in Syria was not valid.


Whether we like it or not, AI is being integrated into a lot of things. AI editing is slowly becoming ubiquitous. In fact, we may come to a point where editing a photo, or anything else, unavoidably involves generative AI.

So you no longer have that binary of: if it’s AI, it’s fake; if there’s no AI, it’s real.

So there’s this very symbolic image that everyone has said reminded them of the Tiananmen Square Tank Man from 1989. But here, a protester is standing in front of armed security forces on motorcycles with weapons. [Ed. note: The New York Post ran with the headline “Powerful image of lone Iranian protester in front of security forces draws parallels to Tiananmen Square ‘Tank Man.’”]

This was a very low-resolution video taken from a high-rise [building]. Someone had screenshotted a frame from the video, and it was quite blurry.

They used some AI editing software to enhance it, and you could see some AI artifacts. Nevertheless, this is an authentic, verified image of a brave protester; lots of credible sources have verified it. But the AI artifacts were immediately pointed out, and a lot of regime accounts started this narrative of “This is all AI slop from Zionists.”


And of course, because Israel has a special interest in Iran, it has a Farsi-language state account. Israel’s Farsi state account shared the image, which further fueled the claim that this authentic image from Iran was AI slop being pushed by the enemy, Israel.

As you’ve already mentioned, Iran has a complicated information environment. What would you say are the various actors in this space? What kinds of things are they doing?

Obviously there are foreign policy interests by Israel and the US in Iran, just because of the history and very antagonistic relationship they’ve had from the very beginning of the revolution.

The Iran-Israel war in June 2025 was a super interesting moment because it started a few weeks after Google launched Veo 3, which made very realistic generative AI content very easy to produce. So right off the bat, you could see a lot of AI content about the war coming from both sides. This wasn’t the first war where that’s happened (the Ukraine war has had so many examples), but the technology has advanced far since Russia’s invasion of Ukraine [beginning in 2022], so AI content became a very big part of the narrative of the situation in Iran.

The most famous example from the Iran-Israel war was a piece of manipulated content that Citizen Lab was later able to attribute to the Israeli state. It was an AI-generated video of Israel bombing the gates of Evin Prison, perpetuating the narrative that Israel’s military operations were very precise and that it was freeing political prisoners.


Evin is a very famous prison for a lot of activists and dissidents and intellectuals in Iran. Human Rights Watch and Amnesty International called the bombing of Evin Prison a war crime. And indeed, political prisoners were casualties of the bombing.

But that deepfaked video went viral. Mainstream media even reposted it before various researchers, including our deepfakes rapid response force, were able to establish that it was indeed a manipulated video.

So you have an information space that is quite complicated. But in this scenario, I think it would really be remiss to put too much emphasis on the role these other actors play. Things from these outside actors fog up the information space, but ultimately what’s really happening is an unprecedented massacre, and the perpetrator is the Islamic Republic of Iran.

I’ve seen some reporting about how Iranians bought Starlink terminals prior to the blackout. Can you say anything about that?

Yeah, I want to start by referencing a really great article by the Sudanese activist Yassmin Abdel-Magied, called “Sudanese People Don’t Have the Luxury of Hating Elon Musk.” Whatever my personal ideas are about Elon Musk, you have to give credit where credit is due. This technology is a game changer. It’s been a game changer in Sudan. And it has been in Iran.


We’ve had a few days of a little bit of connectivity, with people coming online just through the ordinary network, but when the shutdown was full and complete, Starlink was really the only window we had into Iran.

When the shutdown was full and complete, Starlink was really the only window we had into Iran

And if you talk to documentation organizations, they’ll tell you they were getting evidence and doing verification through what was coming in from the Starlink connections. I know of people who had a Starlink and had a whole neighborhood of people come by to check in and use the Wi-Fi.

The most credible estimate before this situation was that there were about 50,000 Starlinks in Iran. There are likely more than 56,000 now. Starlink became very popular during the Iran-Israel war because, of course, the Islamic Republic enacted another shutdown then. A lot of people invested in getting Starlink at that point.

You can get anything you want in Iran through smugglers. Receivers ordinarily cost a few hundred US dollars, but I think Starlink was going for around $1,000 at the time because demand was so high, and the last price I heard was $2,000. It’s a lot of money. Given the demand and the massive risk the smugglers have to undertake, I think it’s fair, but it also means you can’t really scale this, and the people who have it are very privileged or have access to very privileged people.


What we’re seeing is a very small window. In discussions with various folks who have been doing firsthand documentation, they’ve said, “We’re not getting enough from Kurdistan. We’re not getting enough documentation from Sistan and Baluchestan.” Historically, these areas are often at the forefront of protests, because the regime deploys some of its bloodiest repression in these provinces with marginalized ethnicities. Areas like Sistan and Baluchestan have a lot of economic poverty, so they have less access to something privileged like Starlink.

Satellite internet is really this way of reimagining connectivity

For all these years, I and many other people have been working on this problem of internet censorship and internet shutdowns, and there really hasn’t been a way to reimagine the system. There’s this concept of digital sovereignty, in which internet access and internet infrastructure fit within national borders. Even in the most democratic of countries, this is still national infrastructure that the government can access or exert forms of control over.

This concept has to be broken. Satellite internet is really a way of reimagining connectivity, not just for Iran but anywhere that lack of connectivity results in a crisis, whether a humanitarian one or a massacre of this proportion.

It’s really important to reconceive access to satellite internet in a way that could scale beyond those who are privileged and those willing to take the risk. One of the ideas I’ve been working on with colleagues at Access Now is to push for direct-to-cell access, a form of satellite connectivity that relies on technology already in phones made from 2020 onwards. We launched a campaign called Direct 2 Cell to push this concept forward.


On a personal note, how are you doing? Have you heard from your friends, family, other people you know in Iran recently?

I’ve been able to be in touch with some of my family and others here and there.

I also had that random FaceTime audio call from another person I know. I was very worried about them because they’ve been at the protests. I had heard through various people that they were okay, but I finally heard from them firsthand, and it was such a bizarre experience, speaking to them.

I had never heard them sound the way they sounded: recounting their experience of leaving the protest before the military tanks came to open fire on the crowds, how they got tear-gassed, and then, for the next few days, seeing water hoses washing blood off the streets. They were making a lot of dark jokes; I had never heard them sound this way. I don’t know how you can walk the streets of your neighborhood, seeing people wash off blood, and not have something fundamentally change in your mind.

I just… I can’t imagine how I would process it if I were there. As someone in the diaspora, it’s hard to process being privileged and being away.



Technology

Tax season scams surge as filing confusion grows



Tax season already brings stress. In 2026, it brings added confusion. Changes to tax filing programs and the discontinuation of the free government-run filing system have left many taxpayers unsure about what is legitimate. That uncertainty has created an opening for scammers who move quickly when people hesitate. 

“Every tax season we see scammers ramp up their activity, and with likely confusion now that the free government-run filing system is discontinued, we’re sure scammers will take advantage,” said Lynette Owens, vice president of consumer marketing and education at Trend Micro.

In past years, scammers have leaned heavily on impersonation. Fake IRS emails promising refunds, text messages claiming accounts have been flagged under new rules and fraudulent tax help offers that promise faster returns continue to circulate, Owens said. As February begins, many taxpayers feel pressure to file quickly. That urgency creates the perfect conditions for fraud.



Scam emails often pose as IRS notices and demand immediate action to protect a refund. The IRS does not contact taxpayers this way. (Kurt “CyberGuy” Knutsson)

Why scammers thrive when tax rules feel unclear

Uncertainty is one of the most effective tools scammers have. When taxpayers are unsure how filing rules work or whether a message is legitimate, criminals step in with communications designed to sound official and helpful. The goal is not clarity. It is speed.

“Scammers aim to create a heightened sense of anxiety among the people they are targeting,” Owens said. “When taxpayers don’t feel confident about what’s real, whether it’s new filing options, eligibility rules or program updates, criminals step in with messages that sound official and helpful.” They often pose as the IRS, a tax prep service, or even government support. Once trust is established, the message quickly turns transactional, asking for clicks, personal data or payments.


The most common IRS impersonation scams right now

While the delivery methods change, the core message rarely does. Something is wrong, and it must be fixed immediately. 

“The most common tactic we’re seeing is fake refund or account alert messages that claim something is wrong and demand immediate action,” Owens said. Other scams go a step further. Some direct victims to fake IRS login pages designed to steal credentials.

Others promote fraudulent tax assistance, presenting themselves as government-backed or low-cost help in order to collect personal and financial information. These scams arrive by email, text message, phone calls and fake websites. Many are polished enough to appear legitimate at first glance.

Why phrases like “new rules” and “urgent issues” work

Language plays a central role in tax scams. Phrases such as “new rules” or “urgent account issues” are designed to trigger panic before logic has a chance to catch up. They suggest the recipient has missed something important or risks losing money.

“Those phrases work because they can trigger panic and urgency, and people are more likely to react emotionally than logically,” Owens said. “New rules suggest you may have missed something important, and an urgent account issue creates fear of penalties, delays or losing a refund.” 


The safest response is to pause. Do not click links, reply to messages or call phone numbers included in the alert. Instead, go directly to a trusted source like IRS.gov using your own browser.

A real tax scam message that looks legitimate

Many tax scams follow a familiar structure. A common example reads: “IRS Notice: Your tax refund is on hold due to a filing discrepancy under updated 2026 rules. Verify your identity now to avoid delays.” 

At first glance, messages like this may appear credible. They often include official-looking logos, reference numbers and links that resemble real government pages.

“It may include a convincing IRS-style logo, a case number and a link that looks legitimate at a glance,” Owens said. “But the red flags are usually the same.” The message pressures immediate action, directs users to non-government websites, and requests sensitive information such as Social Security numbers, bank details or login credentials.
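One of those red flags, a link that does not actually point to a government domain, is something you can check mechanically. Here is a minimal Python sketch of that single check; it is our illustration of the idea, not a tool Owens describes, and the lookalike address in it is made up:

```python
# Sketch: flag links in a message whose registered domain is not irs.gov.
# Scammers often use lookalikes such as "irs.gov.refund-check.example".
import re
from urllib.parse import urlparse

LINK_RE = re.compile(r"https?://\S+")

def suspicious_links(message: str, trusted: str = "irs.gov") -> list[str]:
    flagged = []
    for url in LINK_RE.findall(message):
        host = (urlparse(url).hostname or "").lower()
        # Accept only the exact trusted domain or its subdomains (www.irs.gov).
        if host != trusted and not host.endswith("." + trusted):
            flagged.append(url)
    return flagged

if __name__ == "__main__":
    msg = ("IRS Notice: Your tax refund is on hold. "
           "Verify now at https://irs.gov.refund-check.example/verify")
    print(suspicious_links(msg))  # flags the lookalike link
```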



Fake IRS alerts use urgent language like “account issue” or “new rules” to trigger panic. Scammers rely on fear to push quick decisions. (Kurt “CyberGuy” Knutsson)

What happens after someone falls for a tax scam?

The damage rarely ends with a single click. 

“The most serious consequences are identity theft and financial loss,” Owens said. “Once scammers have personal information, they can file fraudulent tax returns, steal refunds, open credit accounts and access bank funds.”

Victims often spend months working to recover lost money, repair credit damage and restore their identities.

How the IRS really communicates with taxpayers

Despite repeated warnings, many people still believe the IRS might email or text them. 


“A legitimate tax service or the IRS won’t reach out unexpectedly by email, text or social media, and they won’t pressure you to act immediately,” Owens said.

Scam messages often share the same warning signs. They sound urgent, include links or attachments and ask for sensitive information right away. If a message creates panic or demands fast action, that alone is reason to be skeptical. The IRS primarily communicates by official mail. Unexpected digital contact should always raise concern.

What to watch for next as scams evolve

Tax scams continue to grow more sophisticated each year. 

“Taxpayers should watch for scams that feel more real than ever,” Owens said. “That includes highly polished phishing emails, refund texts designed for quick mobile clicks, fake tax help ads and cloned websites that mimic real IRS or tax prep portals.”

The biggest mistake people still make is treating an unexpected tax message like an emergency. 


“In tax season, speed is the scammer’s advantage,” Owens said. “Taking 30 seconds to double-check the source can prevent months of financial and identity damage.”

What to do if you clicked or responded by mistake

If someone realizes too late that a message was fraudulent, fast action can limit the damage. 

“First, stop engaging immediately,” Owens said. “Don’t click links, download attachments or reply.”

Next, report the incident. Forward phishing emails to phishing@irs.gov and file a report at reportfraud.ftc.gov.

After that, monitor financial accounts closely, change passwords and consider placing a fraud alert or credit freeze if necessary.


To learn more about how to do this, go to Cyberguy.com and search “How to freeze your credit.” 


Tax scammers target personal and financial data to steal refunds or commit identity theft. (Kurt “CyberGuy” Knutsson)

Ways to stay safe during tax season

Scammers count on rushed decisions. The good news is that a few smart habits can dramatically lower your risk.

1) Slow down before responding to tax messages

Urgency is the scammer’s favorite tool. Messages that demand immediate action aim to short-circuit your judgment. 


“Scammers rely on fear, urgency or false promises, especially during tax season,” Owens said. “It’s important to slow down, verify information through official channels, and use trusted security tools.” If a message pressures you to act fast, stop. Take a breath before doing anything else.

2) Verify filing changes through official IRS channels

Scam messages often reference new rules, updated policies or eligibility changes. That language sounds credible when filing programs shift. Always confirm changes by typing IRS.gov directly into your browser or signing in to your trusted tax provider account. Never rely on links or phone numbers included in a message.

3) Protect tax accounts with strong credentials

Tax portals hold valuable personal and financial data. Weak passwords make them easy targets. Use strong and unique passwords for every tax-related account. A password manager can help generate and store secure credentials without relying on memory.

Next, see if your email has been exposed in past breaches. Our #1 password manager pick (see Cyberguy.com) includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.

Check out the best expert-reviewed password managers of 2026 at Cyberguy.com
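If you are curious how breach scanners can check passwords against leaks without ever seeing the password itself, many use the free Pwned Passwords API with a k-anonymity scheme: only the first five characters of the password’s SHA-1 hash are sent. Here is a minimal Python sketch of that technique; it is a generic illustration, not the implementation of any particular password manager:

```python
# Sketch: check a password against the Pwned Passwords API with k-anonymity.
# Only the first 5 hex characters of the SHA-1 hash ever leave your machine.
import hashlib
import requests

def times_pwned(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}",
                        timeout=15)
    resp.raise_for_status()
    # Each response line is "SUFFIX:COUNT"; look for our hash suffix.
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    hits = times_pwned("password123")
    print(f"seen in {hits} breaches" if hits else "no known breach")
```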


4) Watch for pressure tactics and refund promises

Scammers know refunds motivate quick action. Messages claiming your refund is waiting, delayed or at risk often signal fraud. Be cautious of promises like faster refunds, guaranteed results or special access to government-backed assistance. Legitimate services do not operate that way.

5) Avoid links and secure your devices with strong antivirus software 

Clicking a single link can expose login credentials or install malware. Do not click on links in unexpected tax messages. Also, use strong antivirus software to help block malicious sites and detect threats before damage occurs.

The best way to safeguard yourself from malicious links that install malware and potentially access your private information is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com

6) Reduce your digital footprint

Personal data fuels tax scams. The more information criminals can find online, the easier impersonation becomes. Using a data removal service can help limit exposed personal details across data broker sites. Less data means fewer opportunities for scammers to exploit your identity.


While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com


Kurt’s key takeaways

Tax season pressure makes even cautious people vulnerable. In 2026, filing confusion adds fuel to the fire. Scammers know this and design messages to look official, urgent and helpful. Pausing, verifying and trusting official sources remains the strongest defense. When something feels rushed, it is usually for a reason.

Have you received a suspicious IRS message this tax season, and what made you question whether it was real? Let us know by writing to us at Cyberguy.com




Copyright 2026 CyberGuy.com. All rights reserved.


Technology

Bill Gates says accusations contained in Epstein files are ‘absolutely absurd’


Reports of Bill Gates’ connections with Jeffrey Epstein grow more lurid with each dump of documents from the Department of Justice. The latest includes somewhat confusing emails that Epstein may have been drafting on behalf of someone named Boris, who worked at the Bill & Melinda Gates Foundation. The messages claim that Bill contracted an STD and wanted to “surreptitiously” give Melinda antibiotics. They also claim that Bill had “trysts” with married women and “Russian girls.”

In a statement, a spokesperson for Gates responded: “These claims are absolutely absurd and completely false. The only thing these documents demonstrate is Epstein’s frustration that he did not have an ongoing relationship with Gates and the lengths he would go to entrap and defame.”

It’s unclear who the Boris referenced in the emails is, or whether the messages were ever sent to anyone. Only Epstein is listed in the “to” and “from” fields.

Gates’ relationship with Epstein has become a major issue for the billionaire philanthropist. He initially downplayed his connections, but documents have suggested the two were closer than Gates admitted. He has repeatedly denied associating with Epstein outside of fundraising and philanthropic efforts and said their meetings were a “huge mistake.” However, Melinda Gates has stated that Bill’s association with Epstein played a role in her decision to file for divorce.
