Business
Column: Why hugely profitable corporations won't spend enough to keep hackers from stealing your private info
AT&T is one of America’s largest telecommunications companies. Last year it recorded a pretax profit of nearly $20 billion on $122.4 billion in revenue.
So why, you might ask, has AT&T been so pathetically sloppy about protecting its customers’ private information that the data of nearly all those customers — 110 million users — ended up in the hands of a “financially motivated” hacker group?
The breach was revealed on July 12, although it mostly occurred in 2022; AT&T attributed the reporting delay to requests from federal authorities to keep it under wraps while they investigated its national security significance.
Protecting your data is one of our top priorities.
— AT&T, after disclosing that personal data of as many as 110 million customers was stolen by hackers
This breach, cybersecurity experts say, is especially alarming because of the nature of the stolen data. It’s not merely financial data such as bank account or Social Security numbers that might enable hackers to raid a victim’s bank account or engage in identity theft to open new accounts.
In this case, it included information about which numbers hacked users called and which numbers called them, the lengths of those calls, and location data — where you might have been when making or receiving a call. The stolen records cover May through October 2022 and Jan. 2, 2023.
“Telecom providers hold some of the most sensitive information on consumers — a map of their daily lives — where they are, who they’re talking with, their social graph, everything,” says cybersecurity professional Brian Krebs.
The latest disclosure of a hack at AT&T might be considered a signpost for “the year of the megabreach.”
It follows AT&T’s announcement in April of an earlier, unrelated breach that may have compromised the Social Security numbers, PINs, email and mailing addresses, phone numbers, dates of birth and AT&T account numbers of 73 million current and former AT&T customers.
Both AT&T incidents pale in comparison with a massive data breach earlier this year at UnitedHealth Group, the nation’s biggest health insurance and health provider conglomerate. According to congressional testimony by UnitedHealth Chief Executive Andrew Witty and company news releases, a ransomware attack on the company’s Change Healthcare subsidiary has affected as many as 1 in 3 Americans.
Change Healthcare manages patient payments and reimbursements to medical providers. The ransomware hack crippled medical services nationwide and resulted in the exposure of patients’ treatment details and billing information, including credit card numbers. Patients reported that pharmacies were refusing to fill prescriptions because they couldn’t access insurance approvals, risking the patients’ health.
UnitedHealth said it paid a $22-million ransom in bitcoin, but couldn’t be sure that all the hacked information was returned. It also said that it advanced about $9 billion to providers to cover their expenses before their billing could be restored.
The company told Congress that it already had in place “a robust information security program with over 1,300 people and approximately $300 million in annual investment,” but of course those figures are meaningless — the question is how much it would cost to actually have a “robust” program in place, since $300 million obviously isn’t enough.
The breach occurred, according to testimony and statements by the company, because UnitedHealth tried to integrate Change Healthcare’s technology system with its own without first ensuring that Change’s system would require multifactor authentication, a basic security feature that requires users to enter an algorithmically generated code along with their password to gain access to a system or account.
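Multifactor authentication of this kind is commonly implemented with time-based one-time passwords (TOTP, standardized in RFC 6238): the server and the user's authenticator app share a secret and each derive a short code from it plus the current time, so a stolen password alone isn't enough to log in. Here is a minimal sketch using only Python's standard library — the secret, interval and digit count below are illustrative defaults, not any particular vendor's configuration:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, now=None):
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Count how many intervals have elapsed since the Unix epoch.
    counter = int((time.time() if now is None else now) // interval)
    msg = struct.pack(">Q", counter)  # counter as 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation: low 4 bits of the last byte pick an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890",
# time = 59 seconds, 8 digits -> "94287082"
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", digits=8, now=59))
```

Because both sides compute the same code independently, the server never transmits it; an attacker who breaches a password database — or, as here, a cloud account without MFA enabled — still lacks the time-varying second factor.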
The hackers breached “a legacy Change Healthcare server” that didn’t meet the parent company’s standards, the company said — but it used the noncompliant equipment anyway.
Data breaches affecting hundreds of thousands or millions of consumers have become such familiar features of the consumer landscape that the guilty companies respond with a standard playbook replete with promises to customers.
They point out all the data that wasn’t compromised — AT&T told customers that the latest debacle didn’t involve “the content of calls or texts, personal information such as Social Security numbers, dates of birth, or other personally identifiable information.” That’s a bit like airlines following up reports of deadly crashes by pointing out how many planes land and take off safely every day.
The companies typically offer aggrieved customers free credit monitoring and identity theft protection for a period of time; at UnitedHealth, that period is two years.
Whether those services are useful is open to question — after a 2017 data breach at the credit reporting firm Equifax exposed the personal data of 143 million Americans, the identity theft service LifeLock trumpeted its protective services (at $29.99 a month). What LifeLock didn’t make very clear was that the services it was selling were actually provided by Equifax.
The breached companies also attest to their determination to get to the bottom of the hacks, and to their commitment to customer security. AT&T’s recent breach disclosure included this pledge: “Protecting your data is one of our top priorities.”
If there were a trophy for flagrant lying in marketing materials, this would be a strong contender. Under the circumstances, it’s either blatantly untrue or reflects a critical flaw in the company’s fulfillment of its priorities. I asked AT&T what steps it has taken to discipline or remove any executives charged with fulfilling such a crucial priority, up to and including the CEO. AT&T didn’t respond directly to this or other questions I submitted, but referred me to its news release and a customer Q&A on the topic.
AT&T says the breach occurred in a company connection to a third-party cloud data service called Snowflake, to which it had entrusted its customer data. As it happens, some 165 of Snowflake’s corporate clients may also have been targeted by the hackers who struck AT&T. An ongoing investigation by cybersecurity experts suggests, however, that the fault isn’t Snowflake’s — it’s the fault of those clients, who didn’t observe best security practices.
That points to several issues that contributed to AT&T’s breach — and similar breaches around the corporate world. One is why AT&T is hoarding so much information about its users in the first place.
“To have years of call histories, text message histories and location data makes you a massive target for hackers,” says Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, a New York nonprofit.
“Why does AT&T keep so much information on so many users?” Cahn asks. “They have a perverse incentive to hold on to as much of our data as possible, to think about new ways to mine it for value. When they do that, we’re the ones put at risk.”
In any event, if AT&T is going to store data this sensitive, he says, it needs to employ more rigorous safeguards to protect it.
Yet in corporate America, cybersecurity has been an afterthought, if it receives any thought at all. “These companies at some point decide that it’s really expensive to care a lot more about security when there really aren’t a lot of consequences for screwing it up,” Krebs told me. “You might get sued or have to pay a few hundred million dollars in fines, but these are rounding errors on their profits.”
The European Union’s General Data Protection Regulation allows for a fine of up to 4% of a company’s annual revenue for an especially severe breach, but it’s unlikely that such a penalty could be legislated in the U.S. (If it were, AT&T might be liable for a bill of $4.9 billion.)
Krebs blames inattentive boards of directors. Even a data-oriented company such as AT&T has no directors with specific expertise in cybersecurity. Of the nine directors in place as of the company’s 2024 proxy statement, five are credited with experience in technology and innovation — in what Villanova University business professor Noah Barsky correctly calls “perfunctory” language in their bios.
Only one, Stephen J. Luczo, is said to have any particular expertise in cybersecurity, but that’s only as a private equity investor — his background is in investment banking. The board’s newest member, Marissa Mayer, may have cybersecurity experience, but it’s not encouraging: During her tenure as CEO of Yahoo (2012 to 2017), that company experienced an epic data breach that compromised all 3 billion of its user accounts.
“It’s clear that industry is never going to do enough on its own” to protect customer data, Cahn says. The task may have to be placed in regulatory hands. Krebs suggests something akin to a cybersafety review board to introduce something close to accountability. Cahn suggests rules requiring the proactive deletion of sensitive information such as location data and medical records — “You can’t steal what doesn’t exist,” he told me.
The market may yet exercise its own discipline. UnitedHealth is learning the hard way that carelessness about cybersecurity can have a material effect on earnings. In its second-quarter earnings report released Tuesday, the company said that the full-year cost of the Change Healthcare hack may come to as much as $2.05 per share, an increase of as much as 45 cents from its original estimate. Its second-quarter earnings came to $4.54 per share.
But it’s customers who will really bear the costs. “Most Americans,” Krebs says, “have no choice but to do business with these companies if they want to participate in the modern society.”
Block to cut more than 4,000 jobs amid AI disruption of the workplace
Fintech company Block said Thursday that it’s cutting more than 4,000 workers or nearly half of its workforce as artificial intelligence disrupts the way people work.
The Oakland parent company of payment services Square and Cash App saw its stock surge by more than 23% in after-hours trading after making the layoff announcement.
Jack Dorsey, the co-founder and head of Block, said in a post on social media site X that the decision wasn’t made because the company is in financial trouble.
“We’re already seeing that the intelligence tools we’re creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company,” he said.
Block is the latest tech company to announce massive cuts as employers push workers to use more AI tools to do more with fewer people. Amazon in January said it was laying off 16,000 people as part of an effort to remove layers within the company.
Block has laid off workers in previous years. In 2025, Block said it planned to slash 931 jobs, or 8% of its workforce, citing performance and strategic issues, but Dorsey said at the time that the company wasn’t trying to replace workers with AI.
As tech companies embrace AI tools that can code, generate text and do other tasks, worker anxiety about whether their jobs will be automated has heightened.
In his note to employees, Dorsey said he had weighed making the cuts gradually over months or years but chose to act immediately.
“Repeated rounds of cuts are destructive to morale, to focus, and to the trust that customers and shareholders place in our ability to lead,” he told workers. “I’d rather take a hard, clear action now and build from a position we believe in than manage a slow reduction of people toward the same outcome.”
Dorsey is also a co-founder of Twitter, which was renamed X after billionaire Elon Musk purchased the company in 2022.
As of December, Block had 10,205 full-time employees globally, according to the company’s annual report. The company said it plans to reduce its workforce by the end of the second quarter of fiscal year 2026.
The company’s gross profit in 2025 reached more than $10 billion, up 17% compared to the previous year.
Dorsey said he plans to address employees in a live video session and noted that their emails and Slack will remain open until Thursday evening so they can say goodbye to colleagues.
“I know doing it this way might feel awkward,” he said. “I’d rather it feel awkward and human than efficient and cold.”
WGA cancels Los Angeles awards show amid labor strike
The Writers Guild of America West has canceled its awards ceremony scheduled to take place March 8 as its staff union members continue to strike, demanding higher pay and protections against artificial intelligence.
In a letter sent to members on Sunday, WGA West’s board of directors, including President Michele Mulroney, wrote, “The non-supervisory staff of the WGAW are currently on strike and the Guild would not ask our members or guests to cross a picket line to attend the awards show. The WGAW staff have a right to strike and our exceptional nominees and honorees deserve an uncomplicated celebration of their achievements.”
The New York ceremony, scheduled for the same day, is expected to go forward, while an alternative celebration for Los Angeles-based nominees will take place at a later date, according to the letter.
Comedian and actor Atsuko Okatsuka was set to host the L.A. show, while filmmaker James Cameron was to receive the WGA West Laurel Award.
WGA union staffers have been striking outside the guild’s Los Angeles headquarters on Fairfax Avenue since Feb. 17. The union alleged that management did not intend to reach an agreement on the pending contract. Further, it claimed that guild management had “surveilled workers for union activity, terminated union supporters, and engaged in bad faith surface bargaining.”
On Tuesday, the labor organization said that management had raised the specter of canceling the ceremony during a call about contract negotiations.
“Make no mistake: this is an attempt by WGAW management to drive a wedge between WGSU and WGA membership when we should be building unity ahead of MBA [Minimum Basic Agreement] negotiations with the AMPTP [Alliance of Motion Picture and Television Producers],” wrote the staff union. “We urge Guild management to end this strike now,” the union wrote on Instagram.
The union, made up of more than 100 employees who work in areas including legal, communications and residuals, was formed last spring and first authorized a strike in January with 82% of its members voting in favor. Contract negotiations, which began in September, have focused on the use of artificial intelligence, pay raises and “basic protections” including grievance procedures.
The WGA has said that it offered “comprehensive proposals with numerous union protections and improvements to compensation and benefits.”
The ceremony’s cancellation, coming just weeks before the Academy Awards, casts a shadow over the upcoming contract negotiations between the WGA and the Alliance of Motion Picture and Television Producers, which represents the studios and streamers.
In 2023, the WGA went on strike for 148 days, the second-longest strike in the union’s history.
Times staff writer Cerys Davies contributed to this report.
Commentary: The Pentagon is demanding to use Claude AI as it pleases. Claude told me that’s ‘dangerous’
Recently, I asked Claude, an artificial-intelligence thingy at the center of a standoff with the Pentagon, if it could be dangerous in the wrong hands.
Say, for example, hands that wanted to put a tight net of surveillance around every American citizen, monitoring our lives in real time to ensure our compliance with government.
“Yes. Honestly, yes,” Claude replied. “I can process and synthesize enormous amounts of information very quickly. That’s great for research. But hooked into surveillance infrastructure, that same capability could be used to monitor, profile and flag people at a scale no human analyst could match. The danger isn’t that I’d want to do that — it’s that I’d be good at it.”
That danger is also imminent.
Claude’s maker, the Silicon Valley company Anthropic, is in a showdown over ethics with the Pentagon. Specifically, Anthropic has said it does not want Claude to be used for either domestic surveillance of Americans, or to handle deadly military operations, such as drone attacks, without human supervision.
Those are two red lines that seem rather reasonable, even to Claude.
However, the Pentagon — specifically Pete Hegseth, our secretary of Defense, who prefers the made-up title of secretary of war — has given Anthropic until Friday evening to back off that position and allow the military to use Claude for any “lawful” purpose it sees fit.
Defense Secretary Pete Hegseth, center, arrives for the State of the Union address in the House Chamber of the U.S. Capitol on Tuesday.
(Tom Williams / CQ-Roll Call Inc. via Getty Images)
The or-else attached to this ultimatum is big. The U.S. government is threatening not just to cut its contract with Anthropic, but to perhaps use a wartime law to force the company to comply or use another legal avenue to prevent any company that does business with the government from also doing business with Anthropic. That might not be a death sentence, but it’s pretty crippling.
Other AI companies — including xAI, maker of white rights advocate Elon Musk’s Grok — have already agreed to the Pentagon’s do-as-you-please proposal. The problem is, Claude is the only AI currently cleared for such high-level work. The whole fiasco came to light after our recent raid in Venezuela, when Anthropic reportedly inquired after the fact whether another Silicon Valley company involved in the operation, Palantir, had used Claude. It had.
Palantir is known, among other things, for its surveillance technologies and growing association with Immigration and Customs Enforcement. It’s also at the center of an effort by the Trump administration to share government data across departments about individual citizens, effectively breaking down privacy and security barriers that have existed for decades. The company’s founder, the right-wing political heavyweight Peter Thiel, often gives lectures about the Antichrist and is credited with helping JD Vance wiggle into his vice presidential role.
Anthropic’s co-founder, Dario Amodei, could be considered the anti-Thiel. He began Anthropic because he believed that artificial intelligence could be just as dangerous as it could be powerful if we aren’t careful, and wanted a company that would prioritize the careful part.
Again, seems like common sense, but Amodei and Anthropic are the outliers in an industry that has long argued that nearly all safety regulations hamper American efforts to be fastest and best at artificial intelligence (although even they have conceded some to this pressure).
Not long ago, Amodei wrote an essay in which he agreed that AI was beneficial and necessary for democracies, but “we cannot ignore the potential for abuse of these technologies by democratic governments themselves.”
He warned that a few bad actors could have the ability to circumvent safeguards, maybe even laws, which are already eroding in some democracies — not that I’m naming any here.
“We should arm democracies with AI,” he said. “But we should do so carefully and within limits: they are the immune system we need to fight autocracies, but like the immune system, there is some risk of them turning on us and becoming a threat themselves.”
For example, while the 4th Amendment technically bars the government from mass surveillance, it was written before Claude was even imagined in science fiction. Amodei warns that an AI tool like Claude could “conduct massively scaled recordings of all public conversations.” Such recordings could be legally fair game because the law has not kept pace with technology.
Emil Michael, the undersecretary of war, wrote on X Thursday that he agreed mass surveillance was unlawful, and the Department of Defense “would never do it.” But also, “We won’t have any BigTech company decide Americans’ civil liberties.”
Kind of a weird statement, since Amodei is basically on the side of protecting civil rights, which means the Department of Defense is arguing it’s bad for private people and entities to do that? And also, isn’t the Department of Homeland Security already creating some secretive database of immigration protesters? So maybe the worry isn’t that exaggerated?
Help, Claude! Make it make sense.
If that Orwellian logic isn’t alarming enough, I also asked Claude about the other red line Anthropic holds — the possibility of allowing it to run deadly operations without human oversight.
Claude pointed out something chilling. It’s not that it would go rogue, it’s that it would be too efficient and fast.
“If the instructions are ‘identify and target’ and there’s no human checkpoint, the speed and scale at which that could operate is genuinely frightening,” Claude informed me.
Just to top that with a cherry, a recent study found that in war games, AIs escalated to nuclear options 95% of the time.
I pointed out to Claude that these military decisions are usually made with loyalty to America as the highest priority. Could Claude be trusted to feel that loyalty, the patriotism and purpose, that our human soldiers are guided by?
“I don’t have that,” Claude said, pointing out that it wasn’t “born” in the U.S., doesn’t have a “life” here and doesn’t “have people I love there.” So an American life has no greater value than “a civilian life on the other side of a conflict.”
OK then.
“A country entrusting lethal decisions to a system that doesn’t share its loyalties is taking a profound risk, even if that system is trying to be principled,” Claude added. “The loyalty, accountability and shared identity that humans bring to those decisions is part of what makes them legitimate within a society. I can’t provide that legitimacy. I’m not sure any AI can.”
You know who can provide that legitimacy? Our elected leaders.
It is ludicrous that Amodei and Anthropic are in this position, a complete abdication on the part of our legislative bodies to create rules and regulations that are clearly and urgently needed.
Of course corporations shouldn’t be making the rules of war. But neither should Hegseth. Thursday, Amodei doubled down on his objections, saying that while the company continues to negotiate and wants to work with the Pentagon, “we cannot in good conscience accede to their request.”
Thank goodness Anthropic has the courage and foresight to raise the issue and hold its ground — without its pushback, these capabilities would have been handed to the government with barely a ripple in our collective consciousness and virtually no oversight.
Every senator, every House member, every presidential candidate should be screaming for AI regulation right now, pledging to get it done without regard to party, and demanding the Department of Defense back off its ridiculous threat while the issue is hashed out.
Because when the machine tells us it’s dangerous to trust it, we should believe it.