Technology

The executive who helped build Meta’s ad machine is trying to expose it

Brian Boland spent more than a decade figuring out how to build a system that would make Meta money. On Thursday, he told a California jury it incentivized drawing more and more users, including teens, onto Facebook and Instagram — despite the risks.

Boland’s testimony came a day after Meta CEO Mark Zuckerberg took the stand in a case over whether Meta and YouTube are liable for allegedly harming a young woman’s mental health. Zuckerberg framed Meta’s mission as balancing safety with free expression, not revenue. Boland’s role was to counter this by explaining how Meta makes money, and how that shaped its platforms’ design. Boland testified that Zuckerberg fostered a culture that prioritized growth and profit over users’ wellbeing from the top down. He said he’s been described as a whistleblower — a term Meta has broadly sought to limit for fear it would prejudice the jury, but which the judge has generally allowed. Over his 11 years at Meta, Boland said he went from having “deep blind faith” in the company to coming to the “firm belief that competition and power and growth were the things that Mark Zuckerberg cared about most.”

Boland joined Meta in 2009 and worked in a variety of advertising roles before last serving as its VP of partnerships, where he worked to bring monetizable content to the platform until his departure in 2020. He testified that Facebook’s infamous early slogan of “move fast and break things” represented “a cultural ethos at the company.” He said the idea behind the motto was generally, “don’t really think about what could go wrong with a product, but just get it out there and learn and see.” At the height of the motto’s prominence internally, employees would sit down at their desks to see a piece of paper that said, “what will you break today?” Boland testified.

“The priorities were on winning growth and engagement”

Zuckerberg consistently made his priorities for the company abundantly clear, according to Boland. He’d announce them in all-hands meetings and leave no doubt about what the company should be focused on, whether it was building its products to be mobile-first, or getting ahead of the competition. When Zuckerberg realized that then-Facebook had to get into shape to compete with a rumored Google social network competitor (which he didn’t name, but seemed to refer to Google+), Boland recalled a digital countdown clock in the office that symbolized how much time they had left to achieve their goals during what the company called a “lockdown.” During his time at the company, Boland testified, there was never a lockdown around user safety, and Zuckerberg allegedly instilled in engineers that “the priorities were on winning growth and engagement.”


Meta has repeatedly denied that it tries to maximize users’ engagement on its platforms over safeguarding their wellbeing. In recent weeks, both Zuckerberg and Instagram CEO Adam Mosseri testified that building platforms that users enjoy and feel good using is in their long-term interest, and that’s what drives their decisions.

Boland disputes this. “My experience was that when there were opportunities to really try to understand what the products might be doing harmfully in the world, that those were not the priority,” he testified. “Those were more of a problem than an opportunity to fix.”

When safety issues came up through press reports or regulatory questions, Boland said, “the primary response was to figure out how to manage through the press cycle, to what the media was saying, as opposed to saying, ‘let’s take a step back and really deeply understand.’” Though Boland said he told his advertising-focused team that they should be the ones to discover “broken parts,” rather than those outside the company, he said that philosophy didn’t extend to the rest of the company.

On the stand the day before, Zuckerberg pointed to documents around 2019 showing disagreement among his employees with his decisions, saying they demonstrated a culture that encourages a diversity of opinion. Boland, however, testified that while that might have been the case earlier in his tenure, it later became “a very closed down culture.”

“There’s not a moral algorithm, that’s not a thing … Doesn’t eat, doesn’t sleep, doesn’t care”


Since the jury can only consider decisions and products that Meta itself made, rather than content it hosted from users, lead plaintiff attorney Mark Lanier also had Boland describe how Meta’s algorithm works, and the decisions that went into making and testing it. Algorithms have an “immense amount of power,” Boland said, and are “absolutely relentless” in pursuing their programmed goals — in many cases at Meta, that was allegedly engagement. “There’s not a moral algorithm, that’s not a thing,” Boland said. “Doesn’t eat, doesn’t sleep, doesn’t care.”

During his testimony on Wednesday, Zuckerberg commented that Boland “developed some strong political opinions” toward the end of his time at the company. (Neither Zuckerberg nor Boland offered specifics, but in a 2025 blog post, Boland indicated he was deleting his Facebook account in part over disagreements with how Meta handled events like January 6th, writing that he believed “Facebook had contributed to spreading ‘Stop the Steal’ propaganda and enabling this attempted coup.”) Lanier spent time establishing that Boland was respected by peers, showing a CNBC article about his departure that quoted a glowing statement from his then-boss, and a reference to an unnamed source who reportedly described Boland as someone with a strong moral character.

On cross-examination, Meta attorney Phyllis Jones clarified that Boland didn’t work on the teams tasked with understanding youth safety at the company. Boland agreed that advertising business models are not inherently bad, and neither are algorithms. He also admitted that many of his concerns involved the content users were posting, which is not relevant to the current case.

During his direct examination, Lanier asked if Boland had ever expressed his concerns to Zuckerberg directly. Boland said he’d told the CEO he’d seen concerning data showing “harmful outcomes” of the company’s algorithms and suggested that they investigate further. He recalled Zuckerberg responding something to the effect of, “I hope there’s still things you’re proud of.” Soon after, he said, he quit.

Boland said he left upwards of $10 million worth of unvested Meta stock on the table when he departed, though he admitted he made more than that over the years. He said he still finds it “nerve-wracking” every time he speaks out about the company. “This is an incredibly powerful company,” he said.


Microsoft starts removing Copilot buttons from Windows 11 apps

Microsoft is starting to remove “unnecessary” Copilot buttons from its Windows 11 apps. In the latest version of the Notepad app for Windows Insiders, Microsoft has removed the Copilot button in favor of a “writing tools” menu. The Copilot button in the Snipping Tool app also no longer appears when you select an area to capture.

The change is part of “reducing unnecessary Copilot entry points, starting with apps like Snipping Tool, Photos, Widgets and Notepad,” that Microsoft promised to complete as part of its broader plan to fix Windows 11. While Copilot buttons are being removed, it looks like the underlying AI features are here to stay.

The Copilot button has been removed from Notepad, but the writing tools replacement still uses AI-powered features and looks identical to the menu of options that existed before. I still think these features are largely unnecessary in what’s supposed to be a lightweight text app, but removing the superfluous Copilot branding is a good first step.

AI chatbots refilling psych meds sparks debate

If you have ever waited weeks just to renew a mental health prescription, you already know how frustrating the system can feel. Now imagine handling that refill through a chatbot instead of a doctor.

That kind of thing is already starting to happen. In Utah, a new pilot program is allowing an artificial intelligence system from Legion Health to renew certain psychiatric medications without direct approval from a physician each time. State officials say this could speed things up and reduce costs.

Many psychiatrists are not convinced. They are asking whether this actually solves the problem it claims to fix.

Utah launches AI chatbot to renew select psychiatric prescriptions, raising questions about safety and oversight. (pocketlight/Getty Images)

How the AI prescription system works

Before this starts to sound like a robot psychiatrist: the program is tightly limited. The AI only renews a short list of lower-risk medications that a doctor has already prescribed. These include commonly used antidepressants like Prozac, Zoloft and Wellbutrin.

To qualify, patients must meet strict requirements. You need to be stable on your current medication. Recent dosage changes or a psychiatric hospitalization will disqualify you. You also need to check in with a healthcare provider after a set number of refills or within a certain time frame.

During the process, the chatbot asks about symptoms, side effects and warning signs such as suicidal thoughts. If anything raises concern, it sends the case to a real doctor before approving a refill. According to an agreement filed with Utah’s Office of Artificial Intelligence Policy, the pilot includes strict safeguards, including human review thresholds and automatic escalation for higher-risk cases. The system cannot prescribe new medications or manage drugs that require close monitoring. As a result, it leaves out many complex conditions from the pilot.
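The qualification and escalation rules described above amount to a simple triage protocol. Here is a minimal sketch of that logic in Python; the names, medication list, and refill threshold are illustrative assumptions for clarity, not Legion Health’s actual implementation.

```python
# Hypothetical sketch of the refill-triage flow: screen the patient's answers,
# escalate anything concerning to a human clinician, and only auto-approve
# routine, low-risk renewals. All names and thresholds are illustrative.
from dataclasses import dataclass

# Assumed list of lower-risk medications eligible for auto-renewal.
LOW_RISK_MEDS = {"prozac", "zoloft", "wellbutrin"}
# Assumed human-review threshold: after this many auto-refills,
# the patient must check in with a provider.
MAX_AUTO_REFILLS = 3

@dataclass
class RefillRequest:
    medication: str
    stable_on_dose: bool          # no recent dosage changes
    recent_hospitalization: bool
    reports_warning_signs: bool   # e.g. suicidal thoughts flagged in screening
    auto_refills_used: int

def triage(req: RefillRequest) -> str:
    """Return 'approve' for routine renewals, else 'escalate' to a doctor."""
    if req.medication.lower() not in LOW_RISK_MEDS:
        return "escalate"  # drugs requiring close monitoring are excluded
    if req.recent_hospitalization or not req.stable_on_dose:
        return "escalate"  # patient no longer qualifies for auto-renewal
    if req.reports_warning_signs:
        return "escalate"  # any concerning screening answer goes to a doctor
    if req.auto_refills_used >= MAX_AUTO_REFILLS:
        return "escalate"  # human review threshold reached
    return "approve"
```

The key design point is that the system only ever chooses between two outcomes, approve or escalate; it never prescribes anything new on its own.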

Why some experts are pushing back

Even with those guardrails, many psychiatrists are uneasy. Brent Kious, a psychiatrist and professor at the University of Utah School of Medicine, has questioned whether AI systems like this actually solve the access problem they are designed to address. 


He has suggested that the benefits of an AI-based refill system may be overstated, especially since patients must already be stable and under care to qualify. Kious has also raised concerns about how much these systems rely on self-reported answers. Patients may not recognize side effects, may answer inaccurately, or may adjust their responses to get the outcome they want. 

He has further questioned whether current AI tools can safely handle even routine parts of psychiatric care, noting that treatment decisions often depend on factors that go beyond simple screening questions. He has also pointed to a lack of transparency in how these systems operate, which can make it harder for doctors and patients to fully trust them. 


A new pilot program allows AI to handle some mental health medication refills without direct doctor approval. (Sezeryadigar/Getty Images)

The promise behind the technology

Supporters of the program are focused on access. A lot of people in Utah still struggle to get mental health care. Wait times can stretch for weeks. In some areas, there simply are not enough providers available. The idea is that AI can take care of routine refill requests so doctors have more time to focus on patients with more complex needs. That could help take some pressure off the system.

Legion Health is also leaning into convenience. The service is expected to cost about $19 a month and is designed to make refills quicker and easier for patients who qualify. From a big-picture view, that could help. From a patient’s point of view, the tradeoff may feel a little more complicated. We reached out to Legion Health for comment, but did not hear back before our deadline.


What this means to you

If you rely on mental health medication, this kind of system could change how you manage your care. You may be able to get refills more quickly if your condition is stable and your treatment plan is not changing. At the same time, this does not replace your doctor. It does not handle new diagnoses or complex decisions. It also adds another layer between you and your care. Instead of a conversation, you are interacting with a system that depends on how you answer a series of questions. Mental health treatment often depends on small details. Changes in mood, sleep or behavior can matter more than a simple yes or no response. That is where some experts believe human care still has a clear advantage.

The bigger question about AI in healthcare

This pilot is only one step in a much larger shift. Utah is already experimenting with AI in other areas of healthcare. Companies like Legion are signaling plans to expand beyond a single state. What starts with simple refills could eventually move into more complex decisions. That is where the conversation becomes more urgent. Is this a practical way to improve access to care, or does it risk reducing something deeply personal into a transaction driven by software?


Psychiatrists question whether AI prescription refills address access issues or create new risks for patients. (SDI Productions/Getty Images)


Kurt’s key takeaways

There is no question that access to mental health care needs improvement. Long wait times and limited availability are real problems that affect millions of people. AI may help in specific situations, especially when the task is routine and the patient is stable. Still, convenience should not be confused with quality. For now, this system is narrow in scope and closely monitored. That makes it easier to test. It also highlights how early we are in this transition. The technology will continue to evolve. The real question is whether the safeguards, oversight and transparency will evolve at the same pace.

Would you feel comfortable letting a chatbot handle part of your mental health care, or is that a line you do not want technology to cross? Let us know by writing to us at Cyberguy.com.


Copyright 2026 CyberGuy.com. All rights reserved.

ChatGPT has a new $100 per month Pro subscription

OpenAI has announced a new version of its ChatGPT Pro subscription that costs $100 per month. The new Pro tier offers “5x more” usage of its Codex coding tool than the $20 per month Plus subscription and “is best for longer, high-effort Codex sessions,” OpenAI says.

The company is introducing the new tier as it tries to win over users from Anthropic and its popular Claude Code tool. ChatGPT’s $100 per month option will compete directly with Anthropic’s “Max” tier for Claude, which costs the same. It also offers a middle ground between the $20 per month Plus tier and the $200 version of the Pro tier.

(Yes, there are now two tiers of “Pro”; while the new tier “still offers access to all Pro features,” OpenAI says that the more expensive one has even higher usage limits.)

According to OpenAI, ChatGPT Plus “will continue to be the best offer at $20 for steady, day-to-day usage of Codex, and the new $100 Pro tier offers a more accessible upgrade path for heavier daily use.” OpenAI also offers an $8 per month Go tier and a free tier.
