Technology
Data brokers accused of hiding opt-out pages from Google
If you have ever tried to opt out of a data broker site, you know the drill. You search. You scroll. You click through layers of legal jargon. Then you wonder if they even want you to find the exit door. Now we know the answer.
A U.S. Senate investigation found that several major data brokers placed code on their opt-out pages that blocked search engines from indexing them. In practical terms, that meant you could not easily find the page where you ask them to stop selling your data.
After pressure from Sen. Maggie Hassan, four companies have now removed that code.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.
Which data brokers hid their opt-out pages?
The companies named in the report include:
- Comscore
- IQVIA Digital
- Telesign
- 6sense Insights
These firms collect and sell personal information for marketing, analytics or identity verification. That data can include browsing behavior, device details, location history and in some cases highly sensitive identifiers.
A U.S. Senate investigation found major data brokers used “no index” code to hide opt-out pages from Google, making it harder for people to stop the sale of their personal data. (Kurt “CyberGuy” Knutsson)
An earlier investigation by The Markup and CalMatters found that dozens of brokers used “no index” code to hide opt-out instructions from Google search results. Some removed the code after reporters reached out. However, Sen. Hassan’s office later found that the four companies above still had opt-out pages blocked from search engines. They have since removed the code.
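For the technically curious, the “no index” code at issue is typically a single robots meta tag in a page’s HTML that tells search engines not to list the page. As an illustrative sketch (the function names and sample markup here are my own, not taken from the Senate report), a few lines of Python can check whether a page carries such a directive:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tags on a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            self.directives.append((attrs.get("content") or "").lower())

def is_blocked_from_search(html: str) -> bool:
    """Return True if the page tells search engines not to index it."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in d for d in parser.directives)
```

Feeding this function the HTML of an opt-out page that contains `<meta name="robots" content="noindex, nofollow">` returns True, which is exactly the condition the investigation flagged: the page exists, but search engines are told to pretend it doesn’t.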
One more company, Findem, has not removed the no-index code from its “Do not sell or share my personal information” page, according to the report. The company later said an email from the senator’s office did not reach its CEO due to spam filtering and that its privacy channels are actively monitored. The Committee report noted this lack of action raises serious concerns about responsiveness to privacy requests and about whether opt-out rights are being made truly accessible.
We reached out to all five companies for comment. A spokesperson for 6sense provided the following statement:
“6sense takes privacy transparency seriously and has always fully indexed our Privacy Center, where individuals may exercise their opt-out rights in compliance with applicable laws. For a period of time, we included a “no index” directive on the Privacy Policy page to reduce spam volume to privacy request email aliases and protect the integrity of request handling systems. Once the issue was raised by the Committee, that code was immediately removed. Our Privacy Center opt-out page has remained indexed, and our Privacy Policy has always been accessible and prominently visible on our web properties, as well as directly linked in our publicly available data broker registrations. We regularly review our security and privacy practices to meet evolving regulatory requirements, and our commitment has been independently validated annually through ISO/IEC 27001:2022, ISO/IEC 42001:2023, and SOC 2, Type II certifications.”
6sense said it takes privacy transparency “seriously.” (iStock)
Why hidden data broker opt-out pages matter for your privacy
Opt-out pages are not a courtesy. In many states, they are required by law. When companies hide those pages from search engines, they make it harder for you to take control of your own information. And that matters. The more complicated the process feels, the more likely people are to give up halfway through. Meanwhile, data broker breaches have been expensive and damaging. Committee calculations estimate that identity theft tied to four major data broker breaches cost U.S. consumers more than $20 billion. That is not a minor privacy slip. That is real money, real consequences and real stress for families trying to clean up the mess.
Why scammers care about your data
When detailed personal information falls into the wrong hands, it fuels scams that feel alarmingly real. Criminal networks can use data like Social Security numbers, home addresses and phone numbers to create highly customized emails, texts and phone calls. The more accurate the details, the more convincing the scam. That is one reason data broker breaches are not just a privacy issue. They are a consumer protection issue.
Sen. Maggie Hassan’s investigation is part of her broader effort to combat scams, which now account for nearly half a trillion dollars in losses annually and have grown into one of the world’s largest illicit industries. She has also opened inquiries into the roles that satellite internet providers, online dating platforms, AI companies and federal agencies play in preventing fraud.
The investigation was led by Democratic Sen. Maggie Hassan of New Hampshire. (Sen. Maggie Hassan reelection campaign)
What this means for your personal data and privacy
Here is the uncomfortable truth. Your personal data likely sits in dozens, maybe hundreds of databases you have never heard of. You did not sign up. You did not click agree. But your information still travels through a vast marketplace. Even when opt-out forms exist, finding and completing them can feel like a part-time job. And since the U.S. still lacks a comprehensive federal privacy law like Europe’s GDPR, rules vary by state. So yes, the opt-out pages are now easier to find for these companies. But the bigger system remains largely intact.
How to opt out of data brokers and protect your information
You cannot erase yourself from the internet overnight. However, you can reduce your exposure.
1) Search your name regularly
Type your full name and city into Google. Look for data broker listings. Many include an opt-out link buried in the privacy policy.
2) Use state privacy tools if available
California residents can use a free state-run tool called DROP at privacy.ca.gov/drop/ to request deletion from more than 500 registered brokers. Other states are rolling out similar systems.
3) Submit opt-out requests directly
Visit the privacy or “Do not sell my information” page on broker sites. Follow instructions carefully and keep confirmation emails.
4) Consider a data removal service
Data removal services can automate opt-out requests across dozens of brokers. They are not perfect, but they save time. Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
5) Lock down core accounts
Use strong, unique passwords stored in a password manager. Check out the best expert-reviewed password managers of 2026 at Cyberguy.com. Also, turn on two-factor authentication (2FA) for financial email and social accounts. That way, even if your data circulates, criminals have a harder time breaking in.
The larger problem with the data broker industry
The data broker industry is legal. It operates in plain sight. Yet most people have no idea how many companies trade in their information. Until Congress passes a national privacy law, oversight will remain patchwork. That leaves you to chase down your own records one company at a time. Transparency should not require a Senate investigation.
Kurt’s key takeaways
This story is about more than hidden code. It is about control. When companies quietly block search engines from indexing opt-out pages, they tilt the playing field. After public scrutiny, those pages are easier to find. That is a step forward. Still, your data continues to move through an ecosystem designed to profit from it. So the real question is not whether opt-out pages appear on Google.
How much of your personal life are you comfortable leaving in the hands of companies you have never heard of? Let us know by writing to us at Cyberguy.com.
Copyright 2026 CyberGuy.com. All rights reserved.
Technology
Now California’s cops can give tickets to driverless cars
Autonomous vehicles roving California’s roads will no longer be immune to traffic tickets starting on July 1st. New regulations announced by the California DMV this week allow law enforcement to give AV manufacturers a “notice of AV noncompliance” when one of their cars commits a traffic violation, like running a red light or failing to stop for school buses.
The updated regulations come after years of viral traffic violations and multiple safety investigations involving robotaxis. Tesla’s Full Self-Driving (FSD) system is also under investigation for running red lights and driving in the wrong direction. Now, driverless vehicle companies can get cited for those violations, at least in California.
California’s new regulations could also help prevent driverless cars from getting in the way during emergencies, like an incident in San Francisco last year when Waymos blocked traffic during a power outage. AV companies will now have to answer first-responder calls within 30 seconds and must allow emergency responders to “issue electronic geofencing directives,” which will block AVs from entering active emergency areas. Any driverless cars already in the area will have to leave.
The new regulations also allow AV companies to test and deploy heavy-duty autonomous trucks and include “licensing qualifications and permitting and training requirements for remote drivers and assistants.”
Technology
Meta tracks workers to train AI agents
Inside Meta, the parent company of Facebook, Instagram and WhatsApp, employees’ everyday clicks, shortcuts and screen habits are now part of how the company trains its artificial intelligence systems.
Meta has started rolling out internal software that tracks how employees use their computers, including how they move through apps and complete routine tasks. The company says this data will help build smarter AI tools, but it also raises new questions about how far workplace monitoring should go.
Inside Meta, employee computer habits are becoming training data as the company pushes deeper into AI-powered workplace automation. (Unknown)
What Meta’s employee tracking tool actually does
The system is called the Model Capability Initiative, or MCI. It runs on work apps and websites used by employees.
Here is what it tracks:
- Mouse movements and clicks
- Keystrokes and keyboard shortcuts
- Navigation behavior like dropdown selections
- Occasional screenshots of what is on screen
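Meta hasn’t published MCI’s internal data format, but conceptually each of the inputs above would be captured as a timestamped event record and serialized into interaction traces for model training. This Python sketch (every field name here is hypothetical, for illustration only) shows the general shape such data might take:

```python
from dataclasses import dataclass, field, asdict
import time

@dataclass
class InputEvent:
    """Hypothetical record of one captured UI interaction."""
    event_type: str   # e.g. "click", "keystroke", "dropdown_select"
    app: str          # the work application in focus
    target: str       # the UI element acted on
    timestamp: float = field(default_factory=time.time)

def to_training_trace(events):
    """Serialize a sequence of events into plain dicts a model pipeline could consume."""
    return [asdict(e) for e in events]
```

A sequence like click, type, select, built from records of this shape, is the kind of “real example of how people actually use computers” the company says its agents need.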
Meta says the idea is simple. If AI is supposed to act like a human using a computer, it needs real examples of how people actually work.
“If we’re building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them – things like mouse movements, clicking buttons, and navigating dropdown menus,” a Meta spokesperson told CyberGuy. “To help, we’re launching an internal tool that will capture these kinds of inputs on certain applications to help us train our models. There are safeguards in place to protect sensitive content, and the data is not used for any other purpose.”
The company insists that data collected through this tool is used only for model training, not for employee performance reviews, and managers do not have access to it. Company devices were already subject to monitoring, and this isn’t unique to Meta.
Why Meta is collecting employee data for AI
Meta isn’t collecting this information just for insight. It is feeding it into a broader push to build artificial intelligence agents that can handle work tasks. In an internal memo, Meta’s CTO Andrew Bosworth described a future where AI agents do most of the work while humans guide and review.
The company is already reorganizing around that idea. Internal programs like “AI for Work,” now called the Agent Transformation Accelerator, are designed to bring AI into daily workflows across teams.
Meta believes this approach will make operations faster and more efficient. The trade-off is that human work becomes training data for the systems that may replace parts of it.
Meta is rolling out a workplace tracking tool that records employee clicks, keystrokes and screen activity to help train its AI systems. (Joan Cros/NurPhoto via Getty Images)
Privacy concerns around Meta’s employee tracking
Workplace monitoring has been around for years, but this takes it a step further. For example, tracking keystrokes and clicks in real time creates a level of oversight that companies have more often used with gig workers than office employees. As a result, employers can now watch day-to-day activity more closely.
At the same time, a legal gray area exists. In the United States, companies generally have broad authority to monitor employees as long as they provide notice. Because of that, employers have significant room to expand how they collect data.
However, outside the U.S., the rules can be stricter, and some regions place tighter limits on how companies collect and use employee data.
Even so, knowing someone is tracking your activity at this level can change how you work, how you communicate and how much autonomy you feel on the job.
How this fits into the broader AI job shift
Meta is hardly alone in pushing toward automation. Companies across Silicon Valley are investing heavily in AI systems that can write code, organize data and assist with decision-making. At the same time, many are cutting jobs or reshaping roles.
Meta plans to reduce its workforce by about 10 percent globally. Amazon has also trimmed tens of thousands of corporate roles in recent months.
The message is clear. AI has evolved beyond a tool that helps employees. It is increasingly positioned as a replacement for certain types of work.
Meta says its new internal monitoring tool will improve AI agents, but the program is also raising fresh concerns about employee privacy. (Donato Fasano/Getty Images)
What this means for you
Even if you do not work at Meta, this shift has wider implications. First, workplace monitoring is expanding beyond factories and delivery jobs into office environments. That could become standard across industries.
Second, your everyday work habits may become valuable data. Companies are realizing that human behavior is one of the most useful training resources for AI.
The line between assisting and replacing workers is getting thinner. Tools that start as helpers often evolve into something more autonomous over time.
If your job involves repetitive computer tasks, it is worth paying attention to how AI is being trained to handle them.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: CyberGuy.com.
Kurt’s key takeaways
Meta’s move marks a turning point. AI no longer relies only on public data or curated datasets. It now learns directly from how people work in real time. That shift raises practical questions about productivity and efficiency. It also brings deeper concerns about privacy, control and the future role of human workers. Companies argue they need this data to build better tools. At the same time, employees now help train systems that could eventually replace parts of their roles.
If your daily work became training data for AI that could eventually do your job, would you be comfortable with that? Let us know by writing to us at CyberGuy.com.
Technology
Elon Musk’s worst enemy in court is Elon Musk
Musk’s direct testimony was an improvement over yesterday — even if his lawyer kept asking leading questions to cue him on how to answer. But that memory was immediately obliterated by an absolutely miserable cross-examination. For hours, Musk refused to answer yes or no questions with yes or no, occasionally “forgot” things he’d testified to in the morning, and scolded defense lawyer William Savitt. I watched a few jury members glance at each other. During one testy exchange, one woman was rubbing her head. Me too, babe.
Even the judge, who at times prompted Musk to answer “yes” or “no,” was having a bad time. “He was at times difficult,” Judge Yvonne Gonzalez Rogers said of Musk after the jury left the room. (At one point, when she’d cut off his argumentative answer, she got the biggest laugh of the day.) “Part of management from my perspective is just to get through testimony.”
“I don’t yell at people,” Musk said
Musk spent a lot of yesterday painting this heroic picture of himself, and this morning, near the end of his direct examination, said, “I don’t lose my temper,” and “I don’t yell at people.” He said he might have called someone a “jackass,” but only in the spirit of saying something like, “don’t be a jackass.”
Immediately afterward, Savitt baited him into being petty, irritating, and generally hard to deal with. At one point, we all watched Musk lose his temper. He spent hours quibbling over simple questions. Again and again, Savitt referred back to Musk’s deposition, where he’d answered questions slightly differently, calling Musk’s accounts into question. Even if the average juror didn’t think he was lying, he was certainly inconsistent.
Savitt’s cross-examination left the distinct impression that Musk quit his quarterly payments to OpenAI because he wasn’t going to get full control of the company, then tried to kneecap it and fold it into Tesla. Initially, Musk wanted four board seats and 51 percent of the shares. The other co-founders would get three seats, together, to be voted on by shareholders (including other employees). Though Musk said that the eventual plan was to expand to 12 seats, it was obvious that Musk had full control on the initial board of seven.
When Musk didn’t get what he wanted, he pulled the plug on his funding commitment and in 2017 hired away Andrej Karpathy, OpenAI’s second-best engineer, to Tesla. Despite his fiduciary duty to OpenAI as a board member, Musk said that when he heard Karpathy wanted to leave, he did not try to persuade him to stay. (“I think people should have a right to work where they want to work,” Musk said on the stand.)
“In my and Andrej’s opinion, Tesla is the only path that could even hope to hold a candle to Google.”
By 2018, Musk was saying that OpenAI had no path forward with its current structure, declaring it was on “a path of certain failure” in emails to Ilya Sutskever and Greg Brockman. His proposed solution was to merge Tesla and OpenAI. “In my and Andrej’s opinion, Tesla is the only path that could even hope to hold a candle to Google,” Musk said. The plan never came to fruition, and Musk resigned from OpenAI’s board that year.
As early as 2016, Musk had his own concerns about OpenAI as a non-profit. In an email to a colleague at Neuralink, he wrote, “Deepmind is moving very fast. I am concerned that OpenAI is not on a path to catch up. Setting it up as non-profit might, in hindsight, have been the wrong move. Sense of urgency is not as high.”
Asked about this, Musk said he was just speculating. Savitt said, “Those are your words, yes or no?”
“You mostly do unfair questions.”
Musk replied, “This is a hypothetical.”
Savitt said, “So you thought it might have been a wrong move? That’s what you said?”
Getting Musk to put any of that on the record was intensely difficult. He refused repeatedly to answer questions like whether he knew cutting off OpenAI donations would create financial pressure, or whether he’d asked Karpathy to stay at OpenAI. He accused Savitt of asking questions that were “designed to trick me,” and we got multiple versions of this:
Musk: You mostly do unfair questions
Savitt: I am trying to put the questions as fairly as I can. I am doing my best.
Musk: That’s not true.
Musk was trying to make this as painful as possible for Savitt, but he also made it as painful as possible for everyone else, including the jury. Watching him simply refuse to answer questions during cross he’d easily answered during direct was annoying. Watching him refuse to admit he understood the nature of linear time — and therefore the fact that he was still a director of OpenAI’s board before he resigned in 2018 — was infuriating. It made him look dishonest.
“I’d lost trust in Altman and I was concerned they were really trying to steal the charity.”
Musk’s basic, oft-repeated story during this week’s testimony has been that OpenAI is “stealing a charity” and “looting a non-profit.” He maintains that he was all right with some limited for-profit activity, but not anything that would overshadow OpenAI’s nonprofit work and constitute “the tail wagging the dog” — another phrase he reached for, over and over, like a security blanket. In direct testimony, he painted himself as a trusting “fool” who had believed the wily promises of Sam Altman and his cohort: “I gave them $38 million of essentially free funding, which they used to create an $800 billion for-profit company,” he lamented. His own lawyer’s questioning wrapped up with Musk being purportedly blindsided by a multibillion-dollar deal with Microsoft.
“I’d lost trust in Altman and I was concerned they were really trying to steal the charity,” Musk said. “It turned out to be true.”
“I said I didn’t look closely! I read the headline!”
On cross-examination, Musk would barely even explain how much he bothered to learn about OpenAI’s operations before suing over them a few years later. When OpenAI proposed a for-profit arm around 2018, he got an email outlining the proposed corporate structure. On the stand, he said he’d only read the very first section of it, which said that contributors should consider the investments as donations that may have no return. “I read the highlighted box with ‘important warning,’” Musk said.
Savitt asked Musk if he’d raised any objection to the structure then, when he’d received the documents. Musk said that he didn’t read beyond that first box.
Musk: I didn’t read the fine print. We’re going into the fine print of this document.
Savitt: It’s a four-page document.
Musk then said he hadn’t read beyond taking this in the “spirit of a donation.” And then we got the deposition, where Musk said, “I don’t think I read this term sheet… I’m not sure I actually read this term sheet… I did not closely look at this term sheet.” Savitt pointed out that nowhere in the deposition did Musk say he’d read the first paragraph and Musk, raising his voice and effectively undermining his claims from the morning that he doesn’t lose his temper (lol) or yell at people (lmao), said, “I said I didn’t look closely! I read the headline!”
Imagine having to deal with this man as your cofounder. I think I would sooner open a vein.