Genealogy boom exposes personal data scammers can exploit
Millions of Americans are digging into their roots. Genealogy has quietly become one of the fastest-growing hobbies in North America, with the industry now valued at more than $5 billion. From DNA kits to digital family tree builders, people are discovering relatives, tracing migration stories and reconnecting with their past.
There is something deeply meaningful about learning where you come from. However, there is another side to this trend that many people never consider.
The same information that helps you find your great-grandparents can also help scammers find you. Once personal details appear online, they rarely stay in one place. And that can create unexpected security risks.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.
A woman looks at the contents of a 23andMe DNA testing kit in Oakland, California, on June 8, 2018. Millions of Americans using family tree platforms may be unknowingly sharing sensitive details like maiden names and birthplaces online. (Cayce Clifford/Bloomberg via Getty Images)
What family tree sites encourage you to upload
Genealogy platforms feel harmless. In fact, they are designed to feel warm, nostalgic and personal.
To build a detailed family tree, users often upload information such as:
- Full legal names, including maiden names
- Birth dates
- Places of birth
- Marriage records
- Address history
- Names of children, siblings and relatives
- Old family photos
- Obituaries and memorial information
Each detail may seem harmless on its own. But together, they create something extremely valuable: a fully mapped identity profile. Not just of you, but of your entire family network. And that kind of information is exactly what scammers look for.
Once information is uploaded, it rarely stays private
Many genealogy platforms allow public trees by default. Even when accounts are private, information can still spread in several ways.
For example, data can appear through:
- Shared family trees
- Public obituaries
- Search features
- Data scraping tools
- Third-party integrations
Over time, this information becomes searchable. It may be indexed by search engines. Bots can scrape it. Data brokers can absorb it into their databases. Once that happens, your family details no longer live only on a genealogy website. They can appear on people search websites, background check platforms and marketing databases. And you may never know it happened.
The 23andMe wake-up call
The recent bankruptcy of the DNA testing company 23andMe served as a reminder for millions of users. When companies change ownership or shut down, your data does not simply disappear. Genetic data raises serious privacy concerns on its own.
However, the broader genealogy ecosystem carries a similar risk. When you upload deeply personal, multi-generational information, you lose control over how long it is stored, who can access it and where it may end up in the future. Even if you trust a company today, you cannot control what happens tomorrow.
A woman collects a DNA sample in Oakland, California, on June 8, 2018. Personal data uploaded to genealogy sites can spread across data broker networks, making it difficult to control where information appears. (Cayce Clifford/Bloomberg via Getty Images)
Why scammers love family tree data
Cybercriminals no longer focus only on credit card numbers. Instead, they want context. They want personal details that help them impersonate you or bypass security checks. Family tree websites provide exactly that. Here are three ways criminals can exploit genealogy data.
1) Answering security questions
Many financial institutions still rely on knowledge-based authentication questions, such as:
- What is your mother's maiden name?
- In what city were you born?
- What is your oldest sibling's middle name?
Unfortunately, those answers often appear directly in public family trees. With enough background information, scammers may bypass account protections without ever knowing your password.
2) Crafting believable impersonation scams
Now imagine receiving a message like this: “Hi, Aunt Linda, it’s Jake. I’m stuck overseas and need help.”
If a scammer already knows:
- Your relatives’ names
- Who is related to whom
- Where family members live
They can create highly believable emergency scams. These are no longer random “grandparent scams.” They are customized attacks, and genealogy data makes that customization easy.
3) Targeting entire families
When one person’s information becomes exposed, it rarely stops there. A scammer can quickly map your entire family network. They may identify:
- Adult children
- Elderly parents
- Siblings
- Multiple addresses
Then they can launch phishing attempts across several family members at once. In other words, one data leak can turn into a family-wide vulnerability.
How genealogy data strengthens data broker profiles
Here is where the situation becomes even more concerning. Data brokers do not just collect phone numbers and addresses. They build detailed relational profiles.
These profiles often include:
- Household connections
- Extended relatives
- Age ranges
- Property ownership
- Income indicators
When genealogy data gets scraped or resold, it strengthens those profiles. Your listing may suddenly include:
- An accurate maiden name
- Verified birth year
- Confirmed past addresses
- Detailed family connections
The richer the profile becomes, the more valuable it is, not only to marketers but also to criminals.
"But I set my tree to private."
Privacy settings certainly help. However, they do not solve the entire problem.
Even if your family tree is private:
- Relatives may publish overlapping information
- Obituaries remain public records
- Historical records continue to be digitized
- Other users may repost or copy data
Once information spreads across multiple websites, tracking it becomes extremely difficult. In addition, data brokers constantly refresh their databases. Even if you remove your data once, it may quietly reappear months later.
A technician works on a device that conducts direct-to-consumer genetic testing at the University of Tokyo’s Institute of Medical Science in Tokyo, Japan, on July 9, 2014. Genealogy websites may help you trace your roots, but experts warn they can also expose personal data that scammers use to target entire families. (Kiyoshi Ota/Bloomberg via Getty Images)
How to enjoy genealogy without exposing yourself
You do not have to give up genealogy. You simply need to approach it the same way you approach social media.
Consider these precautions:
- Limit public visibility on family trees
- Avoid posting full birthdates
- Be cautious with maiden names
- Remove exact address histories
- Think carefully before sharing details about living relatives
Most importantly, remember that the real risk is not the genealogy site itself. The risk is where that data travels next.
Stop your family history from becoming a scammer’s playbook
Once personal information enters the data broker ecosystem, it can spread far beyond the original platform. That is why proactive privacy protection matters.
Data brokers collect and resell personal information gathered from public records, websites and scraped databases. If genealogy details such as maiden names, birthplaces and family relationships get pulled into those systems, they can quietly appear across people-search sites and background check databases.
Over time, this information can make it easier for scammers to build detailed identity profiles. Those profiles can be used for impersonation scams, phishing attacks or attempts to bypass security questions.
You can take steps by searching your name and relatives online to see what information is publicly visible, submitting removal requests to people-search sites and limiting what you share publicly on genealogy platforms. Taking these precautions can help prevent your family history from becoming a roadmap for scammers.
However, manually tracking down and removing your information across hundreds of sites can be time-consuming and difficult to keep up with.
One of the most effective steps you can take is to use a data removal service, which helps remove your information from data broker and people-search websites. While no service can guarantee the complete removal of your data from the internet, a reputable data removal service is a smart choice.
These services do the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. They also continue scanning for new exposures, which helps prevent your data from quietly reappearing later.
It’s what gives me peace of mind and has proven to be one of the most effective ways to erase personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing breach data with details they might find online, making it much harder for them to target you.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
Kurt’s key takeaways
Genealogy can be an incredibly rewarding hobby. Discovering where your family came from often creates a deeper sense of connection and identity. But the digital tools that make this research easier can also expose more information than many people realize. A family tree filled with birthplaces, maiden names and relatives may look harmless, yet it can quietly create a roadmap for scammers. The good news is you do not have to stop exploring your ancestry. You simply need to share carefully, protect your data and understand how information travels online.
Have you ever searched for your own name or family members online and been surprised by how much personal information was publicly available? Let us know by writing to us at Cyberguy.com.
Copyright 2026 CyberGuy.com. All rights reserved.
Now California’s cops can give tickets to driverless cars
Autonomous vehicles roving California’s roads will no longer be immune to traffic tickets starting on July 1st. New regulations announced by the California DMV this week allow law enforcement to give AV manufacturers a “notice of AV noncompliance” when one of their cars commits a traffic violation, like running a red light or failing to stop for school buses.
The updated regulations come after years of viral traffic violations and multiple safety investigations involving robotaxis. Tesla’s Full Self-Driving (FSD) system is also under investigation for running red lights and driving in the wrong direction. Now, driverless vehicle companies can get cited for those violations, at least in California.
California’s new regulations could also help prevent driverless cars from getting in the way during emergencies, like an incident in San Francisco last year when Waymos blocked traffic during a power outage. AV companies will now have to answer first-responder calls within 30 seconds and must allow emergency responders to “issue electronic geofencing directives,” which will block AVs from entering active emergency areas. Any driverless cars already in the area will have to leave.
The new regulations also allow AV companies to test and deploy heavy-duty autonomous trucks and include “licensing qualifications and permitting and training requirements for remote drivers and assistants.”
Meta tracks workers to train AI agents
Inside Meta, the parent company of Facebook, Instagram and WhatsApp, employees’ everyday clicks, shortcuts and screen habits are now part of how the company trains its artificial intelligence systems.
Meta has started rolling out internal software that tracks how employees use their computers, including how they move through apps and complete routine tasks. The company says this data will help build smarter AI tools, but it also raises new questions about how far workplace monitoring should go.
Inside Meta, employee computer habits are becoming training data as the company pushes deeper into AI-powered workplace automation.
What Meta’s employee tracking tool actually does
The system is called the Model Capability Initiative, or MCI. It runs on work apps and websites used by employees.
Here is what it tracks:
- Mouse movements and clicks
- Keystrokes and keyboard shortcuts
- Navigation behavior like dropdown selections
- Occasional screenshots of what is on screen
Meta says the idea is simple. If AI is supposed to act like a human using a computer, it needs real examples of how people actually work.
“If we’re building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them – things like mouse movements, clicking buttons, and navigating dropdown menus,” a Meta spokesperson told CyberGuy. “To help, we’re launching an internal tool that will capture these kinds of inputs on certain applications to help us train our models. There are safeguards in place to protect sensitive content, and the data is not used for any other purpose.”
The company insists that data collected through this tool is used only for model training, not for employee performance reviews, and managers do not have access to it. Company devices were already subject to monitoring, and this isn’t unique to Meta.
Why Meta is collecting employee data for AI
Meta isn’t collecting this information just for insight. It is feeding it into a broader push to build artificial intelligence agents that can handle work tasks. In an internal memo, Meta’s CTO Andrew Bosworth described a future where AI agents do most of the work while humans guide and review.
The company is already reorganizing around that idea. Internal programs like “AI for Work,” now called the Agent Transformation Accelerator, are designed to bring AI into daily workflows across teams.
Meta believes this approach will make operations faster and more efficient. The trade-off is that human work becomes training data for the systems that may replace parts of it.
Meta is rolling out a workplace tracking tool that records employee clicks, keystrokes and screen activity to help train its AI systems. (Joan Cros/NurPhoto via Getty Images)
Privacy concerns around Meta’s employee tracking
Workplace monitoring has been around for years, but this takes it a step further. For example, tracking keystrokes and clicks in real time creates a level of oversight that companies have more often used with gig workers than office employees. As a result, employers can now watch day-to-day activity more closely.
At the same time, a legal gray area exists. In the United States, companies generally have broad authority to monitor employees as long as they provide notice. Because of that, employers have significant room to expand how they collect data.
However, outside the U.S., the rules can be stricter, and some regions place tighter limits on how companies collect and use employee data.
Even so, knowing someone is tracking your activity at this level can change how you work, how you communicate and how much autonomy you feel on the job.
How this fits into the broader AI job shift
Meta is hardly alone in pushing toward automation. Companies across Silicon Valley are investing heavily in AI systems that can write code, organize data and assist with decision-making. At the same time, many are cutting jobs or reshaping roles.
Meta plans to reduce its workforce by about 10 percent globally. Amazon has also trimmed tens of thousands of corporate roles in recent months.
The message is clear. AI has evolved beyond a tool that helps employees. It is increasingly positioned as a replacement for certain types of work.
Meta says its new internal monitoring tool will improve AI agents, but the program is also raising fresh concerns about employee privacy. (Donato Fasano/Getty Images)
What this means for you
Even if you do not work at Meta, this shift has wider implications. First, workplace monitoring is expanding beyond factories and delivery jobs into office environments. That could become standard across industries.
Second, your everyday work habits may become valuable data. Companies are realizing that human behavior is one of the most useful training resources for AI.
The line between assisting and replacing workers is getting thinner. Tools that start as helpers often evolve into something more autonomous over time.
If your job involves repetitive computer tasks, it is worth paying attention to how AI is being trained to handle them.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: CyberGuy.com.
Kurt’s key takeaways
Meta’s move marks a turning point. AI no longer relies only on public data or curated datasets. It now learns directly from how people work in real time. That shift raises practical questions about productivity and efficiency. It also brings deeper concerns about privacy, control and the future role of human workers. Companies argue they need this data to build better tools. At the same time, employees now help train systems that could eventually replace parts of their roles.
If your daily work became training data for AI that could eventually do your job, would you be comfortable with that? Let us know by writing to us at CyberGuy.com.
Elon Musk’s worst enemy in court is Elon Musk
Musk’s direct testimony was an improvement over yesterday — even if his lawyer kept asking leading questions to coach him on how to answer. But that memory was immediately obliterated by an absolutely miserable cross-examination. For hours, Musk refused to answer yes or no questions with yes or no, occasionally “forgot” things he’d testified to in the morning, and scolded defense lawyer William Savitt. I watched a few jury members glance at each other. During one testy exchange, one woman was rubbing her head. Me too, babe.
Even the judge, who at times prompted Musk to answer “yes” or “no,” was having a bad time. “He was at times difficult,” said Judge Yvonne Gonzalez Rogers after the jury left the room. (At one point, when she’d cut off his argumentative answer, she got the biggest laugh of the day.) “Part of management from my perspective is just to get through testimony.”
“I don’t yell at people,” Musk said
Musk spent a lot of yesterday painting this heroic picture of himself, and this morning, near the end of his direct examination, said, “I don’t lose my temper,” and “I don’t yell at people.” He said he might have called someone a “jackass,” but only in the spirit of saying something like, “don’t be a jackass.”
Immediately afterward, Savitt baited him into being petty, irritating, and generally hard to deal with. At one point, we all watched Musk lose his temper. He spent hours quibbling over simple questions. Again and again, Savitt referred back to Musk’s deposition, where he’d answered questions slightly differently, calling Musk’s accounts into question. Even if the average juror didn’t think he was lying, he was certainly inconsistent.
Savitt’s cross-examination left the distinct impression that Musk quit his quarterly payments to OpenAI because he wasn’t going to get full control of the company, then tried to kneecap it and fold it into Tesla. Initially, Musk wanted four board seats and 51 percent of the shares. The other co-founders would together get three seats, to be voted on by shareholders (including other employees). Though Musk said the eventual plan was to expand to 12 seats, he plainly had full control of the initial board of seven.
When Musk didn’t get what he wanted, he pulled the plug on his funding commitment and in 2017 hired Andrej Karpathy, OpenAI’s second-best engineer, away to Tesla. Despite his fiduciary duty to OpenAI as a board member, he did not try to persuade Karpathy to stay when he heard Karpathy wanted to leave. (“I think people should have a right to work where they want to work,” Musk said on the stand.)
“In my and Andrej’s opinion, Tesla is the only path that could even hope to hold a candle to Google.”
By 2018, Musk was saying that OpenAI had no path forward with its current structure, declaring it was on “a path of certain failure” in emails to Ilya Sutskever and Greg Brockman. His proposed solution was to merge Tesla and OpenAI. “In my and Andrej’s opinion, Tesla is the only path that could even hope to hold a candle to Google,” Musk said. The plan never came to fruition, and Musk resigned from OpenAI’s board that year.
As early as 2016, Musk had his own concerns about OpenAI as a non-profit. In an email to a colleague at Neuralink, he wrote “Deepmind is moving very fast. I am concerned that OpenAI is not on a path to catch up. Setting it up as non-profit might, in hindsight, have been the wrong move. Sense of urgency is not as high.”
“You mostly do unfair questions.”
Asked about this, Musk said he was just speculating. Savitt said, “Those are your words, yes or no?”
Musk replied, “This is a hypothetical.”
Savitt said, “So you thought it might have been a wrong move? That’s what you said?”
Getting Musk to put any of that on the record was intensely difficult. He refused repeatedly to answer questions like whether he knew cutting off OpenAI donations would create financial pressure, or whether he’d asked Karpathy to stay at OpenAI. He accused Savitt of asking questions that were “designed to trick me,” and we got multiple versions of this:
Musk: You mostly do unfair questions
Savitt: I am trying to put the questions as fairly as I can. I am doing my best.
Musk: That’s not true.
Musk was trying to make this as painful as possible for Savitt, but he also made it as painful as possible for everyone else, including the jury. Watching him simply refuse to answer questions during cross he’d easily answered during direct was annoying. Watching him refuse to admit he understood the nature of linear time — and therefore the fact that he was still a director of OpenAI’s board before he resigned in 2018 — was infuriating. It made him look dishonest.
“I’d lost trust in Altman and I was concerned they were really trying to steal the charity.”
Musk’s basic, oft-repeated story during this week’s testimony has been that OpenAI is “stealing a charity” and “looting a non-profit.” He maintains that he was all right with some limited for-profit activity, but not anything that would overshadow OpenAI’s nonprofit work and constitute “the tail wagging the dog” — another phrase he reached for, over and over, like a security blanket. In direct testimony, he painted himself as a trusting “fool” who had believed the wily promises of Sam Altman and his cohort: “I gave them $38 million of essentially free funding, which they used to create an $800 billion for-profit company,” he lamented. His own lawyer’s questioning wrapped up with Musk being purportedly blindsided by a multibillion-dollar deal with Microsoft.
“I’d lost trust in Altman and I was concerned they were really trying to steal the charity,” Musk said. “It turned out to be true.”
“I said I didn’t look closely! I read the headline!”
On cross-examination, Musk would barely even explain how much he bothered to learn about OpenAI’s operations before suing over them a few years later. When OpenAI proposed a for-profit arm around 2018, he got an email outlining the proposed corporate structure. On the stand, he said he’d only read the very first section of it, which said that contributors should consider their investments as donations that may have no return. “I read the highlighted box with ‘important warning,’” Musk said.
Savitt asked Musk if he’d raised any objection to the structure then, when he’d received the documents. Musk said that he didn’t read beyond that first box.
Musk: I didn’t read the fine print. We’re going into the fine print of this document.
Savitt: It’s a four-page document.
Musk then said he hadn’t read beyond taking this in the “spirit of a donation.” And then we got the deposition, where Musk said, “I don’t think I read this term sheet… I’m not sure I actually read this term sheet… I did not closely look at this term sheet.” Savitt pointed out that nowhere in the deposition did Musk say he’d read the first paragraph and Musk, raising his voice and effectively undermining his claims from the morning that he doesn’t lose his temper (lol) or yell at people (lmao), said, “I said I didn’t look closely! I read the headline!”
Imagine having to deal with this man as your cofounder. I think I would sooner open a vein.