Technology
Genealogy boom exposes personal data scammers can exploit
NEW: You can now listen to Fox News articles!
Millions of Americans are digging into their roots. Genealogy has quietly become one of the fastest-growing hobbies in North America, with the industry now valued at more than $5 billion. From DNA kits to digital family tree builders, people are discovering relatives, tracing migration stories and reconnecting with their past.
There is something deeply meaningful about learning where you come from. However, there is another side to this trend that many people never consider.
The same information that helps you find your great-grandparents can also help scammers find you. Once personal details appear online, they rarely stay in one place. And that can create unexpected security risks.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.
A woman looks at the contents of a 23andMe DNA testing kit in Oakland, California, on June 8, 2018. Millions of Americans using family tree platforms may be unknowingly sharing sensitive details like maiden names and birthplaces online. (Cayce Clifford/Bloomberg via Getty Images)
What family tree sites encourage you to upload
Genealogy platforms feel harmless. In fact, they are designed to feel warm, nostalgic and personal.
To build a detailed family tree, users often upload information such as:
- Full legal names, including maiden names
- Birth dates
- Places of birth
- Marriage records
- Address history
- Names of children, siblings and relatives
- Old family photos
- Obituaries and memorial information
Each detail may seem harmless on its own. But together, they create something extremely valuable: a fully mapped identity profile. Not just of you, but of your entire family network. And that kind of information is exactly what scammers look for.
Once information is uploaded, it rarely stays private
Many genealogy platforms allow public trees by default. Even when accounts are private, information can still spread in several ways.
For example, data can appear through:
- Shared family trees
- Public obituaries
- Search features
- Data scraping tools
- Third-party integrations
Over time, this information becomes searchable. It may be indexed by search engines. Bots can scrape it. Data brokers can absorb it into their databases. Once that happens, your family details no longer live only on a genealogy website. They can appear on people search websites, background check platforms and marketing databases. And you may never know it happened.
The 23andMe wake-up call
The recent bankruptcy of the DNA testing company 23andMe served as a reminder for millions of users. When companies change ownership or shut down, your data does not simply disappear. Genetic data raises serious privacy concerns on its own.
However, the broader genealogy ecosystem carries a similar risk. When you upload deeply personal, multi-generational information, you lose control over how long it is stored, who can access it and where it may end up in the future. Even if you trust a company today, you cannot control what happens tomorrow.
A woman collects a DNA sample in Oakland, California, on June 8, 2018. Personal data uploaded to genealogy sites can spread across data broker networks, making it difficult to control where information appears. (Cayce Clifford/Bloomberg via Getty Images)
Why scammers love family tree data
Cybercriminals no longer focus only on credit card numbers. Instead, they want context. They want personal details that help them impersonate you or bypass security checks. Family tree websites provide exactly that. Here are three ways criminals can exploit genealogy data.
1) Answering security questions
Many financial institutions still rely on knowledge-based authentication questions, such as:
- What is your mother’s maiden name?
- In what city were you born?
- What was the name of your first school?
Unfortunately, those answers often appear directly in public family trees. With enough background information, scammers may bypass account protections without ever knowing your password.
2) Crafting believable impersonation scams
Now imagine receiving a message like this: “Hi, Aunt Linda, it’s Jake. I’m stuck overseas and need help.”
If a scammer already knows:
- Your relatives’ names
- Who is related to whom
- Where family members live
They can create highly believable emergency scams. These are no longer random “grandparent scams.” They are customized attacks, and genealogy data makes that customization easy.
3) Targeting entire families
When one person’s information becomes exposed, it rarely stops there. A scammer can quickly map your entire family network. They may identify:
- Adult children
- Elderly parents
- Siblings
- Multiple addresses
Then they can launch phishing attempts across several family members at once. In other words, one data leak can turn into a family-wide vulnerability.
How genealogy data strengthens data broker profiles
Here is where the situation becomes even more concerning. Data brokers do not just collect phone numbers and addresses. They build detailed relational profiles.
These profiles often include:
- Household connections
- Extended relatives
- Age ranges
- Property ownership
- Income indicators
When genealogy data gets scraped or resold, it strengthens those profiles. Your listing may suddenly include:
- An accurate maiden name
- Verified birth year
- Confirmed past addresses
- Detailed family connections
The richer the profile becomes, the more valuable it is, not only to marketers but also to criminals.
“But I set my tree to private”
Privacy settings certainly help. However, they do not solve the entire problem.
Even if your family tree is private:
- Relatives may publish overlapping information
- Obituaries remain public records
- Historical records continue to be digitized
- Other users may repost or copy data
Once information spreads across multiple websites, tracking it becomes extremely difficult. In addition, data brokers constantly refresh their databases. Even if you remove your data once, it may quietly reappear months later.
A technician works on a device that conducts direct-to-consumer genetic testing at the University of Tokyo’s Institute of Medical Science in Tokyo, Japan, on July 9, 2014. Genealogy websites may help you trace your roots, but experts warn they can also expose personal data that scammers use to target entire families. (Kiyoshi Ota/Bloomberg via Getty Images)
How to enjoy genealogy without exposing yourself
You do not have to give up genealogy. You simply need to approach it the same way you approach social media.
Consider these precautions:
- Limit public visibility on family trees
- Avoid posting full birthdates
- Be cautious with maiden names
- Remove exact address histories
- Think carefully before sharing details about living relatives
Most importantly, remember that the real risk is not the genealogy site itself. The risk is where that data travels next.
Stop your family history from becoming a scammer’s playbook
Once personal information enters the data broker ecosystem, it can spread far beyond the original platform. That is why proactive privacy protection matters.
Data brokers collect and resell personal information gathered from public records, websites and scraped databases. If genealogy details such as maiden names, birthplaces and family relationships get pulled into those systems, they can quietly appear across people-search sites and background check databases.
Over time, this information can make it easier for scammers to build detailed identity profiles. Those profiles can be used for impersonation scams, phishing attacks or attempts to bypass security questions.
You can take steps by searching your name and relatives online to see what information is publicly visible, submitting removal requests to people-search sites and limiting what you share publicly on genealogy platforms. Taking these precautions can help prevent your family history from becoming a roadmap for scammers.
However, manually tracking down and removing your information across hundreds of sites can be time-consuming and difficult to keep up with.
One of the most effective steps you can take is to use a data removal service to help remove your information from data broker and people-search websites. While no service can guarantee the complete removal of your data from the internet, using one is a smart choice.
These services do the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. They also continue scanning for new exposures, which helps prevent your data from quietly reappearing later.
It’s what gives me peace of mind and has proven to be one of the most effective ways to erase personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing breach data with details they might find online, making it much harder for them to target you.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
Kurt’s key takeaways
Genealogy can be an incredibly rewarding hobby. Discovering where your family came from often creates a deeper sense of connection and identity. But the digital tools that make this research easier can also expose more information than many people realize. A family tree filled with birthplaces, maiden names and relatives may look harmless, yet it can quietly create a roadmap for scammers. The good news is you do not have to stop exploring your ancestry. You simply need to share carefully, protect your data and understand how information travels online.
Have you ever searched for your own name or family members online and been surprised by how much personal information was publicly available? Let us know by writing to us at Cyberguy.com.
Copyright 2026 CyberGuy.com. All rights reserved.
Technology
AI chatbots refilling psych meds sparks debate
If you have ever waited weeks just to renew a mental health prescription, you already know how frustrating the system can feel. Now imagine handling that refill through a chatbot instead of a doctor.
That kind of thing is already starting to happen. In Utah, a new pilot program is allowing an artificial intelligence system from Legion Health to renew certain psychiatric medications without direct approval from a physician each time. State officials say this could speed things up and reduce costs.
Many psychiatrists are not convinced. They are asking whether this actually solves the problem it claims to fix.
Sign up for my FREE CyberGuy Report
- Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox.
- For simple, real-world ways to spot scams early and stay protected, visit CyberGuy.com – trusted by millions who watch CyberGuy on TV daily.
- Plus, you’ll get instant access to my Ultimate Scam Survival Guide free when you join.
Utah launches AI chatbot to renew select psychiatric prescriptions, raising questions about safety and oversight. (pocketlight/Getty Images)
How the AI prescription system works
Before this starts sounding like a robot psychiatrist, it is worth noting that the program is tightly limited. The AI only renews a short list of lower-risk medications that a doctor has already prescribed. These include commonly used antidepressants like Prozac, Zoloft and Wellbutrin.
To qualify, patients must meet strict requirements. You need to be stable on your current medication. Recent dosage changes or a psychiatric hospitalization will disqualify you. You also need to check in with a healthcare provider after a set number of refills or within a certain time frame.
During the process, the chatbot asks about symptoms, side effects and warning signs such as suicidal thoughts. If anything raises concern, it sends the case to a real doctor before approving a refill. According to an agreement filed with Utah’s Office of Artificial Intelligence Policy, the pilot builds in strict safeguards, including human review thresholds and automatic escalation for higher-risk cases. The system cannot prescribe new medications or manage drugs that require close monitoring. As a result, many complex conditions are excluded from the pilot.
Why some experts are pushing back
Even with those guardrails, many psychiatrists are uneasy. Brent Kious, a psychiatrist and professor at the University of Utah School of Medicine, has questioned whether AI systems like this actually solve the access problem they are designed to address.
He has suggested that the benefits of an AI-based refill system may be overstated, especially since patients must already be stable and under care to qualify. Kious has also raised concerns about how much these systems rely on self-reported answers. Patients may not recognize side effects, may answer inaccurately, or may adjust their responses to get the outcome they want.
He has further questioned whether current AI tools can safely handle even routine parts of psychiatric care, noting that treatment decisions often depend on factors that go beyond simple screening questions. He has also pointed to a lack of transparency in how these systems operate, which can make it harder for doctors and patients to fully trust them.
A new pilot program allows AI to handle some mental health medication refills without direct doctor approval. (Sezeryadigar/Getty Images)
The promise behind the technology
Supporters of the program are focused on access. A lot of people in Utah still struggle to get mental health care. Wait times can stretch for weeks. In some areas, there simply are not enough providers available.
The idea is that AI can take care of routine refill requests so doctors have more time to focus on patients with more complex needs. That could help take some pressure off the system.
Legion Health is also leaning into convenience. The service is expected to cost about $19 a month and is designed to make refills quicker and easier for patients who qualify. From a big-picture view, that could help. From a patient’s point of view, the tradeoff may feel a little more complicated.
We reached out to Legion Health for comment, but did not hear back before our deadline.
What this means to you
If you rely on mental health medication, this kind of system could change how you manage your care. You may be able to get refills more quickly if your condition is stable and your treatment plan is not changing.
At the same time, this does not replace your doctor. It does not handle new diagnoses or complex decisions. It also adds another layer between you and your care. Instead of a conversation, you are interacting with a system that depends on how you answer a series of questions.
Mental health treatment often depends on small details. Changes in mood, sleep or behavior can matter more than a simple yes or no response. That is where some experts believe human care still has a clear advantage.
The bigger question about AI in healthcare
This pilot is only one step in a much larger shift. Utah is already experimenting with AI in other areas of healthcare. Companies like Legion are signaling plans to expand beyond a single state. What starts with simple refills could eventually move into more complex decisions. That is where the conversation becomes more urgent. Is this a practical way to improve access to care, or does it risk reducing something deeply personal into a transaction driven by software?
Psychiatrists question whether AI prescription refills address access issues or create new risks for patients. (SDI Productions/Getty Images)
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my quiz here: Cyberguy.com.
Kurt’s key takeaways
There is no question that access to mental health care needs improvement. Long wait times and limited availability are real problems that affect millions of people. AI may help in specific situations, especially when the task is routine and the patient is stable. Still, convenience should not be confused with quality. For now, this system is narrow in scope and closely monitored. That makes it easier to test. It also highlights how early we are in this transition. The technology will continue to evolve. The real question is whether the safeguards, oversight and transparency will evolve at the same pace.
Would you feel comfortable letting a chatbot handle part of your mental health care, or is that a line you do not want technology to cross? Let us know by writing to us at Cyberguy.com.
Copyright 2026 CyberGuy.com. All rights reserved.
Technology
ChatGPT has a new $100 per month Pro subscription
OpenAI has announced a new version of its ChatGPT Pro subscription that costs $100 per month. The new Pro tier offers “5x more” usage of its Codex coding tool than the $20 per month Plus subscription and “is best for longer, high-effort Codex sessions,” OpenAI says.
The company is introducing the new tier as it tries to win over users from Anthropic and its popular Claude Code tool. ChatGPT’s $100 per month option will directly compete with Anthropic’s “Max” tier for Claude, which costs the same price. It also offers a middle ground between the $20 per month Plus tier and the $200 version of the Pro tier.
(Yes, there are now two tiers of “Pro”; while the new tier “still offers access to all Pro features,” OpenAI says that the more expensive one has even higher usage limits.)
According to OpenAI, ChatGPT Plus “will continue to be the best offer at $20 for steady, day-to-day usage of Codex, and the new $100 Pro tier offers a more accessible upgrade path for heavier daily use.” OpenAI also offers an $8 per month Go tier and a free tier.
Technology
Humanoid robots hit mass production in China
For years, humanoid robots felt like something you watched on social media. Impressive, yes. Practical, not quite. That line just got blurry.
A new factory in China is now producing humanoid robots at a pace that feels closer to car manufacturing. One robot rolls off the line every 30 minutes.
That adds up to about 10,000 units a year. This is not a prototype phase anymore. This is real production.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. For simple, real-world ways to spot scams early and stay protected, visit CyberGuy.com – trusted by millions who watch CyberGuy on TV daily. Plus, you’ll get instant access to my Ultimate Scam Survival Guide free when you join.
A Chinese factory is producing humanoid robots every 30 minutes, signaling a shift from experimental tech to mass production. (Tang Yanjun/China News Service/VCG via Getty Images)
Inside China’s humanoid robot factory
The production line comes from a partnership between Leju Robotics and Dongfang Precision Science & Technology. What makes this facility stand out is how structured and repeatable the process has become.
There are 24 precision assembly stages. On top of that, 77 inspection steps check everything before a robot leaves the line. That level of testing matters because reliability has always been a weak spot for humanoid machines. Efficiency also jumped. The company says output improved by more than 50 percent compared to older production methods.
Then there is flexibility. The system can switch between robot models without shutting everything down. That means the same factory can serve multiple industries, from automotive to home appliances. This is how you move from cool tech to actual business.
Why humanoid robot production at 10,000 units matters
The robotics industry has reached a turning point. It is no longer enough to show what a robot can do. Companies now need to prove they can build them at scale.
That shift is showing up across the market.
- Agibot has already hit 10,000 units
- Unitree Robotics is planning a major expansion with new funding
- UBTECH Robotics is working to lower costs to below $20,000 per robot
Investors are watching production numbers closely. High output signals that a company can move beyond demos and into real deployment. It also shows confidence that there will be actual demand.
High-volume humanoid robot production marks a turning point for the global robotics industry. (Kevin Frayer/Getty Images)
The shift to large-scale humanoid robot manufacturing
There is another important change here that is easy to miss. Companies are splitting roles. In this case, Leju Robotics focuses on design and software. Dongfang Precision Science & Technology handles production and scaling. This model looks a lot like how other tech industries evolved. One group builds the brain. Another builds the product at scale. That separation could speed things up across the entire robotics space.
What is still holding humanoid robots back
Even with all this progress, one big problem remains: software. Building the body is getting easier. Teaching it how to function in the real world is still difficult.
Homes, warehouses and public spaces are unpredictable. Objects vary in shape. Lighting changes. Tasks that seem simple for humans can confuse a machine.
Factories can now produce thousands of robots. That does not guarantee those robots will be useful right away. The pressure is shifting toward AI developers to close that gap.
What this means to you
This might feel far removed from everyday life. It is not. As production ramps up, costs usually come down. That opens the door for more businesses to adopt humanoid robots. You could start seeing them in warehouses, retail environments or service roles sooner than expected. At the same time, this raises questions about jobs, safety and how comfortable people feel interacting with machines that look and move like humans. The speed of this shift is what stands out. What felt experimental last year is now moving toward mainstream deployment.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my quiz here: Cyberguy.com.
China ramps up humanoid robot manufacturing with a facility capable of producing 10,000 units annually. (Tang Yanjun/China News Service/VCG via Getty Images)
Kurt’s key takeaways
Humanoid robots are entering a new phase. The conversation is no longer about whether they can be built. It is about how quickly they can be produced and where they will actually work. Factories like this one in China are setting the pace. Now the rest of the industry has to keep up.
If humanoid robots become common in workplaces, where would you draw the line between helpful automation and going too far? Let us know by writing to us at Cyberguy.com.
Copyright 2026 CyberGuy.com. All rights reserved.