Technology
Dangers of oversharing with AI tools
Have you ever stopped to think about how much your chatbot knows about you? Over the years, tools like ChatGPT have become incredibly adept at learning your preferences, habits and even some of your deepest secrets. But while this can make them seem more helpful and personalized, it also raises some serious privacy concerns. As much as you learn from these AI tools, they learn just as much about you.
Stay protected & informed! Get security alerts & expert tech tips – sign up for Kurt’s ‘The CyberGuy Report’ now.
A man using ChatGPT on his laptop (Kurt “CyberGuy” Knutsson)
What ChatGPT knows
ChatGPT learns a lot about you through your conversations, storing details like your preferences, habits and even sensitive information you might inadvertently share. This data, which includes both what you type and account-level information like your email or location, is often used to improve AI models but can also raise privacy concerns if mishandled.
Many AI companies collect data without explicit consent and rely on vast datasets scraped from the web, which can include sensitive or copyrighted material. These practices are now under scrutiny by regulators worldwide, with laws like Europe’s GDPR emphasizing users’ “right to be forgotten.” While ChatGPT can feel like a helpful companion, it’s essential to remain cautious about what you share to protect your privacy.
ChatGPT on a phone (Kurt “CyberGuy” Knutsson)
GEN-AI, THE FUTURE OF FRAUD AND WHY YOU MAY BE AN EASY TARGET
Why sharing sensitive information is risky
Sharing sensitive information with generative AI tools like ChatGPT can expose you to significant risks. Data breaches are a major concern, as demonstrated in March 2023 when a bug allowed users to see others’ chat histories, highlighting vulnerabilities in AI systems. Your chat history could also be accessed through legal requests, such as subpoenas, putting your private data at risk. User inputs are also often used to train future AI models unless you actively opt out, and this process isn’t always transparent or easy to manage.
These risks underscore the importance of exercising caution and avoiding the disclosure of sensitive personal, financial or proprietary information when using AI tools.
A woman using ChatGPT on her laptop (Kurt “CyberGuy” Knutsson)
5 WAYS TO ARM YOURSELF AGAINST CYBERATTACKS
What not to share with ChatGPT
To protect your privacy and security, it’s crucial to be mindful of what you share. Here are some things you should definitely keep to yourself.
- Identity details: Social Security numbers, driver’s license numbers and other personal identifiers should never be disclosed
- Medical records: While it might be tempting to seek interpretations for lab results or symptoms, these should be redacted before uploading
- Financial information: Bank account numbers and investment details are highly vulnerable if shared
- Corporate secrets: Proprietary data or confidential work-related information can expose trade secrets or client data
- Login credentials: Passwords, PINs and security answers should remain within secure password managers
ChatGPT on a Wikipedia page on a phone (Kurt “CyberGuy” Knutsson)
DON’T LET AI PHANTOM HACKERS DRAIN YOUR BANK ACCOUNT
How to protect your privacy while using chatbots
If you rely on AI tools but want to safeguard your privacy, consider these strategies.
1) Delete conversations regularly: Most platforms allow users to delete chat histories. Doing so ensures that sensitive prompts don’t linger on servers.
2) Use temporary chats: Features like ChatGPT’s Temporary Chat mode prevent conversations from being stored or used for training purposes.
3) Opt out of training data usage: Many AI platforms offer settings to exclude your prompts from being used for model improvement. Explore these options in account settings.
4) Anonymize inputs: Tools like Duck.ai anonymize prompts before sending them to AI models, reducing the risk of identifiable data being stored.
5) Secure your account: Enable two-factor authentication and use strong passwords for added protection against unauthorized access. Consider using a password manager to generate and store complex passwords. Remember, your account-level details like email addresses and location can be stored and used to train AI models, so securing your account helps limit how much personal information is accessible. Get more details about my best expert-reviewed password managers of 2025 here.
6) Use a VPN: Employ a reputable virtual private network (VPN) to encrypt internet traffic and conceal your IP address, enhancing online privacy during chatbot use. A VPN adds a crucial layer of anonymity, especially since data shared with AI tools can include sensitive or identifying information, even unintentionally. A reliable VPN is essential for protecting your online privacy and ensuring a secure, high-speed connection. For the best VPN software, see my expert review of the best VPNs for browsing the web privately on your Windows, Mac, Android and iOS devices.
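As a rough illustration of the "anonymize inputs" tip above, the snippet below shows one way to scrub obvious identifiers (Social Security numbers, email addresses, phone numbers) from a prompt before it ever reaches a chatbot. The patterns and placeholder labels are illustrative only, not an exhaustive PII filter, and this is not how any particular tool works under the hood.

```python
import re

# Illustrative patterns for a few common U.S. identifiers. A real PII
# filter would need far broader coverage (names, addresses, account
# numbers) and should not be relied on as a complete safeguard.
PATTERNS = {
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace each matched identifier with its placeholder label."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(label, prompt)
    return prompt

print(redact("My SSN is 123-45-6789, email me at jane@example.com"))
```

The idea is simply to strip identifying details on your own device before anything is sent, which is the same principle behind anonymizing front ends like Duck.ai.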
DATA REMOVAL DOES WHAT VPNS DON’T: HERE’S WHY YOU NEED BOTH
Kurt’s key takeaways
Chatbots like ChatGPT are undeniably powerful tools that enhance productivity and creativity. However, their ability to store and process user data demands caution. By understanding what not to share and taking steps to protect your privacy, you can enjoy the benefits of AI while minimizing risks. Ultimately, it’s up to you to strike a balance between leveraging AI’s capabilities and safeguarding your personal information. Remember: Just because a chatbot feels human doesn’t mean it should be treated like one. Be mindful of what you share and always prioritize your privacy.
Do you think AI companies need to do more to protect users’ sensitive information and ensure transparency in data collection and usage? Let us know by writing us at Cyberguy.com/Contact.
For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter.
Ask Kurt a question or let us know what stories you’d like us to cover.
Copyright 2025 CyberGuy.com. All rights reserved.
Google’s annual revenue tops $400 billion for the first time
Google’s parent company, Alphabet, has earned more than $400 billion in annual revenue for the first time. The company announced the milestone as part of its Q4 2025 earnings report released on Wednesday, which highlights the 15 percent year-over-year increase as its cloud business and YouTube continue to grow.
As noted in the earnings report, Google’s Cloud business reached a $70 billion run rate in 2025, while YouTube’s annual revenue soared beyond $60 billion across ads and subscriptions. Alphabet CEO Sundar Pichai told investors that YouTube remains the “number one streamer,” citing data from Nielsen. The company also now has more than 325 million paid subscribers, led by Google One and YouTube Premium.
Additionally, Pichai noted that Google Search saw more usage over the past few months “than ever before,” adding that daily AI Mode queries have doubled since launch. Google will soon take advantage of the popularity of its Gemini app and AI Mode, as it plans to build an agentic checkout feature into both tools.
Waymo under federal investigation after child struck
Federal safety regulators are once again taking a hard look at self-driving cars after a serious incident involving Waymo, the autonomous vehicle company owned by Alphabet.
This time, the investigation centers on a Waymo vehicle that struck a child near an elementary school in Santa Monica, California, during morning drop-off hours. The crash happened Jan. 23 and raised immediate questions about how autonomous vehicles behave around children, school zones and unpredictable pedestrian movement.
On Jan. 29, the National Highway Traffic Safety Administration confirmed it had opened a new preliminary investigation into Waymo’s automated driving system.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
TESLA’S SELF-DRIVING CARS UNDER FIRE AGAIN
Waymo operates Level 4 self-driving vehicles in select U.S. cities, where the car controls all driving tasks without a human behind the wheel. (AP Photo/Terry Chea, File)
What happened near the Santa Monica school?
According to documents posted by NHTSA, the crash occurred within two blocks of an elementary school during normal drop-off hours. The area was busy. There were multiple children present, a crossing guard on duty and several vehicles double-parked along the street.
Investigators say the child ran into the roadway from behind a double-parked SUV while heading toward the school. The Waymo vehicle struck the child, who suffered minor injuries. No safety operator was inside the vehicle at the time.
NHTSA’s Office of Defects Investigation is now examining whether the autonomous system exercised appropriate caution given its proximity to a school zone and the presence of young pedestrians.
AI TRUCK SYSTEM MATCHES TOP HUMAN DRIVERS IN MASSIVE SAFETY SHOWDOWN WITH PERFECT SCORES
Federal investigators are now examining whether Waymo’s automated system exercised enough caution near a school zone during morning drop-off hours. (Waymo)
Why federal investigators stepped in
The NHTSA says the investigation will focus on how Waymo’s automated driving system is designed to behave in and around school zones, especially during peak pickup and drop-off times.
That includes whether the vehicle followed posted speed limits, how it responded to visual cues like crossing guards and parked vehicles and whether its post-crash response met federal safety expectations. The agency is also reviewing how Waymo handled the incident after it occurred.
Waymo said it voluntarily contacted regulators the same day as the crash and plans to cooperate fully with the investigation. In a statement, the company said it remains committed to improving road safety for riders and everyone sharing the road.
Waymo responds to the federal investigation
We reached out to Waymo for comment, and the company provided the following statement:
“At Waymo, we are committed to improving road safety, both for our riders and all those with whom we share the road. Part of that commitment is being transparent when incidents occur, which is why we are sharing details regarding an event in Santa Monica, California, on Friday, January 23, where one of our vehicles made contact with a young pedestrian. Following the event, we voluntarily contacted the National Highway Traffic Safety Administration (NHTSA) that same day. NHTSA has indicated to us that they intend to open an investigation into this incident, and we will cooperate fully with them throughout the process.
“The event occurred when the pedestrian suddenly entered the roadway from behind a tall SUV, moving directly into our vehicle’s path. Our technology immediately detected the individual as soon as they began to emerge from behind the stopped vehicle. The Waymo Driver braked hard, reducing speed from approximately 17 mph to under 6 mph before contact was made.
“To put this in perspective, our peer-reviewed model shows that a fully attentive human driver in this same situation would have made contact with the pedestrian at approximately 14 mph. This significant reduction in impact speed and severity is a demonstration of the material safety benefit of the Waymo Driver.
“Following contact, the pedestrian stood up immediately, walked to the sidewalk and we called 911. The vehicle remained stopped, moved to the side of the road and stayed there until law enforcement cleared the vehicle to leave the scene.
“This event demonstrates the critical value of our safety systems. We remain committed to improving road safety where we operate as we continue on our mission to be the world’s most trusted driver.”
Understanding Waymo’s autonomy level
Waymo vehicles fall under Level 4 autonomy on the six-level SAE driving automation scale used by NHTSA.
At Level 4, the vehicle handles all driving tasks within specific service areas. A human driver is not required to intervene, and no safety operator needs to be present inside the car. However, these systems do not operate everywhere and are currently limited to ride-hailing services in select cities.
The NHTSA has been clear that Level 4 vehicles are not available for consumer purchase, even though passengers may ride inside them.
This is not Waymo’s first federal probe
This latest investigation follows a previous NHTSA evaluation that opened in May 2024. That earlier probe examined reports of Waymo vehicles colliding with stationary objects like gates, chains and parked cars. Regulators also reviewed incidents in which the vehicles appeared to disobey traffic control devices.
That investigation was closed in July 2025 after regulators reviewed the data and Waymo’s responses. Safety advocates say the new incident highlights unresolved concerns.
UBER UNVEILS A NEW ROBOTAXI WITH NO DRIVER BEHIND THE WHEEL
No safety operator was inside the vehicle at the time of the crash, raising fresh questions about how autonomous cars handle unpredictable situations involving children. (Waymo)
What this means for you
If you live in a city where self-driving cars operate, this investigation matters more than it might seem. School zones are already high-risk areas, even for attentive human drivers. Autonomous vehicles must be able to detect unpredictable behavior, anticipate sudden movement and respond instantly when children are present.
This case will likely influence how regulators set expectations for autonomous driving systems near schools, playgrounds and other areas with vulnerable pedestrians. It could also shape future rules around local oversight, data reporting and operational limits for self-driving fleets.
For parents, commuters and riders, the outcome may affect where and when autonomous vehicles are allowed to operate.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.
Kurt’s key takeaways
Self-driving technology promises safer roads, fewer crashes and less human error. But moments like this remind us that the hardest driving scenarios often involve human unpredictability, especially when children are involved. Federal investigators now face a crucial question: Did the system act as cautiously as it should have in one of the most sensitive driving environments possible? How they answer that question could help define the next phase of autonomous vehicle regulation in the United States.
Do you feel comfortable sharing the road with self-driving cars near schools, or is that a line technology should not cross yet? Let us know by writing to us at Cyberguy.com/Contact.
Copyright 2026 CyberGuy.com. All rights reserved.
Adobe actually won’t discontinue Animate
Adobe is no longer planning to discontinue Adobe Animate on March 1st. In an FAQ, the company says that Animate will now be in maintenance mode and that it has “no plans to discontinue or remove access” to the app. Animate will still receive “ongoing security and bug fixes” and will still be available for “both new and existing users,” but it won’t get new features.
An announcement email that went out to Adobe Animate customers about the discontinuation did “not meet our standards and caused a lot of confusion and angst within the community,” according to a Reddit post from Adobe community team member Mike Chambers.
Animate will be available in maintenance mode “indefinitely” to “individual, small business, and enterprise customers,” according to Adobe. Before the change, Adobe said that non-enterprise customers could access Animate and download content until March 1st, 2027, while enterprise customers had until March 1st, 2029.