Technology
21-year-old whose speech was impaired by tumor has voice replicated through AI smartphone app

- Lexi Bogan, 21, lost her voice last summer after doctors removed a life-threatening tumor lodged near the back of her brain.
- In April, she regained her voice through an AI-generated clone trained on a 15-second recording of her teenage voice.
- Bogan and her medical team believe the technology has valuable medical applications for people with speech impairments or loss.
The voice Alexis “Lexi” Bogan had before last summer was exuberant.
She loved to belt out Taylor Swift and Zach Bryan ballads in the car. She laughed all the time — even while corralling misbehaving preschoolers or debating politics with friends over a backyard fire pit. In high school, she was a soprano in the chorus.
Then that voice was gone.
ARTIFICIAL INTELLIGENCE HELPS PREDICT SENIORS’ LONG-TERM CARE NEEDS: ‘CRITICAL NEXT STEPS’
Doctors in August removed a life-threatening tumor lodged near the back of her brain. When the breathing tube came out a month later, Bogan had trouble swallowing and strained to say “hi” to her parents. Months of rehabilitation aided her recovery, but her speech is still impaired. Friends, strangers and her own family members struggle to understand what she is trying to tell them.
Alexis Bogan, whose speech was impaired by a brain tumor, uses an AI-powered smartphone app to create an audible drink order at a Starbucks drive-thru on April 29, 2024, in Lincoln, Rhode Island. The app converts her typed entries into a verbal message created using her original voice. (AP Photo/Steven Senne)
In April, the 21-year-old got her old voice back. Not the real one, but a voice clone generated by artificial intelligence that she can summon from a phone app. Trained on a 15-second time capsule of her teenage voice — sourced from a cooking demonstration video she recorded for a high school project — her synthetic but remarkably real-sounding AI voice can now say almost anything she wants.
She types a few words or sentences into her phone and the app instantly reads it aloud.
“Hi, can I please get a grande iced brown sugar oat milk shaken espresso,” said Bogan’s AI voice as she held the phone out her car’s window at a Starbucks drive-thru.
NEW AI TOOLS CAN HELP DOCTORS TAKE NOTES, MESSAGE PATIENTS, BUT THEY STILL MAKE MISTAKES
Experts have warned that rapidly improving AI voice-cloning technology can amplify phone scams, disrupt democratic elections and violate the dignity of people — living or dead — who never consented to having their voice recreated to say things they never spoke.
It’s been used to produce deepfake robocalls to New Hampshire voters mimicking President Joe Biden. In Maryland, authorities recently charged a high school athletic director with using AI to generate a fake audio clip of the school’s principal making racist remarks.
But Bogan and a team of doctors at Rhode Island’s Lifespan hospital group believe they’ve found a use that justifies the risks. Bogan is one of the first people — and the only one with her condition — who has been able to recreate a lost voice with OpenAI’s new Voice Engine. Some other AI providers, such as the startup ElevenLabs, have tested similar technology for people with speech impediments and loss — including a lawyer who now uses her voice clone in the courtroom.
“We’re hoping Lexi’s a trailblazer as the technology develops,” said Dr. Rohaid Ali, a neurosurgery resident at Brown University’s medical school and Rhode Island Hospital. Millions of people with debilitating strokes, throat cancer or neurodegenerative diseases could benefit, he said.
“We should be conscious of the risks, but we can’t forget about the patient and the social good,” said Dr. Fatima Mirza, another resident working on the pilot. “We’re able to help give Lexi back her true voice and she’s able to speak in terms that are the most true to herself.”
Mirza and Ali, who are married, caught the attention of ChatGPT-maker OpenAI because of their previous research project at Lifespan using the AI chatbot to simplify medical consent forms for patients. The San Francisco company reached out while on the hunt earlier this year for promising medical applications for its new AI voice generator.
Bogan was still slowly recovering from surgery. The illness started last summer with headaches, blurry vision and a droopy face, alarming doctors at Hasbro Children’s Hospital in Providence. They discovered a vascular tumor the size of a golf ball pressing on her brain stem and entangled in blood vessels and cranial nerves.
“It was a battle to get control of the bleeding and get the tumor out,” said pediatric neurosurgeon Dr. Konstantina Svokos.
The 10-hour length of the surgery coupled with the tumor’s location and severity damaged Bogan’s tongue muscles and vocal cords, impeding her ability to eat and talk, Svokos said.
“It’s almost like a part of my identity was taken when I lost my voice,” Bogan said.
The feeding tube came out this year. Speech therapy continues, enabling her to speak intelligibly in a quiet room, but with no sign she will recover the full clarity of her natural voice.
“At some point, I was starting to forget what I sounded like,” Bogan said. “I’ve been getting so used to how I sound now.”
Whenever the phone rang at the family’s home in the Providence suburb of North Smithfield, she would push it over to her mother to take her calls. She felt she was burdening her friends whenever they went to a noisy restaurant. Her dad, who has hearing loss, struggled to understand her.
Back at the hospital, doctors were looking for a pilot patient to experiment with OpenAI’s technology.
“The first person that came to Dr. Svokos’ mind was Lexi,” Ali said. “We reached out to Lexi to see if she would be interested, not knowing what her response would be. She was game to try it out and see how it would work.”
Bogan had to go back a few years to find a suitable recording of her voice to “train” the AI system on how she spoke. It was a video in which she explained how to make a pasta salad.
Her doctors intentionally fed the AI system just a 15-second clip; cooking sounds marred other parts of the video. The short sample was also all that OpenAI needed, an improvement over previous technology that required much lengthier recordings.
They also knew that getting something useful out of 15 seconds could be vital for any future patients who have no trace of their voice on the internet. A brief voicemail left for a relative might have to suffice.
When they tested it for the first time, everyone was stunned by the quality of the voice clone. Occasional glitches — a mispronounced word, a missing intonation — were mostly imperceptible. In April, doctors equipped Bogan with a custom-built phone app that only she can use.
“I get so emotional every time I hear her voice,” said her mother, Pamela Bogan, tears in her eyes.
“I think it’s awesome that I can have that sound again,” added Lexi Bogan, saying it helped “boost my confidence to somewhat where it was before all this happened.”
She now uses the app about 40 times a day and sends feedback she hopes will help future patients. One of her first experiments was to speak to the kids at the preschool where she works as a teaching assistant. She typed in “ha ha ha ha” expecting a robotic response. To her surprise, it sounded like her old laugh.
She’s used it at Target and Marshall’s to ask where to find items. It’s helped her reconnect with her dad. And it’s made it easier for her to order fast food.
Bogan’s doctors have started cloning the voices of other willing Rhode Island patients and hope to bring the technology to hospitals around the world. OpenAI said it is treading cautiously in expanding the use of Voice Engine, which is not yet publicly available.
A number of smaller AI startups already sell voice-cloning services to entertainment studios or make them more widely available. Most voice-generation vendors say they prohibit impersonation or abuse, but they vary in how they enforce their terms of use.
“We want to make sure that everyone whose voice is used in the service is consenting on an ongoing basis,” said Jeff Harris, OpenAI’s lead on the product. “We want to make sure that it’s not used in political contexts. So we’ve taken an approach of being very limited in who we’re giving the technology to.”
Harris said OpenAI’s next step involves developing a secure “voice authentication” tool so that users can replicate only their own voice. That might be “limiting for a patient like Lexi, who had sudden loss of her speech capabilities,” he said. “So we do think that we’ll need to have high-trust relationships, especially with medical providers, to give a little bit more unfettered access to the technology.”
Bogan has impressed her doctors by thinking about how the technology could help others with similar or more severe speech impairments.
“Part of what she has done throughout this entire process is think about ways to tweak and change this,” Mirza said. “She’s been a great inspiration for us.”
While for now she must fiddle with her phone to get the voice engine to talk, Bogan imagines an AI voice engine that improves upon older remedies for speech recovery — such as the robotic-sounding electrolarynx or a voice prosthesis — in melding with the human body or translating words in real time.
She’s less sure about what will happen as she grows older and her AI voice continues to sound like she did as a teenager. Maybe the technology could “age” her AI voice, she said.
For now, “even though I don’t have my voice fully back, I have something that helps me find my voice again,” she said.

Technology
Inside Mark Zuckerberg’s AI hiring spree

As first reported by Bloomberg, the Meta CEO has been personally asking top AI talent to join his new “superintelligence” AI lab and reboot Llama. His recruiting process typically goes like this: a cold outreach via email or WhatsApp that cites the recruit’s work history and requests a 15-minute chat. Dozens of researchers have gotten these kinds of messages at Google alone.
For those who do agree to hear his pitch (amazingly, not all of them do), Zuckerberg highlights the latitude they’ll have to make risky bets, the scale of Meta’s products, and the money he’s prepared to invest in the infrastructure to support them. He makes clear that this new team will be empowered and sit with him at Meta’s headquarters, where I’m told the desks have already been rearranged for the incoming team.
Most of the headlines so far have focused on the eye-popping compensation packages Zuckerberg is offering, some of which are well into the eight-figure range. As I’ve covered before, hiring the best AI researcher is like hiring a star basketball player: there are very few of them, and you have to pay up. Case in point: Zuckerberg basically just paid 14 Instagrams to hire away Scale AI CEO Alexandr Wang.
It’s easily the most expensive hire of all time, dwarfing the billions that Google spent to rehire Noam Shazeer and his core team from Character.AI (a deal Zuckerberg passed on). “Opportunities of this magnitude often come at a cost,” Wang wrote in his note to employees this week. “In this instance, that cost is my departure.”
Zuckerberg’s recruiting spree is already starting to rattle his competitors. The day before his offer deadline for some senior OpenAI employees, Sam Altman dropped an essay proclaiming that “before anything else, we are a superintelligence research company.” And after Zuckerberg tried to hire DeepMind CTO Koray Kavukcuoglu, he was given a larger SVP title and now reports directly to Google CEO Sundar Pichai.
I expect Wang to have the title of “chief AI officer” at Meta when the new lab is announced. Jack Rae, a principal researcher from DeepMind who has signed on, will lead pre-training. Meta certainly needs a reset. According to my sources, Llama has fallen so far behind that Meta’s product teams have recently discussed using AI models from other companies (although that is highly unlikely to happen). Meta’s internal coding tool for engineers, however, is already using Claude.
While Meta’s existing AI researchers have good reason to be looking over their shoulders, Zuckerberg’s $14.3 billion investment in Scale is making many longtime employees, or Scaliens, quite wealthy. They were popping champagne in the office this morning.
Then, Wang held his last all-hands meeting to say goodbye and cried. He didn’t mention what he would be doing at Meta. I expect his new team will be unveiled within the next few weeks, after Zuckerberg gets a critical mass of members to officially sign on.
Apple is accustomed to being on top of the tech industry, and for good reason: the company has enjoyed a nearly unrivaled run of dominance.
After spending time at Apple HQ this week for WWDC, I’m not sure that its leaders appreciate the meteorite that is heading their way. The hubris they display suggests they don’t understand how AI is fundamentally changing how people use and build software.
Heading into the keynote on Monday, everyone knew not to expect the revamped Siri that had been promised the previous year. Apple, to its credit, acknowledged that it dropped the ball there, and it sounds like a large language model rebuild of Siri is very much underway and coming in 2026.
The AI industry moves much faster than Apple’s release schedule, though. By the time Siri is perhaps good enough to keep pace, it will have to contend with the lock-in that OpenAI and others are building through their memory features. Apple and OpenAI are currently partners, but both companies want to ultimately control the interface for interacting with AI, which puts them on a collision course.
Apple’s decision to let developers use its own, on-device foundational models for free in their apps sounds strategically smart, but unfortunately, the models look far from leading. Apple ran its own benchmarks, which aren’t impressive, and has confirmed a measly context window of 4,096 tokens. It’s also saying that the models will be updated alongside its operating systems — a snail’s pace compared to how quickly AI companies move.
I’d be surprised if any serious developers use these Apple models, although I can see them being helpful to indie devs who are just getting started and don’t want to spend on the leading cloud models. I don’t think most people care about the privacy angle that Apple is claiming as a differentiator; they are already sharing their darkest secrets with ChatGPT and other assistants.
Some of the new Apple Intelligence features I demoed this week were impressive, such as live language translation for calls. Mostly, I came away with the impression that the company is heavily leaning on its ChatGPT partnership as a stopgap until Apple Intelligence and Siri are both where they need to be.
AI probably isn’t a near-term risk to Apple’s business. No one has shipped anything close to the contextually aware Siri that was demoed at last year’s WWDC. People will continue to buy Apple hardware for a long time, even after Sam Altman and Jony Ive announce their first AI device for ChatGPT next year. AR glasses aren’t going mainstream anytime soon either, although we can expect to see more eyewear from Meta, Google, and Snap over the coming year.
In aggregate, these AI-powered devices could begin to siphon away engagement from the iPhone, but I don’t see people fully replacing their smartphones for a long time. The bigger question after this week is whether Apple has what it takes to rise to the occasion and culturally reset itself for the AI era.
I would have loved to hear Tim Cook address this issue directly, but the only interview he did for WWDC was a cover story in Variety about the company’s new F1 movie.
- AI agents are coming. I recently caught up with Databricks CEO Ali Ghodsi ahead of his company’s annual developer conference this week in San Francisco. Given Databricks’ position, he has a unique, bird’s-eye view of where things are headed for AI. He doesn’t envision a near-term future where AI agents completely automate real-world tasks, but he does predict a wave of startups over the next year that will come close to completing actions in areas such as travel booking. He thinks humans will need (and want) to approve what an agent does before it goes off and completes a task. “We have most of the airplanes flying automated, and we still want pilots in there.”
- Buyouts are the new normal at Google. That much is clear after this week’s rollout of the “voluntary exit program” in core engineering, the Search organization, and some other divisions. In his internal memo, Search SVP Nick Fox was clear that management thinks buyouts have been successful in other parts of the company that have tried them. In a separate memo I saw, engineering exec Jen Fitzpatrick called the buyouts an “opportunity to create internal mobility and fresh growth opportunities.” Google appears to be attempting a cultural reset, which will be a challenging task for a company of its size. We’ll see if it can pull it off.
- Evan Spiegel wants help with AR glasses. I doubt that his announcement that consumer glasses are coming next year was solely aimed at AR developers. Telegraphing the plan and announcing that Snap has spent $3 billion on hardware to date feels more aimed at potential partners that want to make a bigger glasses play, such as Google. A strategic investment could help insulate Snap from the pain of the stock market. A full acquisition may not be off the table, either. When he was recently asked if he’d be open to a sale, Spiegel didn’t shut it down like he always has, but instead said he’d “consider anything” that helps the company “create the next computing platform.”
If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting.
As always, I welcome your feedback, especially if you’re an AI researcher fielding a juicy job offer. You can respond here or ping me securely on Signal.
Technology
AI tennis robot coach brings professional training to players

Finding a reliable tennis partner who matches your energy and skill level can be a challenge.
Now, with Tenniix, an artificial intelligence-powered tennis robot from T-Apex, players of all abilities have a new way to practice and improve.
Tenniix brings smart technology and adaptability to your training sessions, making it easier to get the most out of your time on the court.
Tenniix, the AI-powered tennis robot (T-Apex)
What is Tenniix? Meet the AI tennis robot transforming practice sessions
Tenniix is a compact, AI-powered tennis robot that weighs only 15 pounds, much lighter than traditional ball machines. Despite its small size, it serves balls at speeds of up to 75 mph, with spins reaching 5,000 RPM, and holds up to 100 balls at a time. The robot’s movable base allows it to deliver shots from different angles, keeping practice sessions dynamic and engaging.

A player lifting the Tenniix, an AI-powered tennis robot, out of the vehicle. (T-Apex)
AI tennis coaching: How Tenniix delivers realistic, pro-level practice
One of the standout features of Tenniix is its AI-driven coaching. The robot has been trained on over 8,000 hours of professional tennis data, allowing it to adjust its shots based on your position and playing style. This gives you a realistic and challenging experience every time you step on the court. Tenniix offers a wide variety of training modes, with more than 1,000 drills and three skill levels, so you can focus on everything from timing and footwork to shot accuracy.

Tenniix, the AI-powered tennis robot being carried (T-Apex)
Smart and simple: How to control Tenniix with voice, gestures or your phone
Controlling Tenniix is simple and intuitive. You can use voice commands or gestures to change spin, speed or shot type without interrupting your practice. Tenniix also features convenient app controls, letting you select training modes, adjust settings and review session data right from your smartphone for a fully customized and trackable experience. The robot’s modular design means you can start with the model that fits your needs and upgrade as your skills improve. With a built-in camera and AI chip, Tenniix analyzes your shots and provides instant feedback, helping you track your progress over time.

Advanced tracking and movement: How Tenniix adapts to your game in real time
Tenniix uses a combination of visual tracking and ultra-wideband sensors to know exactly where you and the ball are on the court. Its motorized base moves smoothly to deliver a wide range of shots, from high lobs to fast groundstrokes, at different speeds and spins. The battery lasts up to four hours, which is enough for a solid training session.

Practice like the pros: Train against Nadal-style shots with Tenniix
Another feature that sets Tenniix apart is its ability to mimic the playing styles of tennis greats like Nadal and Federer. This helps you prepare for matches by practicing against shots and spins similar to those you’ll face in real competition. Coaches and players have noted how Tenniix creates realistic rallies and adapts to different skill levels, making training both efficient and enjoyable.

Portable, smart and backed by support: Why tennis players love Tenniix
Tenniix is easy to carry and set up, making it convenient for players who want to practice anywhere. With thousands of shot combinations and drills, your workouts stay fresh and challenging. The smart technology, real-time tracking and instant feedback help make every session productive. Each robot comes with a one-year warranty and reliable customer service.
Tenniix models and pricing: Which AI tennis robot is right for you?
There are three Tenniix models to choose from. The Basic model is priced at $699, the Pro at $999 and the Ultra at $1,499. Each model offers a different set of features, with the Ultra version including advanced options like the movable base and enhanced vision system. Tenniix was launched through a Kickstarter campaign, giving early supporters a chance to back the project and receive the robot at a special price.
Kurt’s key takeaways
Tenniix feels less like a machine and more like a smart tennis partner who’s always ready to help you improve. Whether you want to polish your technique or get serious about your game, it offers a flexible and engaging way to train. If you’re looking for a training partner that adapts to you, Tenniix is worth checking out.
Would you rather challenge yourself playing against a robot like Tenniix, or do you prefer training with a human opponent? Let us know by writing us at Cyberguy.com/Contact.
Copyright 2025 CyberGuy.com. All rights reserved.
Technology
Google is shutting down Android Instant Apps over ‘low’ usage

Google has confirmed that it plans to shut down Android’s Instant Apps later this year, attributing the decision to “low” usage of the functionality.
Instant Apps were introduced in 2017 and allow developers to create mini versions of Android apps that load, well, instantly. Users can try apps and demo games with the click of a link, without having to fully install them. That makes the experience easier for users to navigate and gives developers more ways to find new audiences.
Android Authority first reported that Google is moving on from the feature, which came to light after developer Leon Omelan spotted a warning about the change in Android Studio:
“Instant Apps support will be removed by Google Play in December 2025. Publishing and all Google Play Instant APIs will no longer work. Tooling support will be removed in Android Studio Otter Feature Drop.”
Google spokesperson Nia Carter confirmed the decision to The Verge, explaining that Instant Apps simply haven’t been popular enough to continue supporting.
“Usage and engagement of Instant Apps have been low, and developers are leveraging other tools for app discovery such as AI-powered app highlights and simultaneous app installs,” Carter says. “This change allows us to invest more in the tools that are working well for developers, and help direct users to full app downloads to foster deeper engagement.”