At this point, it’s becoming easier to say which AI startups Mark Zuckerberg hasn’t looked at acquiring.
Meta held talks to buy Thinking Machines, Perplexity, and Safe Superintelligence
In addition to Ilya Sutskever’s Safe Superintelligence (SSI), sources tell me the Meta CEO recently discussed buying ex-OpenAI CTO Mira Murati’s Thinking Machines Lab and Perplexity, the AI-native Google rival. None of these talks progressed to the formal offer stage for various reasons, including disagreements over deal prices and strategy, but together they illustrate how aggressively Zuckerberg has been canvassing the industry to reboot his AI efforts.
Now, details about the team Zuckerberg is assembling are starting to come into view: SSI co-founder and CEO Daniel Gross, along with ex-GitHub CEO Nat Friedman, are poised to co-lead the Meta AI assistant. Both men will report to Alexandr Wang, the former Scale CEO Zuckerberg just paid over $14 billion to quickly hire. Wang said goodbye to his Scale team last Friday and was in the Meta office on Monday. This week, he has been meeting with top Meta leaders (more on that below) and continuing to recruit for the new AI team Zuckerberg has tasked him with building. I expect the team to be unveiled as soon as next week.
Rather than join Meta, Sutskever, Murati, and Perplexity CEO Aravind Srinivas have all gone on to raise more money at higher valuations. Sutskever, a titan of the AI research community who co-founded OpenAI, recently raised a couple of billion dollars for SSI. Both Meta and Google are investors in his company, I’m told. Murati also just raised a couple of billion dollars. Neither she nor Sutskever is close to releasing a product. Srinivas, meanwhile, is in the process of raising around $500 million for Perplexity.
Spokespeople for all the companies involved either declined to comment or didn’t respond in time for publication. The Information and CNBC first reported Zuckerberg’s talks with Safe Superintelligence, while Bloomberg first reported the Perplexity talks.
While Zuckerberg’s recruiting drive is motivated by the urgency he feels to fix Meta’s AI strategy, the situation also highlights the fierce competition for top AI talent these days. In my conversations this week, those on the inside of the industry aren’t surprised by Zuckerberg making nine-figure — or even, yes, 10-figure — compensation offers for the best AI talent. There are certain senior people at OpenAI, for example, who are already compensated in that ballpark, thanks to the company’s meteoric increase in valuation over the last few years.
Speaking of OpenAI, it’s clear that CEO Sam Altman is at least a bit rattled by Zuckerberg’s hiring spree. His decision to appear on his brother’s podcast this week and say that “none of our best people” are leaving for Meta was probably meant to convey a position of strength, but in reality, it looks like he is throwing his former colleagues under the bus. I was confused by Altman’s suggestion that Meta paying a lot upfront for talent won’t “set up a great culture.” After all, didn’t OpenAI just pay $6.5 billion to hire Jony Ive and his small hardware team?
“We think that glasses are the best form factor for AI”
When I joined a Zoom call with Alex Himel, Meta’s VP of wearables, this week, he had just gotten off a call with Zuckerberg’s new AI chief, Alexandr Wang.
“There’s an increasing number of Alexes that I talk to on a regular basis,” Himel joked as we started our conversation about Meta’s new glasses release with Oakley. “I was just in my first meeting with him. There were like three people in a room with the camera real far away, and I was like, ‘Who is talking right now?’ And then I was like, ‘Oh, hey, it’s Alex.’”
The following Q&A has been edited for length and clarity:
How did your meeting with Alex just now go?
The meeting was about how to make AI as awesome as it can be for glasses. Obviously, there are some unique use cases in the glasses that aren’t stuff you do on a phone. The thing we’re trying to figure out is how to balance it all, because AI can be everything to everyone or it could be amazing for more specific use cases.
We’re trying to figure out how to strike the right balance because there’s a ton of stuff in the underlying Llama models and that whole pipeline that we don’t care about on glasses. Then there’s stuff we really, really care about, like egocentric view and trying to feed video into the models to help with some of the really aspirational use cases that we wouldn’t build otherwise.
You are referring to this new lineup with Oakley as “AI glasses.” Is that the new branding for this category? They are AI glasses, not smart glasses?
We refer to the category as AI glasses. You saw Orion. You used it for longer than anyone else in the demo, which I commend you for. We used to think that’s what you needed to hit scale for this new category. You needed the big field of view and display to overlay virtual content. Our opinion of that has definitely changed. We think we can hit scale faster, and AI is the reason we think that’s possible.
Right now, the top two use cases for the glasses are audio — phone calls, music, podcasts — and taking photos and videos. We look at participation rates of our active users, and those have been one and two since launch. Audio is one. A very close second is photos and videos.
AI has been number three from the start. As we’ve been launching more markets — we’re now in 18 — and we’ve been adding more features, AI is creeping up. Our biggest investment by a mile on the software side is AI functionality, because we think that glasses are the best form factor for AI. They are something you’re already wearing all the time. They can see what you see. They can hear what you hear. They’re super accessible.
Is your goal to have AI supersede audio and photo to be the most used feature for glasses, or is that not how you think about it?
From a math standpoint, at best, you could tie. We do want AI to be something that’s increasingly used by more people more frequently. We think there’s definitely room for the audio to get better. There’s definitely room for image quality to get better. The AI stuff has much more headroom.
How much of the AI is onboard the glasses versus the cloud? I imagine you have lots of physical constraints with this kind of device.
We’ve now got one-billion-parameter models that can run on the frame. So, increasingly, there’s stuff there. Then we have stuff running on the phone.
If you were watching WWDC, Apple made a couple of announcements that we haven’t had a chance to test yet, but we’re excited about. One is the Wi-Fi Aware APIs. We should be able to transfer photos and videos without having people tap that annoying dialog box every time. That’d be great. The second one was processor background access, which should allow us to do image processing when you transfer the media over. Syncing would work just like it does on Android.
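For context on the Android side Himel mentions, Android has offered this kind of direct peer-to-peer discovery since Android 8 through the Wi-Fi Aware APIs in android.net.wifi.aware. Below is a minimal Kotlin sketch of the publish half of that handshake; the service name is invented for illustration, and Meta's actual glasses-to-phone sync pipeline is not public.

```kotlin
import android.content.Context
import android.net.wifi.aware.AttachCallback
import android.net.wifi.aware.DiscoverySessionCallback
import android.net.wifi.aware.PeerHandle
import android.net.wifi.aware.PublishConfig
import android.net.wifi.aware.PublishDiscoverySession
import android.net.wifi.aware.WifiAwareManager
import android.net.wifi.aware.WifiAwareSession

// Hypothetical service name, for illustration only; Meta's real
// glasses-to-phone protocol is not public.
private const val SERVICE_NAME = "com.example.glasses.media-sync"

// Requires Wi-Fi Aware hardware support plus the usual Wi-Fi and
// location/nearby-devices permissions.
fun startPublishing(context: Context) {
    val manager = context.getSystemService(Context.WIFI_AWARE_SERVICE)
        as? WifiAwareManager ?: return
    if (!manager.isAvailable) return // Wi-Fi Aware off or unsupported

    // Attach to the Wi-Fi Aware service, then advertise our sync service.
    manager.attach(object : AttachCallback() {
        override fun onAttached(session: WifiAwareSession) {
            val config = PublishConfig.Builder()
                .setServiceName(SERVICE_NAME)
                .build()
            session.publish(config, object : DiscoverySessionCallback() {
                override fun onPublishStarted(session: PublishDiscoverySession) {
                    // Peers can now discover this service and message us.
                }

                override fun onMessageReceived(peer: PeerHandle, message: ByteArray) {
                    // Handshake received; a real app would now negotiate a
                    // high-bandwidth data path for the media transfer.
                }
            }, null) // null handler: callbacks arrive on the main thread
        }
    }, null)
}
```

Apple's announcement, as Himel notes, would bring a comparable discovery-and-transfer path to iOS, which is why the team is watching it closely.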
Do you think the market for these new Oakley glasses will be as big as the Ray-Bans? Or is it more niche because they are more outdoors and athlete-focused?
We work with EssilorLuxottica, which is a great partner. Ray-Ban is their largest brand. Within that, the most popular style is the Wayfarer. When we launched the original Ray-Ban Meta glasses, we went with the most popular style for the most popular brand.
Their second biggest brand is Oakley. A lot of people wear them. The Holbrook is really popular. The HSTN, which is what we’re launching, is a really popular analog frame. We increasingly see people using the Ray-Ban Meta glasses for active use cases. This is our first step into the performance category. There’s more to come.
What’s your reaction to Google’s announcements at I/O for their XR glasses platform and eyewear partnerships?
We’ve been working with EssilorLuxottica for like five years now. That’s a long time for a partnership. It takes a while to get really in sync. I feel very good about the state of our partnership. We’re able to work quickly. The Oakley Meta glasses are the fastest program we’ve had by quite a bit. It took less than nine months.
I thought the demos they [Google] did were pretty good. I thought some of those were pretty compelling. They didn’t announce a product, so I can’t react specifically to what they’re doing. It’s flattering that people see the traction we’re getting and want to jump in as well.
On the AR glasses front, what have you been learning from Orion now that you’ve been showing it to the outside world?
We’ve been going full speed on that. We’ve actually hit some pretty good internal milestones for the next version of it, which is the one we plan to sell. The biggest learning from using them is that we feel increasingly good about the input and interaction model with eye tracking and the neural band. I wore mine during March Madness in the office. I was literally watching the games. Picture yourself sitting at a table with a virtual TV just above people’s heads. It was amazing.
- TikTok gets to keep operating illegally. As expected, President Trump extended his enforcement deadline for the law that has banned a China-owned TikTok in the US. It’s essential to understand what is really happening here: Trump is instructing his Attorney General not to enforce earth-shattering fines on Apple, Google, and every other American company that helps operate TikTok. The idea that he wouldn’t use this immense leverage to extract whatever he wants from these companies is naive, and this whole process makes a mockery of everyone involved, not to mention the US legal system.
- Amazon will hire fewer people because of AI. When you make an employee memo a press release, you’re trying to tell the whole world what’s coming. In this case, Amazon CEO Andy Jassy wants to make clear that he’s going to fully embrace AI to cut costs. Roughly 30 percent of Amazon’s code is already written by AI, and I’m sure Jassy is looking at human-intensive areas, such as sales and customer service, to further automate.
If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting.
As always, I welcome your feedback, especially if you’ve also turned down Zuck. You can respond here or ping me securely on Signal.
YouTube made its video player easier to navigate on TVs
The YouTube watch screen has been given a new look on TVs. The redesign aims to provide a “more intuitive experience with easier navigation,” according to YouTube’s announcement, relocating the video title and several controls, and adding a new “Description” button to access creator information and other video features.
I’m already seeing the update on my own Nvidia Shield Pro streaming box and my Philips TV’s native OS, and I do think it makes it easier to find specific video features and controls. My colleague Thomas Ricker says he isn’t seeing the redesign in Apple TV’s YouTube player, however, so it may still be rolling out. These changes are pretty delayed, considering YouTube announced in April that they would arrive “this summer.”
Videos on the YouTube app for TV will now show the title in the top left corner of the screen instead of just above the video scrubber at the bottom of the page, and the title can no longer be clicked to open comments, metadata, and information about the creator. Instead, those controls are now available by clicking the new “Description” button. The channel thumbnail and subscribe function have also been separated into two buttons, with the creator’s thumbnail now taking users directly to their channel.
Controls have been reorganized into distinct groups under the video scrubber: Channel, Description, and Subscribe on the left, Previous, Pause/Play, and Next in the center, and Like, Dislike, Comment, Save, Closed Captions, and Settings placed into two groups on the right. YouTube says the Subscribe button will remain visible to subscribers, adapting to flag pay-gated content or alert users to new live streams. A “Multiview” control has also been added for live sports content, while Music and Premium subscribers will see a new “Display Mode” control.
Android Emergency Live Video gives 911 eyes on the scene
Holiday travel and winter storms create risky moments for drivers and families. Stress rises fast during emergencies, and describing the scene to 911 can feel overwhelming.
Now, a new Android feature closes that gap by providing live visual information that helps responders act with speed and accuracy.
If you use an iPhone, Apple offers a similar tool through its Emergency SOS Live Video feature. You can learn how it works right here.
Android Emergency Live Video gives 911 a secure live view of the scene, so responders understand what is happening right away. (Cyberguy.com)
What Android Emergency Live Video does
Google is rolling out Android Emergency Live Video to give dispatchers a secure view of the scene during an active call or text. A dispatcher can request a live video stream through your phone when it is safe for you to share it. With a single tap, you can stream real-time video that helps responders understand what is happening.
This can help during car accidents, medical emergencies or fast-moving hazards such as wildfire conditions. Live video can also help dispatchers guide you through steps that save lives, such as CPR, until responders arrive.
How the Android Emergency Live Video feature works
Android designed this tool to work with no setup. When you call or text 911, the dispatcher reviews the situation. If they decide video would help, they will send a request to your phone. You see a clear prompt that lets you choose whether to start the secure stream. The feature uses encryption and gives you full control. You can stop sharing at any moment.
The feature works on Android phones running Android 8 or newer with Google Play services. It is rolling out across the U.S. and select regions in Germany and Mexico. Google plans to expand coverage with more public safety partners.
How to use Emergency Live Video on Android
You cannot turn this feature on in advance; it appears only during an active 911 call or text. The steps below walk through the flow, with a rough sketch of the consent model after them.
1) Call or text 911 on your Android phone. The dispatcher reviews your situation.
2) Watch for a request on your screen. If the dispatcher decides live video will help, they send a prompt to your device.
3) Tap the notification that appears. You will see a clear message asking if you want to share live video.
4) Choose Share video to start streaming. This opens your camera and begins a secure live feed.
5) Tap Stop sharing whenever you’re ready. You stay in control and can end the video at any point.
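Google has not published a developer API for Emergency Live Video, so the following Kotlin sketch is purely a conceptual model of the consent gate those steps describe. Every name in it is hypothetical; the point is only that video flows after an explicit user action and stops the moment consent is withdrawn.

```kotlin
// Hypothetical model of the consent flow described in the steps above.
// None of these types correspond to a real Android API; Google has not
// published a developer interface for Emergency Live Video.
sealed interface StreamState {
    object Idle : StreamState       // active 911 call or text, no video request yet
    object Requested : StreamState  // dispatcher has asked for video
    object Streaming : StreamState  // user tapped "Share video"
    object Stopped : StreamState    // user tapped "Stop sharing"
}

class EmergencyVideoSession {
    var state: StreamState = StreamState.Idle
        private set

    // Step 2: a dispatcher request only surfaces a prompt; nothing streams yet.
    fun onDispatcherRequest() {
        if (state == StreamState.Idle) state = StreamState.Requested
    }

    // Step 4: streaming begins only after the user's explicit tap.
    fun onUserAccept() {
        if (state == StreamState.Requested) state = StreamState.Streaming
    }

    // Step 5: consent can be revoked at any moment, ending the feed.
    fun onUserStop() {
        if (state == StreamState.Streaming) state = StreamState.Stopped
    }
}
```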
With one tap, you can choose to share real-time video during a 911 call or text, which gives dispatchers the clarity they need to guide you. (CyberGuy.com)
Why Emergency Live Video on Android matters now
Emergencies create confusion. Sharing details verbally takes time and can lead to miscommunication. Video removes guesswork. Responders gain clarity in seconds, which can speed up help and improve outcomes. This tool builds on Android’s safety features, including Satellite SOS, Fall Detection and Car Crash Detection.
Alastair Breeze, a software engineer for Android, tells CyberGuy that the team built this feature with one goal in mind. “Providing people peace of mind is at the core of Android’s safety mission. Android Emergency Live Video gives you the ability to securely share real-time video to provide dispatchers the critical eyes-on-scene context they need to assist in emergencies.”
What this means to you
If you carry an Android phone, this feature adds another layer of protection during moments that demand quick action. You stay in control of when the video is shared. You also get a simple way to show the situation when describing it feels impossible. Faster clarity can lead to faster help, which can shape how an emergency ends.
The feature works on Android phones running Android 8 or newer and helps responders act faster during emergencies when seconds matter. (Tony Giberson/tgiberson@pnj.com / USA TODAY)
Kurt’s key takeaways
Android Emergency Live Video brings real-time awareness to moments when every second matters. It gives responders a clear view, so they can guide you through urgent steps if necessary. Most of all, it adds peace of mind during situations no one plans for.
Would you feel comfortable sharing live video during an emergency if it helped responders reach you faster? Let us know by writing to us at Cyberguy.com.
The Game Awards 2025: all the news and announcements
The Game Awards are back once again to showcase a metric ton of commercials, provide the gaming public with their yearly dose of Muppets, and validate gamers’ opinions on which title should be named the Game of the Year. I don’t wanna say it’s a foregone conclusion what this year’s GOTY will be — Silksong may surprise us — but it’s pretty obvious that Clair Obscur: Expedition 33 is the frontrunner, and for good reason. It’s netted 12 nominations, the most out of this year’s contenders, including all five craft awards (Direction, Art, Music and Score, Narrative, and Audio Design).
On the announcements side, Crystal Dynamics and Amazon Games are planning something related to the Tomb Raider series. Host Geoff Keighley also probably had plans to reveal big news about Resident Evil: Requiem, but unfortunately it got spoiled early thanks to some leaked key art on the PlayStation Store. Here’s all the news, announcements, and trailers from The Game Awards 2025.