At this point, it’s becoming easier to say which AI startups Mark Zuckerberg hasn’t looked at acquiring.
Technology
Meta held talks to buy Thinking Machines, Perplexity, and Safe Superintelligence
In addition to Ilya Sutskever’s Safe Superintelligence (SSI), sources tell me the Meta CEO recently discussed buying ex-OpenAI CTO Mira Murati’s Thinking Machines Lab and Perplexity, the AI-native Google rival. None of these talks progressed to the formal offer stage for various reasons, including disagreements over deal prices and strategy, but together they illustrate how aggressively Zuckerberg has been canvassing the industry to reboot his AI efforts.
Now, details about the team Zuckerberg is assembling are starting to come into view: SSI co-founder and CEO Daniel Gross, along with ex-GitHub CEO Nat Friedman, are poised to co-lead the Meta AI assistant. Both men will report to Alexandr Wang, the former Scale CEO Zuckerberg just paid over $14 billion to quickly hire. Wang said goodbye to his Scale team last Friday and was in the Meta office on Monday. This week, he has been meeting with top Meta leaders (more on that below) and continuing to recruit for the new AI team Zuckerberg has tasked him with building. I expect the team to be unveiled as soon as next week.
Rather than join Meta, Sutskever, Murati, and Perplexity CEO Aravind Srinivas have all gone on to raise more money at higher valuations. Sutskever, a titan of the AI research community who co-founded OpenAI, recently raised a couple of billion dollars for SSI. Both Meta and Google are investors in his company, I’m told. Murati also just raised a couple of billion dollars. Neither she nor Sutskever is close to releasing a product. Srinivas, meanwhile, is in the process of raising around $500 million for Perplexity.
Spokespeople for all the companies involved either declined to comment or didn’t respond in time for publication. The Information and CNBC first reported Zuckerberg’s talks with Safe Superintelligence, while Bloomberg first reported the Perplexity talks.
While Zuckerberg’s recruiting drive is motivated by the urgency he feels to fix Meta’s AI strategy, the situation also highlights the fierce competition for top AI talent these days. In my conversations this week, those on the inside of the industry aren’t surprised by Zuckerberg making nine-figure — or even, yes, 10-figure — compensation offers for the best AI talent. There are certain senior people at OpenAI, for example, who are already compensated in that ballpark, thanks to the company’s meteoric increase in valuation over the last few years.
Speaking of OpenAI, it’s clear that CEO Sam Altman is at least a bit rattled by Zuckerberg’s hiring spree. His decision to appear on his brother’s podcast this week and say that “none of our best people” are leaving for Meta was probably meant to convey a position of strength, but in reality, it looks like he is throwing his former colleagues under the bus. I was confused by Altman’s suggestion that Meta paying a lot upfront for talent won’t “set up a great culture.” After all, didn’t OpenAI just pay $6.5 billion to hire Jony Ive and his small hardware team?
“We think that glasses are the best form factor for AI”
When I joined a Zoom call with Alex Himel, Meta’s VP of wearables, this week, he had just gotten off a call with Zuckerberg’s new AI chief, Alexandr Wang.
“There’s an increasing number of Alexes that I talk to on a regular basis,” Himel joked as we started our conversation about Meta’s new glasses release with Oakley. “I was just in my first meeting with him. There were like three people in a room with the camera real far away, and I was like, ‘Who is talking right now?’ And then I was like, ‘Oh, hey, it’s Alex.’”
The following Q&A has been edited for length and clarity:
How did your meeting with Alex just now go?
The meeting was about how to make AI as awesome as it can be for glasses. Obviously, there are some unique use cases in the glasses that aren’t stuff you do on a phone. The thing we’re trying to figure out is how to balance it all, because AI can be everything to everyone or it could be amazing for more specific use cases.
We’re trying to figure out how to strike the right balance because there’s a ton of stuff in the underlying Llama models and that whole pipeline that we don’t care about on glasses. Then there’s stuff we really, really care about, like egocentric view and trying to feed video into the models to help with some of the really aspirational use cases that we wouldn’t build otherwise.
You are referring to this new lineup with Oakley as “AI glasses.” Is that the new branding for this category? They are AI glasses, not smart glasses?
We refer to the category as AI glasses. You saw Orion. You used it for longer than anyone else in the demo, which I commend you for. We used to think that’s what you needed to hit scale for this new category. You needed the big field of view and display to overlay virtual content. Our opinion of that has definitely changed. We think we can hit scale faster, and AI is the reason we think that’s possible.
Right now, the top two use cases for the glasses are audio — phone calls, music, podcasts — and taking photos and videos. We look at participation rates of our active users, and those have been one and two since launch. Audio is one. A very close second is photos and videos.
AI has been number three from the start. As we’ve been launching more markets — we’re now in 18 — and we’ve been adding more features, AI is creeping up. Our biggest investment by a mile on the software side is AI functionality, because we think that glasses are the best form factor for AI. They are something you’re already wearing all the time. They can see what you see. They can hear what you hear. They’re super accessible.
Is your goal to have AI supersede audio and photo to be the most used feature for glasses, or is that not how you think about it?
From a math standpoint, at best, you could tie. We do want AI to be something that’s increasingly used by more people more frequently. We think there’s definitely room for the audio to get better. There’s definitely room for image quality to get better. The AI stuff has much more headroom.
How much of the AI is onboard the glasses versus the cloud? I imagine you have lots of physical constraints with this kind of device.
We’ve now got one-billion-parameter models that can run on the frame. So, increasingly, there’s stuff there. Then we have stuff running on the phone.
If you were watching WWDC, Apple made a couple of announcements that we haven’t had a chance to test yet, but we’re excited about. One is the Wi-Fi Aware APIs. We should be able to transfer photos and videos without having people tap that annoying dialogue box every time. That’d be great. The second one was processor background access, which should allow us to do image processing when you transfer the media over. Syncing would work just like it does on Android.
Do you think the market for these new Oakley glasses will be as big as the Ray-Bans? Or is it more niche because they are more outdoors and athlete-focused?
We work with EssilorLuxottica, which is a great partner. Ray-Ban is their largest brand. Within that, the most popular style is the Wayfarer. When we launched the original Ray-Ban Meta glasses, we went with the most popular style for the most popular brand.
Their second biggest brand is Oakley. A lot of people wear them. The Holbrook is really popular. The HSTN, which is what we’re launching, is a really popular analog frame. We increasingly see people using the Ray-Ban Meta glasses for active use cases. This is our first step into the performance category. There’s more to come.
What’s your reaction to Google’s announcements at I/O for their XR glasses platform and eyewear partnerships?
We’ve been working with EssilorLuxottica for like five years now. That’s a long time for a partnership. It takes a while to get really in sync. I feel very good about the state of our partnership. We’re able to work quickly. The Oakley Meta glasses are the fastest program we’ve had by quite a bit. It took less than nine months.
I thought the demos they [Google] did were pretty good. I thought some of those were pretty compelling. They didn’t announce a product, so I can’t react specifically to what they’re doing. It’s flattering that people see the traction we’re getting and want to jump in as well.
On the AR glasses front, what have you been learning from Orion now that you’ve been showing it to the outside world?
We’ve been going full speed on that. We’ve actually hit some pretty good internal milestones for the next version of it, which is the one we plan to sell. The biggest learning from using them is that we feel increasingly good about the input and interaction model with eye tracking and the neural band. I wore mine during March Madness in the office. I was literally watching the games. Picture yourself sitting at a table with a virtual TV just above people’s heads. It was amazing.
- TikTok gets to keep operating illegally. As expected, President Trump extended his enforcement deadline for the law that has banned a China-owned TikTok in the US. It’s essential to understand what is really happening here: Trump is instructing his Attorney General not to enforce earth-shattering fines on Apple, Google, and every other American company that helps operate TikTok. The idea that he wouldn’t use this immense leverage to extract whatever he wants from these companies is naive, and this whole process makes a mockery of everyone involved, not to mention the US legal system.
- Amazon will hire fewer people because of AI. When you make an employee memo a press release, you’re trying to tell the whole world what’s coming. In this case, Amazon CEO Andy Jassy wants to make clear that he’s going to fully embrace AI to cut costs. Roughly 30 percent of Amazon’s code is already written by AI, and I’m sure Jassy is looking at human-intensive areas, such as sales and customer service, to further automate.
If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting.
As always, I welcome your feedback, especially if you’ve also turned down Zuck. You can respond here or ping me securely on Signal.
Technology
This magazine plays Tetris — here’s how
Tetris has been immortalized in a playable McDonald’s plastic chicken nugget, a playable fake 7-Eleven Slurpee cup, and a playable wristwatch. But the most intriguing way to play Tetris yet is encased in paper.
Last year the Tetris Company partnered with Red Bull for a gaming tournament that culminated in the 150-meter-tall Dubai Frame landmark being turned into the world’s largest playable Tetris installation using over 2,000 drones that functioned as pixels. Although the timing was a coincidence, Red Bull also published a 180-page gaming edition of its The Red Bulletin lifestyle magazine around the same time as the event, with a limited number of copies wrapped in a less grandiose, but no less technically impressive, version of Alexey Pajitnov’s iconic puzzle game.
To create a playable gaming magazine, Red Bull Media House (the company’s media wing) enlisted the help of Kevin Bates, who in 2014 wowed the internet by creating an ultra-thin Tetris-playing business card. In 2015, he launched the $39 Arduboy, a credit card-sized, open-source handheld that attracted a thriving community of developers. Over the course of a decade, Bates also created a pair of equally pocketable Tetris-playing handhelds that cost less than $30, and the shrunken-down USB-C Arduboy Mini.
The GamePop GP-1 Playable Magazine System (as it’s officially called) is the latest evolution of Bates’ mission to use existing, accessible, and affordable technologies to reimagine what a portable gaming device can be. It took “most of last year” to develop, Bates revealed during a call with The Verge. He wouldn’t divulge the exact details of how his collaboration with Red Bull came to be. But if you’re looking to make an officially licensed version of Tetris that’s thin enough to flex, Bates has the experience, and he shared with us some of the technical details that make this creation work.
While OLED display technology has given us tablet-sized devices that fold into smartphones, they’re still expensive and fragile. To make a display that can survive being embedded in a flexible magazine cover without reinforcement, Bates created a custom matrix of 180 2mm RGB LEDs mounted to a flexible circuit board just 0.1mm thick. While the display and coin-cell batteries make it thicker in a few places — nearly 5mm at its thickest point — you genuinely feel like you’re playing a handheld made of paper. The flexible circuits are bonded between two sheets of paper to create the sleeve that wraps around the book-sized magazine, and it feels satisfyingly thin and flexible.
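Bates hasn’t published the GP-1’s firmware, but the numbers hint at the shape of the problem: 180 LEDs is exactly a 10-wide by 18-tall Tetris playfield. Below is a minimal, hypothetical sketch of how a game’s cell grid could be mapped onto a single chain of LEDs, assuming a serpentine wiring order; the actual wiring and driving scheme of the flexible board aren’t public, so treat this as index math, not a description of the real hardware.

```cpp
// Hypothetical illustration: mapping a 10 x 18 Tetris playfield onto a
// 180-LED chain wired in a serpentine (back-and-forth) pattern. The real
// GP-1 wiring and driver are not public; this shows only the index math.
#include <array>
#include <cstdio>

constexpr int kCols = 10;   // standard Tetris playfield width
constexpr int kRows = 18;   // 10 x 18 = 180 LEDs, matching the cover's count
constexpr int kLeds = kCols * kRows;

// Convert a (row, col) playfield cell to a position in the LED chain.
// Even rows run left-to-right, odd rows run right-to-left, so the flex
// traces can snake down the board without long return runs.
constexpr int ledIndex(int row, int col) {
    return (row % 2 == 0) ? row * kCols + col
                          : row * kCols + (kCols - 1 - col);
}

int main() {
    std::array<bool, kLeds> frame{};  // one on/off state per LED
    // Light a single "T" tetromino near the top of the field.
    const int piece[4][2] = {{0, 4}, {0, 5}, {0, 6}, {1, 5}};  // (row, col)
    for (const auto& cell : piece) frame[ledIndex(cell[0], cell[1])] = true;

    // Print the frame as ASCII; real hardware would push `frame` to the
    // LED chain here instead (one brightness or RGB value per LED).
    for (int r = 0; r < kRows; ++r) {
        for (int c = 0; c < kCols; ++c)
            std::putchar(frame[ledIndex(r, c)] ? '#' : '.');
        std::putchar('\n');
    }
    return 0;
}
```

On real hardware, the same ledIndex() arithmetic would feed whatever driver the 0.1mm flex board actually uses; printing the frame as ASCII just keeps the sketch self-contained and runnable.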
Flexible circuits aren’t a new idea. They’ve been used in electronics for decades. You can find them in flip phones old enough they now feel like antiques, and nearly every laptop. They’re also frequently used to miniaturize devices that don’t fold or flex at all, connecting internal components where space is extremely limited. But it’s only in the past five or six years that the technology has become available to smaller makers, and Bates says he’s been “messing around with the flexible circuits for about as much time.” This collaboration was an opportunity to use what he’s learned to create a device that would live outside his workshop.
The GamePop GP-1’s display resolution pales in comparison to the OLED screens used in folding phones, but Bates’ creation is far more durable. The game has not only undergone the typical safety tests; Bates even “hit it with a hammer a few times” to test its durability. His display survived. Don’t try that with a folding phone, which remains far more fragile.

Instead of buttons, the game uses seven capacitive touch sensors that are directly “printed in the copper layer of the board,” Bates says. There’s no true mechanical feedback, but the paper’s flex makes each pad feel a bit like a button when you press down. Bates says the responsiveness of the sensors was specifically tuned to account for the thickness of the paper stock and the glues used in the final print run. You’re not going to be chasing Tetris world records on the cover of a magazine, but the controls are satisfyingly responsive, and the game is, surprisingly, much easier to play than other Tetris devices I’ve tested.
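Bates didn’t share his tuning code, but the calibration he describes — sensitivity adjusted for how much signal a finger produces through a specific stack of paper and glue — can be sketched as a simple threshold with hysteresis. The class and numbers below are hypothetical placeholders for illustration, not the GP-1’s firmware.

```cpp
// Hypothetical illustration of debouncing a capacitive pad under paper:
// raw readings rise when a finger loads the pad, and two thresholds
// (press vs. release) turn noisy samples into a clean pressed state.
// The actual GP-1 sensing hardware and tuning values are not public.
#include <cstdio>

class TouchPad {
public:
    // Thresholds would be tuned per print run for paper and glue thickness,
    // as Bates describes; these values are placeholders.
    explicit TouchPad(int pressAt = 120, int releaseAt = 80)
        : pressAt_(pressAt), releaseAt_(releaseAt) {}

    // Feed one raw capacitance sample; returns true while the pad is "down".
    bool update(int raw) {
        if (!down_ && raw > pressAt_)   down_ = true;   // finger detected
        if (down_  && raw < releaseAt_) down_ = false;  // finger lifted
        return down_;
    }

private:
    int pressAt_;
    int releaseAt_;
    bool down_ = false;
};

int main() {
    TouchPad rotate;  // e.g. the pad that rotates the falling piece
    const int samples[] = {40, 60, 130, 140, 100, 90, 70, 50};  // fake data
    for (int raw : samples)
        std::printf("raw=%3d pressed=%d\n", raw, rotate.update(raw));
    return 0;
}
```

The gap between the two thresholds is what keeps a pad from chattering when a reading hovers right at the trigger point — which matters when the “button” is a copper pad buried under paper.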

How much does a flexible Tetris game cost to manufacture? Neither Bates nor Red Bull would divulge the total price tag for all the off-the-shelf and custom components you’ll find sandwiched inside the magazine’s cover. But to help keep costs down, not all components are flexible. Inside the edge of the cover, next to the magazine’s spine, you’ll find a long but thin rigid PCB where an ARM-based 32-bit microprocessor is located, along with four rechargeable LIR2016 3V coin cell batteries.

Like most devices now, the game can be recharged using a USB-C cable, but it’s not immediately obvious where. Hidden along the bottom edge of the magazine’s cover is a deconstructed USB-C port. Instead of a metal ring, its socket is a small paper pocket containing a pin-covered head inside. It doesn’t feel quite as durable as the charging port on your phone, but it’s a welcome alternative to making the game disposable when the batteries die.
Bates did have to cut some corners. The GamePop GP-1 saves high scores, but modern Tetris gameplay features, like previews of upcoming pieces and the ability to hold a tetromino for later, aren’t included. There are sound effects, but when starting a game you only hear a small snippet of the iconic Tetris theme. The game’s piezo speaker “uses about as much energy as it does to run the rest of the system,” Bates says, so limiting its use helps prolong the life of the small rechargeable batteries. He tells us you can play for an hour or two that way, and the battery should last many months when not in use.
Red Bull made around 1,000 copies of the magazine. It’s only available online in Europe, but can also be found in some stores and newsstands, including Iconic Magazines in New York and Rare Mags outside Manchester in the UK. However, only 150 copies with the playable cover were produced, and none were made available to the public. They were distributed to Tetris competitors, those featured in the magazine, influencers, and select media.
The playable cover isn’t going to revolutionize the print industry, or pave the way for smartphones we can roll up and stick in our back pockets. The goal was to use existing tech in a way that gamers haven’t seen before.
Photography by Andrew Liszewski / The Verge
Technology
Waymo’s cheaper robotaxi tech could help expand rides fast
If you live in cities like San Francisco, Phoenix, Los Angeles, Austin or Atlanta, you may have already seen or even taken a ride in a driverless Waymo operating without a human behind the wheel. In newer markets like Miami, service is rolling out, while other cities, including Dallas, Houston, San Antonio and Orlando, are part of Waymo’s expansion plans.
For everyone else, not so much. At least not yet. For most of us, that still feels like something happening somewhere else, not something that pulls up when you request a ride.
However, that could start to change very soon. Waymo just unveiled its sixth-generation Waymo Driver hardware, and the headline is simple: it costs less and fits into more vehicles. That combination could help driverless rides reach a lot more cities, faster than you and I might expect.
Waymo’s new sixth-generation hardware will first roll out in the Zeekr-built Ojai minivan before expanding to more vehicles and cities. (Waymo)
Why Waymo’s cheaper robotaxi hardware changes the game
Until recently, if you spotted a Waymo on the road, it was usually a Jaguar I-Pace. Nice car. Not exactly built for a massive robotaxi rollout. The sixth-generation system changes that. The first vehicle to carry the new hardware is the Zeekr-built Ojai electric minivan. Zeekr is owned by Geely. Waymo employees in Los Angeles and San Francisco will begin fully autonomous rides in it soon, with public access expected to follow. In these new deployments, Waymo says the vehicles will operate without safety drivers behind the wheel. After that, the hardware will also power versions of the Hyundai Ioniq 5.
Here is where this really matters. When Waymo can install the same system across multiple vehicle types and produce it at a lower cost, expansion becomes much easier. The company says it plans to move into 20 additional cities this year and is ramping up its Metro Phoenix facility to build tens of thousands of Driver kits annually.
Waymo says it has shifted more processing power into its own custom silicon chips, allowing it to use fewer cameras while improving performance and reducing overall system cost. More vehicles and lower costs mean one thing: a better chance that driverless rides show up in your city sooner rather than later.
How the Waymo Driver actually sees the road
If you have never been in a robotaxi, this is the part you are probably wondering about. The sixth-generation Waymo Driver uses 16 high-resolution, 17-megapixel cameras, short-range lidar, radar and external audio receivers. Waymo says the updated cameras offer improved dynamic range compared to the previous 29-camera setup. That helps the vehicle perform better at night and in bright glare.
Short-range lidar delivers centimeter-level accuracy to detect pedestrians, cyclists, and other road users. Radar adds another layer of awareness. Waymo says its upgraded imaging radar can track distance, speed and object size even in rain or snow, giving the system more time to react. External audio receivers can detect sirens or trains by sound.
Unlike Tesla, which has emphasized camera-based systems, Waymo relies on multiple overlapping technologies. If one sensor struggles, another can support it. There is also a cleaning system for key sensors. Snow, dirt, or road spray should not easily block visibility.
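As a deliberately simplified illustration of that “overlapping sensors” idea — and nothing like Waymo’s actual fusion stack — imagine each modality reporting a health flag and a confidence, with the system keeping the best reading from whichever sensors are still healthy:

```cpp
// Toy model of sensor redundancy: each sensor reports whether it is healthy
// and how confident it is that an object is present; one healthy modality is
// enough to keep the detection alive. For illustration only — this is not
// Waymo's fusion logic, which is far more sophisticated.
#include <cstdio>
#include <string>
#include <vector>

struct SensorReport {
    std::string name;    // "camera", "lidar", "radar", "audio"
    bool healthy;        // false if blinded by glare, spray, snow, etc.
    double confidence;   // 0.0 - 1.0 that an object is present
};

// Returns the best confidence among sensors that are still healthy.
double fusedConfidence(const std::vector<SensorReport>& reports) {
    double best = 0.0;
    for (const auto& r : reports)
        if (r.healthy && r.confidence > best) best = r.confidence;
    return best;
}

int main() {
    // Camera washed out by glare, but lidar and radar still see the cyclist.
    std::vector<SensorReport> reports = {
        {"camera", false, 0.10},
        {"lidar",  true,  0.92},
        {"radar",  true,  0.85},
    };
    std::printf("fused confidence: %.2f\n", fusedConfidence(reports));
    return 0;
}
```

In this toy model, a camera blinded by glare doesn’t take the cyclist with it, because the lidar and radar reports still carry the detection.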
Waymo says this version is designed to operate in more extreme weather, including heavy winter conditions, which could open the door to colder U.S. cities that were previously harder to support.
The Waymo Driver blends high-resolution cameras, lidar and radar to create a 360-degree view of the road, even at night or in bad weather. (Waymo)
Why you probably haven’t seen a Waymo robotaxi yet
Right now, Waymo has about 1,500 vehicles on the road. That sounds like a lot until you compare it to the millions of cars in the U.S. The company wants to grow that number to around 3,500 this year and eventually into the tens of thousands. Still, service is limited to certain parts of certain cities. If you do not live in one of those areas, you are simply not going to see one.
That is why this new hardware matters. When the system costs less and fits into more vehicles, Waymo can put more cars on the road in more places. This is not about adding flashy features or cool upgrades. It is about getting from a small footprint to something that feels normal in everyday life.
What about safety and past incidents?
Whenever driverless cars expand, safety questions come right with them. Waymo says its system is built with multiple layers of redundancy. The sixth-generation Driver combines cameras, lidar, radar and audio detection so the vehicle is not relying on a single sensor. That layered setup is designed to reduce risk if one system has trouble. The company says this latest system builds on nearly 200 million fully autonomous miles driven across more than 10 major cities, including dense urban cores and freeways.
Even so, incidents have happened. Earlier this year, a Waymo vehicle was involved in an accident that injured a child, which raised fresh concerns about how autonomous vehicles respond in complex real-world situations. Regulators continue to monitor autonomous vehicle performance closely, especially in states like California, where reporting requirements are strict.
Waymo has also released data suggesting its vehicles experience fewer injury-causing crashes per mile compared to human drivers in similar areas. Supporters argue that reducing human error could improve road safety over time. Critics say expanding too quickly could introduce new risks.
Both things can be true. The technology is advancing, but public trust will depend on transparency, accountability and long-term safety performance.
What this means to you
If Waymo expands into your city, you may soon open a rideshare app and see a new option. No driver. No conversation. Just a vehicle that navigates using software and sensors.
More vehicles could mean shorter wait times in busy areas. Increased competition may also affect pricing in the rideshare market. At the same time, comfort levels vary. Many riders may hesitate before stepping into a car with an empty front seat. This shift is about more than technology. It changes how people commute, travel and move around urban areas.
With lower costs and broader vehicle compatibility, Waymo hopes to put many more driverless cars on real city streets soon. (Waymo)
Kurt’s key takeaways
Waymo’s sixth-generation Driver is really about one thing: getting more driverless cars on the road, in more cities, at a lower cost. When the hardware becomes cheaper and easier to install in different vehicles, expansion gets easier. That does not automatically mean everyone will be comfortable hopping in. For many people, sitting in a car with no driver might still feel a bit scary. The technology is moving forward whether we are ready or not. The bigger question is simple: will we feel confident enough to get in?
If you had to choose today, would you book the driverless ride or wait for a human behind the wheel? Let us know by writing to us at Cyberguy.com
Technology
Arturia’s FX Collection 6 adds two new effects and a $99 intro version
Arturia launched a new version of its flagship effects suite, FX Collection, which includes two new plugins, EFX Ambient and Pitch Shifter-910. FX Collection 6 also marks the introduction of an Intro version with a selection of six effects covering the basics for $99. That pales in comparison to the 39 effects in the full FX Collection Pro, but that also costs $499.
Pitch Shifter-910 is based on the iconic Eventide H910 Harmonizer from 1974, an early digital pitch shifter and delay with a distinctive character. Arturia does an admirable job preserving its glitchy quirks. Pitch Shifter-910 is not a transparent effect that lets you create natural-sounding harmonies with yourself. Instead, it relishes in its weirdness, delivering chipmunk vocals at the higher ranges. There is also a more modern mode that cleans up some artifacts while preserving what makes the 910 so special. Though if you ask me, it also takes some of the fun and unpredictability out.
EFX Ambient is the other new addition to Arturia’s lineup, and it’s a weird one. While it does what it says on the tin, it doesn’t always do it in predictable ways. Sure, there are plenty of big, ethereal reverbs and shimmer, but there are also resonators, glitch processing, and reverse delays. It has six distinct modes with unique characteristics, which it feeds through a big washy reverb. And there’s an X/Y control in the middle for adding movement to your sound.
Neither of the brand-new effects made the cut for the Intro version. FX Collection 6 Intro includes Efx Motions, Efx Fragments, Mix Drums, Tape Mello-Fi, Rev Plate-140, and Delay Tape-201. That offers excellent versatility, covering delay, reverb, tape-like lo-fi, modulation, and even granular processing. Primarily, what you miss out on are some of the saturation and mixing effects, such as the bus processors and compressors, as well as the more specialty flavors of delay and reverb, like Rev LX-24, based on the Lexicon 224 from 1978.
$499 for the full FX Collection 6 Pro might seem steep, but as the company has grown the lineup from 15 effects in 2020 to 39 in 2026, it’s become a more attractive value proposition. And, while it’s not quite as highly regarded as Arturia’s V Collection of soft synths, it’s building a reputation for high-quality effects.