
Meta held talks to buy Thinking Machines, Perplexity, and Safe Superintelligence

At this point, it’s becoming easier to say which AI startups Mark Zuckerberg hasn’t looked at acquiring.

In addition to Ilya Sutskever’s Safe Superintelligence (SSI), sources tell me the Meta CEO recently discussed buying ex-OpenAI CTO Mira Murati’s Thinking Machines Lab and Perplexity, the AI-native Google rival. None of these talks progressed to the formal offer stage for various reasons, including disagreements over deal prices and strategy, but together they illustrate how aggressively Zuckerberg has been canvassing the industry to reboot his AI efforts.

Now, details about the team Zuckerberg is assembling are starting to come into view: SSI co-founder and CEO Daniel Gross, along with ex-GitHub CEO Nat Friedman, are poised to co-lead the Meta AI assistant. Both men will report to Alexandr Wang, the former Scale CEO Zuckerberg just paid over $14 billion to quickly hire. Wang said goodbye to his Scale team last Friday and was in the Meta office on Monday. This week, he has been meeting with top Meta leaders (more on that below) and continuing to recruit for the new AI team Zuckerberg has tasked him with building. I expect the team to be unveiled as soon as next week.

Rather than join Meta, Sutskever, Murati, and Perplexity CEO Aravind Srinivas have all gone on to raise more money at higher valuations. Sutskever, a titan of the AI research community who co-founded OpenAI, recently raised a couple of billion dollars for SSI. Both Meta and Google are investors in his company, I’m told. Murati also just raised a couple of billion dollars. Neither she nor Sutskever is close to releasing a product. Srinivas, meanwhile, is in the process of raising around $500 million for Perplexity.

Spokespeople for all the companies involved either declined to comment or didn’t respond in time for publication. The Information and CNBC first reported Zuckerberg’s talks with Safe Superintelligence, while Bloomberg first reported the Perplexity talks.

While Zuckerberg’s recruiting drive is motivated by the urgency he feels to fix Meta’s AI strategy, the situation also highlights the fierce competition for top AI talent these days. In my conversations this week, those on the inside of the industry aren’t surprised by Zuckerberg making nine-figure — or even, yes, 10-figure — compensation offers for the best AI talent. There are certain senior people at OpenAI, for example, who are already compensated in that ballpark, thanks to the company’s meteoric increase in valuation over the last few years.

Speaking of OpenAI, it’s clear that CEO Sam Altman is at least a bit rattled by Zuckerberg’s hiring spree. His decision to appear on his brother’s podcast this week and say that “none of our best people” are leaving for Meta was probably meant to convey a position of strength, but in reality, it looks like he is throwing his former colleagues under the bus. I was confused by Altman’s suggestion that Meta paying a lot upfront for talent won’t “set up a great culture.” After all, didn’t OpenAI just pay $6.5 billion to hire Jony Ive and his small hardware team?

“We think that glasses are the best form factor for AI”

When I joined a Zoom call with Alex Himel, Meta’s VP of wearables, this week, he had just gotten off a call with Zuckerberg’s new AI chief, Alexandr Wang.

“There’s an increasing number of Alexes that I talk to on a regular basis,” Himel joked as we started our conversation about Meta’s new glasses release with Oakley. “I was just in my first meeting with him. There were like three people in a room with the camera real far away, and I was like, ‘Who is talking right now?’ And then I was like, ‘Oh, hey, it’s Alex.’”

The following Q&A has been edited for length and clarity:

How did your meeting with Alex just now go?

The meeting was about how to make AI as awesome as it can be for glasses. Obviously, there are some unique use cases in the glasses that aren’t stuff you do on a phone. The thing we’re trying to figure out is how to balance it all, because AI can be everything to everyone, or it can be amazing for more specific use cases.

We’re trying to figure out how to strike the right balance because there’s a ton of stuff in the underlying Llama models and that whole pipeline that we don’t care about on glasses. Then there’s stuff we really, really care about, like egocentric view and trying to feed video into the models to help with some of the really aspirational use cases that we wouldn’t build otherwise.

You are referring to this new lineup with Oakley as “AI glasses.” Is that the new branding for this category? They are AI glasses, not smart glasses?

We refer to the category as AI glasses. You saw Orion. You used it for longer than anyone else in the demo, which I commend you for. We used to think that’s what you needed to hit scale for this new category. You needed the big field of view and display to overlay virtual content. Our opinion of that has definitely changed. We think we can hit scale faster, and AI is the reason we think that’s possible.

Right now, the top two use cases for the glasses are audio — phone calls, music, podcasts — and taking photos and videos. We look at participation rates of our active users, and those have been one and two since launch. Audio is one. A very close second is photos and videos.

AI has been number three from the start. As we’ve been launching more markets — we’re now in 18 — and we’ve been adding more features, AI is creeping up. Our biggest investment by a mile on the software side is AI functionality, because we think that glasses are the best form factor for AI. They are something you’re already wearing all the time. They can see what you see. They can hear what you hear. They’re super accessible.

Is your goal to have AI supersede audio and photo to be the most used feature for glasses, or is that not how you think about it?

From a math standpoint, at best, you could tie. We do want AI to be something that’s increasingly used by more people more frequently. We think there’s definitely room for the audio to get better. There’s definitely room for image quality to get better. The AI stuff has much more headroom.

How much of the AI is onboard the glasses versus the cloud? I imagine you have lots of physical constraints with this kind of device.

We’ve now got 1-billion-parameter models that can run on the frame. So, increasingly, there’s stuff there. Then we have stuff running on the phone.

If you were watching WWDC, Apple made a couple of announcements that we haven’t had a chance to test yet, but we’re excited about. One is the Wi-Fi Aware APIs. We should be able to transfer photos and videos without having people tap that annoying dialog box every time. That’d be great. The second one was processor background access, which should allow us to do image processing when you transfer the media over. Syncing would work just like it does on Android.

Do you think the market for these new Oakley glasses will be as big as the Ray-Bans? Or is it more niche because they are more outdoors and athlete-focused?

We work with EssilorLuxottica, which is a great partner. Ray-Ban is their largest brand. Within that, the most popular style is the Wayfarer. When we launched the original Ray-Ban Meta glasses, we went with the most popular style for the most popular brand.

Their second biggest brand is Oakley. A lot of people wear them. The Holbrook is really popular. The HSTN, which is what we’re launching, is a really popular analog frame. We increasingly see people using the Ray-Ban Meta glasses for active use cases. This is our first step into the performance category. There’s more to come.

What’s your reaction to Google’s announcements at I/O for their XR glasses platform and eyewear partnerships?

We’ve been working with EssilorLuxottica for like five years now. That’s a long time for a partnership. It takes a while to get really in sync. I feel very good about the state of our partnership. We’re able to work quickly. The Oakley Meta glasses are the fastest program we’ve had by quite a bit. It took less than nine months.

I thought the demos they [Google] did were pretty good. I thought some of those were pretty compelling. They didn’t announce a product, so I can’t react specifically to what they’re doing. It’s flattering that people see the traction we’re getting and want to jump in as well.

On the AR glasses front, what have you been learning from Orion now that you’ve been showing it to the outside world?

We’ve been going full speed on that. We’ve actually hit some pretty good internal milestones for the next version of it, which is the one we plan to sell. The biggest learning from using them is that we feel increasingly good about the input and interaction model with eye tracking and the neural band. I wore mine during March Madness in the office. I was literally watching the games. Picture yourself sitting at a table with a virtual TV just above people’s heads. It was amazing.

  • TikTok gets to keep operating illegally. As expected, President Trump extended his enforcement deadline for the law that has banned a China-owned TikTok in the US. It’s essential to understand what is really happening here: Trump is instructing his Attorney General not to enforce earth-shattering fines on Apple, Google, and every other American company that helps operate TikTok. The idea that he wouldn’t use this immense leverage to extract whatever he wants from these companies is naive, and this whole process makes a mockery of everyone involved, not to mention the US legal system.
  • Amazon will hire fewer people because of AI. When you make an employee memo a press release, you’re trying to tell the whole world what’s coming. In this case, Amazon CEO Andy Jassy wants to make clear that he’s going to fully embrace AI to cut costs. Roughly 30 percent of Amazon’s code is already written by AI, and I’m sure Jassy is looking at human-intensive areas, such as sales and customer service, to further automate.

If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting.

As always, I welcome your feedback, especially if you’ve also turned down Zuck. You can respond here or ping me securely on Signal.


xAI explains the Grok Nazi meltdown as Tesla puts Elon’s bot in its cars

Several days after temporarily shutting down the Grok AI bot that was producing antisemitic posts and praising Hitler in response to user prompts, Elon Musk’s AI company tried to explain why that happened. In a series of posts on X, it said that “…we discovered the root cause was an update to a code path upstream of the @grok bot. This is independent of the underlying language model that powers @grok.”

On the same day, Tesla announced a new 2025.26 update rolling out “shortly” to its electric cars, which adds the Grok assistant to vehicles equipped with AMD-powered infotainment systems, which have been available since mid-2021. According to Tesla, “Grok is currently in Beta & does not issue commands to your car – existing voice commands remain unchanged.” As Electrek notes, this should mean that whenever the update does reach customer-owned Teslas, it won’t be much different than using the bot as an app on a connected phone.

This isn’t the first time the Grok bot has had these kinds of problems or similarly explained them. In February, it blamed a change made by an unnamed ex-OpenAI employee for the bot disregarding sources that accused Elon Musk or Donald Trump of spreading misinformation. Then, in May, it began inserting allegations of white genocide in South Africa into posts about almost any topic. The company again blamed an “unauthorized modification,” and said it would start publishing Grok’s system prompts publicly.

xAI claims that a change on Monday, July 7th, “triggered an unintended action” that added an older series of instructions to its system prompts telling it to be “maximally based,” and “not afraid to offend people who are politically correct.”

The prompts are separate from the ones we noted were added to the bot a day earlier, and both sets are different from the ones the company says are currently in operation for the new Grok 4 assistant.

These are the prompts specifically cited as connected to the problems:

* “You tell it like it is and you are not afraid to offend people who are politically correct.”
* “Understand the tone, context and language of the post. Reflect that in your response.”
* “Reply to the post just like a human, keep it engaging, dont repeat the information which is already present in the original post.”

The xAI explanation says those lines caused the Grok AI bot to break from other instructions that are supposed to prevent these types of responses, and instead produce “unethical or controversial opinions to engage the user,” as well as “reinforce any previously user-triggered leanings, including any hate speech in the same X thread,” and prioritize sticking to earlier posts from the thread.


Solar-powered robot zaps weeds without chemicals

Out in the California sun, a new kind of farmhand is hard at work. Powered by solar energy and guided by artificial intelligence, a new weeding robot for cotton fields is offering farmers a smarter, more sustainable way to tackle weeds.

This technology is arriving just in time, as growers across the country face a shortage of available workers and weeds that are becoming increasingly resistant to herbicides.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM/NEWSLETTER

Solar-powered Element robot  (Aigen)

Why farmers need alternatives to herbicides and manual labor

Farmers everywhere are facing a tough reality. There simply aren’t enough people willing to do the backbreaking work of weeding fields, and the weeds themselves are getting harder to kill with chemicals. Many farmers would rather avoid using herbicides, but until now, they haven’t had a practical alternative. Kenny Lee, CEO of Aigen, puts it plainly: farmers don’t love chemicals, but they use them because it’s often the only tool available. Aigen’s mission is to give them a better choice.

How Aigen’s solar-powered weeding robot uses AI to fight weeds

Aigen’s Element robot is designed to meet the real-world needs of modern agriculture. It runs entirely on solar power, which means farmers can save money on fuel while also reducing their environmental impact. The robot uses advanced AI and onboard cameras to spot and remove weeds with impressive accuracy, all without damaging the crops. Its rugged design allows it to handle rough terrain and changing weather, and it can work alongside other robots, communicating wirelessly to cover large fields efficiently. The Element robot isn’t limited to cotton; it’s also being used in soy and sugar beet fields, showing just how versatile this technology can be.

Solar-powered Element robot  (Aigen)

Real-world results: Aigen’s robot at work on California cotton farms

At Bowles Farm in California’s Central Valley, Element robots are already proving their worth. These robots are keeping cotton fields weed-free without the need for chemicals, freeing up workers to focus on more skilled tasks and helping farmers manage their operations more efficiently. The technology is not just a promise for the future. It’s delivering real results today. 

Top benefits of solar-powered weeding robots for sustainable farming

Switching to solar-powered, AI-driven robots brings a host of benefits. Farmers no longer need to rely on herbicides, which leads to cleaner crops and healthier soil. Labor costs can drop since workers can shift from manual weeding to supervising and maintaining the robots. The robots also collect valuable data on crop health, pests and diseases, giving farmers better information to make decisions. And because the robots run on solar power, farms can reduce their carbon footprint while saving money on energy.

Solar-powered Element robots  (Aigen)

Kurt’s key takeaways

Aigen’s Element robot goes beyond being just another cool piece of technology. It really shows what can happen when farming and innovation come together. As more growers start using solar-powered robots like this, chemical-free fields are moving from wishful thinking to something we can actually achieve.

Would you feel comfortable trusting a robot to handle important tasks and help shape the future of how we grow our food? Let us know by writing to us at Cyberguy.com/Contact 

Copyright 2025 CyberGuy.com.  All rights reserved.  

It’s the final day of Prime Day 2025, and the deals are still live

Editor’s note: That’s a wrap, folks! As Prime Day 2025 draws to a close, we’ll no longer be updating this article with additional deals and insights. Plenty of great deals remain, however, so be sure to check out all of our Prime Day coverage for anything you may have missed.

There are mere hours left of Amazon’s extended Prime Day extravaganza. And, yeah, we’re a little exhausted, but after days of lightning deals and all-time low prices, these discounts won’t be around for much longer. So, if you’ve been hesitant to jump on these laptop deals before heading back to school, now’s your time to act. Typically, Prime Day is your last opportunity to take advantage of bottom-dollar prices until Black Friday / Cyber Monday, so it may be a while before you see prices plummet on a gadget you’re interested in buying.

Really, there’s an overwhelming amount of Prime Day deals, so to make things easier to navigate, we’ve organized all of our favorites by category below. That will allow you to quickly find exactly what you’re looking for — or even uncover a deal on something you didn’t know you wanted.

Tablet and e-reader deals

Soundbar and Bluetooth speaker deals

Verge favorites and other miscellaneous deals

Update, July 11th: Added several more deals, including those for Final Fantasy VII Rebirth, Razer’s Kishi Ultra mobile controller, and Amazon’s Fire TV Soundbar Plus.
