The humble screenshot might be the key to great AI assistants

If you want to make the most out of a world increasingly filled with AI tools, here’s a habit to develop: start taking screenshots. Lots of screenshots. Of anything and everything. Because for all the talk of voice modes, omnipresent cameras, and the multimodal future of everything, there might be no more valuable digital behavior than to press the buttons and save what you’re looking at.

Screenshots are the most universal method of capturing digital information. You can capture anything — well, almost anything, thanks a lot, Netflix! — with a few clicks, and save and share it to almost any device, app, or person. “It’s this portable data format,” says Johnny Bree, the founder of the digital storage app Fabric. “There’s nothing else that’s quite so portable that you can move between any piece of software.”

A screenshot contains a lot of information, like its source, its contents, and even the time of day in the corner of the screen. Most of all, it sends a crucial and complex signal: it says I care about this. We have countless new AI tools that aim to watch the world, our lives, and everything, and try to make sense of it all for us. These tools are mostly crap for lots of reasons, but chiefly because AI is pretty good at knowing what things are and rubbish at knowing whether they matter. A screenshot assigns value and tells the system it needs to pay attention.

Screenshots also put you, the user, in control in an important way. “If I give you access to all of my emails, all my WhatsApps, everything, there’s a lot of noise,” says Mattias Deserti, the head of smartphone marketing at Nothing. There’s simply no reason to save every email you receive or every webpage you visit — and that’s to say nothing of the privacy implications. “So what if, instead, you were able to start training the system yourself, feeding the system the information you want the system to know about you?” Rather than a tool like Microsoft Recall, which asks for unlimited access to everything, starting with screenshots lets you pick what you share.

Until now, screenshots have been a fairly blunt instrument. You snap one, and it gets saved to your camera roll, where it probably languishes, forgotten, until the end of time. (And don’t get me started on all the screenshots I take by accident, mostly of my lock screen.) At best, you might be able to search for some text inside the image. But it’s more likely that you’ll just have to scroll until you find it again.


The first step in making screenshots more useful is to figure out what’s actually in them. This is, at first blush, not terribly complicated: optical character recognition technology has long done a good job of spotting text on a page. AI models take that a step further, so you can search for a specific title, or just “movies,” to find all your digital snaps of posters, Fandango results, TikTok recommendations, and more. “We use an OCR model,” says Shenaz Zack, a product manager at Google and part of the team behind the Pixel Screenshots app. “Then we use an entity-detection model, and then Gemini to understand the actual context of the screen.”
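
To make that concrete, here’s a rough sketch of what a three-stage pipeline like the one Zack describes could look like. It is purely illustrative, not Google’s implementation: the pytesseract OCR call is a real library, but the regex “entity detection” and the prompt-building step are simple stand-ins for the trained models a real system would use.

```python
# Hypothetical three-stage screenshot pipeline: OCR, entity detection,
# then a context prompt for a language model. A sketch only.
import re

from PIL import Image
import pytesseract  # real OCR wrapper; requires the Tesseract binary


def extract_text(path: str) -> str:
    # Stage 1: OCR. Pull the raw text out of the screenshot.
    return pytesseract.image_to_string(Image.open(path))


def detect_entities(text: str) -> dict:
    # Stage 2: entity detection. A real system would use a trained
    # model; simple regexes stand in for one here.
    return {
        "dates": re.findall(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", text),
        "urls": re.findall(r"https?://\S+", text),
        "prices": re.findall(r"\$\d+(?:\.\d{2})?", text),
    }


def build_context_prompt(text: str, entities: dict) -> str:
    # Stage 3: hand text plus entities to a language model (Gemini,
    # in Google's case) to infer what the screen actually is.
    return (
        "Given this screenshot text and these extracted entities, "
        "describe what the user captured and why it might matter.\n"
        f"Text: {text}\nEntities: {entities}"
    )
```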

See, there’s far more to a screenshot than just the text inside. The right AI model should be able to tell that it came from WhatsApp, just by the specific green color. It should be able to identify a website by its header logo or understand when you’re saving a Spotify song name, a Yelp handyman review, or an Amazon listing. Armed with this information, a screenshot app might begin to automatically organize all those images for you. And even that is just the beginning.
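
As a toy illustration of that source-detection idea, a system could start with something as crude as a dominant-color check. The brand colors and threshold below are rough guesses for illustration only; a real classifier would also match logos, layout, and fonts.

```python
# Toy source detection: guess which app a screenshot came from by its
# dominant color. Illustrative only; real systems also match logos.
from PIL import Image

# Approximate brand colors (RGB) -- rough, unofficial values.
BRAND_COLORS = {
    "WhatsApp": (18, 140, 126),
    "Spotify": (30, 215, 96),
    "Amazon": (255, 153, 0),
}


def dominant_color(path: str) -> tuple:
    img = Image.open(path).convert("RGB").resize((64, 64))
    # getcolors returns [(count, (r, g, b)), ...]; max() picks the
    # most frequent color.
    return max(img.getcolors(maxcolors=64 * 64))[1]


def guess_source(path: str, tolerance: int = 60) -> str:
    r, g, b = dominant_color(path)
    for app, (br, bg, bb) in BRAND_COLORS.items():
        if abs(r - br) + abs(g - bg) + abs(b - bb) < tolerance:
            return app
    return "unknown"
```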

With everything I’ve described so far, all we’ve really created is a very good app for looking at your screenshots, which no one really thinks is a good idea because it would be just one more thing to check — or forget to check. Where it gets vastly more interesting is when your device or app can actually start to use the screenshots on your behalf, to help you actually remember what you captured or even use that information to get stuff done.

Nothing’s new Essential Space app, for instance, can generate reminders based on stuff you save. If you take a screenshot of a concert you’d like to go to, it can automatically remind you when the show is coming up. Pixel Screenshots is pushing the idea even further: if you save a concert listing, your Pixel phone can prompt you to listen to that band the next time you open Spotify. If you screenshot an ID card or a boarding pass, it might ask you to put it in the Wallet app. The idea, Zack says, is to think of screenshots as an input system for everything else.
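
Conceptually, that “input system” boils down to mapping detected screenshot categories to follow-up actions. The sketch below is hypothetical: the category names and suggested actions are assumptions, not how Pixel Screenshots or Essential Space actually work.

```python
# Hypothetical dispatch from a screenshot's detected category to a
# follow-up action. Categories and actions are illustrative only.
def suggest_action(category: str, entities: dict) -> str:
    if category == "concert_listing":
        date = entities.get("date", "the event date")
        return f"Set a reminder for {date} and suggest the artist in Spotify."
    if category == "boarding_pass":
        return "Offer to add this pass to the Wallet app."
    if category == "id_card":
        return "Offer to store this securely for quick retrieval."
    return "File under general screenshots."


# Example: a concert listing captured from a browser.
print(suggest_action("concert_listing", {"date": "2025-11-08"}))
```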

[Image: It’s one thing to screenshot a band you like. It’s another to be able to find them again later. Credit: David Pierce / The Verge]

Mike Choi, an indie developer, built an app called Camp in part to help him make use of his own screenshots. He began to work on turning every screenshot into a “card,” with the salient information stored alongside the picture. “You have a screenshot, and at the bottom there’s a button, and it flips the card over,” he says. “It shows you a map, if it was a location; a preview of a song, if it’s a song. The idea was, given an infinite pool of different types of screenshots, can AI just generate the perfect UI for that category on the fly?”
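
Choi hasn’t published Camp’s internals, but the card idea could be modeled as something like the structure below, with a per-category “back of the card” view chosen on the fly. All of the names here are hypothetical.

```python
# Hypothetical model of a screenshot "card": the image plus structured
# data on the back, with a view picked per category at display time.
from dataclasses import dataclass, field


@dataclass
class ScreenshotCard:
    image_path: str
    category: str  # e.g. "location", "song", "listing"
    payload: dict = field(default_factory=dict)

    def back_view(self) -> str:
        # Choose the flip-side "UI" based on the card's category.
        if self.category == "location":
            return f"Map centered on {self.payload.get('address')}"
        if self.category == "song":
            return f"Preview player for {self.payload.get('title')}"
        return f"Plain summary: {self.payload}"


card = ScreenshotCard("show.png", "song", {"title": "Some Band"})
print(card.back_view())  # -> Preview player for Some Band
```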

If all this sounds familiar, it’s because there’s another term for what’s going on here: it’s called agentic AI. Every company in tech seems to be working on ways to use AI to accomplish things on your behalf. It’s just that, in this case, you don’t have to write long prompts or chat back and forth with an assistant. You just take a screenshot and let the system go to work. “You’re building a knowledge base, when today that knowledge base is confined to your gallery and nothing happens with it,” Deserti says. He’s excited to get to the point where you screenshot a concert date, and Essential Space automatically prompts you to buy tickets when they go on sale.

Making sense of screenshots isn’t always so straightforward, though. Some you want to keep forever, like the ID card you might need often; other things, like a concert poster or a parking pass, have extremely limited shelf lives. For that matter, how is an app supposed to distinguish between the parking pass you use every day at work and the one you used once at the airport and never need again? Some of the screenshots on my phone were sent to me on WhatsApp; others I grabbed from Instagram memes to send to friends. No one’s camera roll should ever be fully held against them, and the same goes for screenshots. Lots of these screenshot apps are looking for ways to prompt you to add a note, or organize things yourself, in order to provide some additional helpful information to the system. But it’s hard work to do that without ruining what makes screenshots so seamless and easy in the first place.
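
One plausible way to handle shelf life is per-category retention rules, as sketched below. The categories and cutoffs are made up, and the sketch deliberately ignores the harder problem it describes: telling a recurring parking pass from a one-off.

```python
# Illustrative retention rules: some categories keep forever, others
# expire. The categories and cutoffs here are invented for the sketch.
from datetime import datetime, timedelta

RETENTION = {
    "id_card": None,  # keep indefinitely
    "parking_pass": timedelta(days=1),
    "concert_poster": timedelta(days=90),
    "meme": timedelta(days=30),
}


def is_stale(category: str, captured_at: datetime) -> bool:
    ttl = RETENTION.get(category)
    if ttl is None:
        # No expiry; unknown categories are also kept, to be safe.
        return False
    return datetime.now() - captured_at > ttl
```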

One way to begin to solve this problem, to make screenshots even more automatically useful, is to collect some additional context from your device. This is where companies like Google and Nothing have an advantage: because they make the device, they can see everything that’s happening when you take a screenshot. If you grab a screenshot from your web browser, they can also store the link you were looking at. They can also see your physical location or note the time and the weather. Sometimes this is all useful, but sometimes it’s nonsense; the more data they collect, the more these apps risk running into the same noise problem that screenshots helped solve in the first place.
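
What might that capture-time context look like? Something like the record below. Every field is an assumption about what a device maker could attach, not a description of any shipping API.

```python
# Hypothetical capture-time context a device maker could attach to a
# screenshot. All fields are assumptions, not a real API.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class CaptureContext:
    captured_at: datetime
    source_app: Optional[str] = None  # foreground app, if known
    source_url: Optional[str] = None  # link, if taken in a browser
    location: Optional[tuple] = None  # (lat, lon), if permitted
    weather: Optional[str] = None     # e.g. "rainy, 12C"


ctx = CaptureContext(
    captured_at=datetime.now(),
    source_app="browser",
    source_url="https://example.com/tickets",
)
```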

But the input system works. We all take screenshots, all the time, and we’re used to taking them as a way to put a marker on so many kinds of useful information. Getting access to that kind of relevant, personalized data is the hardest thing about building a great AI assistant. The future of computing is certainly multimodal, including cameras, microphones, and sensors of all kinds. But the first best way to use AI might be one screenshot at a time.

Apple says Jon Prosser ‘has not indicated’ when he may respond to lawsuit

Earlier this week, Jon Prosser, who is being sued by Apple for allegedly stealing trade secrets, told The Verge that he has been “in active communications with Apple since the beginning stages of this case.” But Apple, in a new filing on Thursday that was reported on by MacRumors, said that while Prosser has “publicly acknowledged” Apple’s complaint, he “has not indicated whether he will file a response to it or, if so, by when.”

Prosser didn’t immediately reply to a request for comment from The Verge. Apple sued Prosser, who posted videos earlier this year showing off features that would debut in iOS 26 ahead of their official announcement, and another defendant, Michael Ramacciotti, in July. The company alleged that Prosser and Ramacciotti had “a coordinated scheme to break into an Apple development iPhone, steal Apple’s trade secrets, and profit from the theft.”

A clerk already entered a default against Prosser last week, which means he hasn’t responded to the lawsuit and that the case can move forward. In Thursday’s filing, Apple said it “intends to file a default judgment seeking damages and an injunction against him.”

Thursday’s filing also includes statements from Ramacciotti. While Ramacciotti “admits to” providing information about iOS 26 to Prosser, “no underlying plan, conspiracy, or scheme was formed” between them, Ramacciotti said. He also claimed that he “had no intent to monetize this information when he contacted Mr. Prosser, nor was there any arrangement at the time the information was conveyed that he would be compensation [sic].”

Apple and Ramacciotti have also “informally discussed settlement,” according to the filing.


Scientists spot skyscraper-sized asteroid racing through solar system

Astronomers have reportedly discovered a skyscraper-sized asteroid moving through our solar system at a near record-breaking pace.

The asteroid, named 2025 SC79, circles the sun once every 128 days, making it the second-fastest known asteroid orbiting in the solar system.

It was first observed by Carnegie Science astronomer Scott S. Sheppard on Sept. 27, according to a statement from Carnegie Science.

[Image: A skyscraper-size asteroid, named 2025 SC79, was discovered in September, hidden in the sun’s glare. Credit: Carnegie Science]

The asteroid is the second known object whose orbit lies entirely inside that of Venus, the statement said. It crosses Mercury’s orbit during its 128-day trip around the sun.
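
A quick back-of-the-envelope check with Kepler’s third law (period in years, semi-major axis in astronomical units) shows why a 128-day year puts the asteroid so close to the sun:

$$a = T^{2/3} = \left(\frac{128}{365.25}\right)^{2/3} \approx 0.50 \text{ AU}$$

That is well inside Venus’s orbit, at roughly 0.72 AU, and an eccentric orbit with that semi-major axis can still swing inside Mercury’s, at about 0.39 AU, on each pass.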

“Many of the solar system’s asteroids inhabit one of two belts of space rocks, but perturbations can send objects careening into closer orbits where they can be more challenging to spot,” Sheppard said. “Understanding how they arrived at these locations can help us protect our planet and also help us learn more about solar system history.”

The celestial body is now traveling behind the sun and will be invisible to telescopes for several months.

Sheppard’s search for so-called “twilight” asteroids helps identify objects that could pose a risk of crashing into Earth, the statement said.

The work, which is partially funded by NASA, uses the Dark Energy Camera on the National Science Foundation’s Blanco 4-meter telescope to look for “planet killer” asteroids in the glare of the sun that could pose a danger to Earth.

The NSF’s Gemini telescope and Carnegie Science’s Magellan telescopes were used to confirm the sighting of 2025 SC79, Carnegie Science said. 

The fastest known asteroid was also discovered by Sheppard and his colleagues, in 2021. Sheppard studies solar system objects including moons, dwarf planets, and asteroids.

That one takes 113 days to orbit the sun.

Intel’s tick-tock isn’t coming back, and everything else I just learned

Today on the company’s Q3 2025 earnings call, where Intel posted its first profit in nearly two years thanks primarily to the lifelines it has received from outside investors, CEO Lip-Bu Tan and CFO David Zinsner explained that the company doesn’t yet have enough chips. It’s currently seeing shortages that it expects to peak in the first quarter of next year; in the meantime, leaders say they’re going to prioritize AI server chips over some consumer processors as they balance supply against demand.

“We expect CCG [Intel’s consumer chips] to be down modestly and DCAI [Intel’s server chips] to be up strongly as we prioritize capacity for server shipments over entry level client parts,” Intel says. Tan revealed today that Intel will also release new AI GPUs every year, following Nvidia and AMD in shaking up its traditional cadence to address the huge demand for AI servers. It’s not clear what that might mean for those hoping for more Intel gaming GPUs.

While all eyes are on Intel’s hot new Panther Lake and its 18A process to show the world it can still make the most potent consumer PC chips and make them in-house, the company reiterated it’s only launching one SKU of Panther Lake this year and slowly rolling out others in 2026. Here’s another possible reason why: Zinsner hinted today that Panther Lake will be a “pretty expensive” product to start with, and Intel’s going to have to push its existing Lunar Lake chips instead “in at least the first half of the year.”

While Intel has repeatedly pushed back against the idea that its 18A process had poor yields, the company admitted to investors and analysts today that the node isn’t ready to be a huge financial success either: yields are “adequate to address the supply but not where we need them to be to drive the appropriate level of margins,” says Zinsner, suggesting it might be 2026, or even 2027, before 18A reaches an “acceptable level of yields” in that regard.

For now, Intel will be “working closely with customers to maximize our available output, including adjusting pricing and mix, to shift demand towards products where we have supply and they have demand” — which sounds like playing with the prices it charges PC makers to stick Intel inside their computers and pointing them at Lunar Lake parts instead of hot new ones. Tan reiterated today that he’s not going to invest in more capacity unless there’s “committed external demand,” and Zinsner says investments in capacity next year won’t “significantly change expectations”.

Intel says that 18A will be a “long-lived node” that will power “at least the next three generations of client and server products.” If you were hoping for a return to the “tick-tock” days where Intel would alternate between shrinking its chips and releasing new architectures every generation, that’s not happening here.

But that doesn’t mean Intel will cancel its next node, Intel 14A, as it warned it might. Tan suggested today that customers have stepped in to save 14A, and Intel with it; he says the company is “delighted and more confident” in the node, and Zinsner says it’s not only “off to a good start,” but better than 18A was at this point “in terms of performance and yields.”
