How this ex-Apple guy’s AI glasses invention works

Imagine being able to see the world through the eyes of an AI. You would be able to instantly recognize any object, person or place, translate any language, get nutritional facts, search the web and even generate images with just your voice. Sounds pretty cool, right?

Well, thanks to a smart ex-Apple guy and his company, Brilliant Labs, you can now experience this technology with their new product: Frame glasses. Frame glasses are the world’s first glasses with an integrated multimodal AI assistant. They are more than just stylish; they are designed to give you AI superpowers.


Frame (Brilliant Labs) (Kurt “CyberGuy” Knutsson)

What are Frame AI glasses?

Frame glasses are a pair of smart glasses that look like regular glasses, but have a powerful neural engine CPU that can run multiple generative AI models at the same time. They also have a color micro OLED display that projects information directly on the lens, a camera, a microphone and a battery.


Frame glasses are compatible with Brilliant Labs’ app, called Noa. Noa is an AI assistant that uses OpenAI for visual analysis, Whisper for translation and Perplexity for web search. Noa learns and adapts to both the user and the tasks it receives.
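To make that division of labor concrete, here is a minimal, hypothetical sketch of how an assistant like Noa might route each request to the backend suited for it. The function name, request types and backend labels are illustrative assumptions, not Brilliant Labs’ actual code.

```python
# Hypothetical dispatch sketch: one assistant, several specialized backends.
# The names here are illustrative, not Brilliant Labs' implementation.

def route_request(kind: str, payload: str) -> str:
    """Send a user request to the backend suited for it."""
    backends = {
        "vision": lambda p: f"[visual-analysis model] describe: {p}",
        "translate": lambda p: f"[speech/translation model] translate: {p}",
        "search": lambda p: f"[web-search model] query: {p}",
    }
    handler = backends.get(kind)
    if handler is None:
        raise ValueError(f"unknown request type: {kind}")
    return handler(payload)

print(route_request("vision", "photo of a street sign"))
print(route_request("search", "calories in an apple"))
```

The point is simply that a multimodal assistant is less one model than a router in front of several, each chosen for what it does best.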



What can Frame AI glasses do?

Frame glasses can do a lot of things that will make your life easier and more fun. Here are some examples.

Visual recognition: You can ask Frame glasses to identify any object, person or place that you see. For example, you can say, “What is this?” and Frame glasses will tell you what it is, how it works or where it comes from. You can also say, “Where is this?” and Frame glasses will tell you the location, history or culture of the place you are seeing.


Translation: You can ask Frame glasses to translate any language that you hear or read. For example, you can say, “Translate this,” and Frame glasses will translate the speech or text that you are listening to or looking at.


Nutrition: You can ask Frame glasses to give you nutritional facts about any food that you eat. For example, you can say, “What is this?” and Frame glasses will tell you the calories, ingredients or health benefits of the food you are eating.



Web search: You can ask Frame glasses to search the web for any information that you need. For example, you can say, “Search this,” and Frame glasses will search the web for the topic, keyword or question that you are interested in. You can also say, “Show me this,” and Frame glasses will show you the results, images or videos that match your query.


Image generation: You can ask Frame glasses to generate images for you based on your description, request or imagination. For example, you can say, “Create this,” and Frame glasses will create the scene, story or artwork that you request.




Frame AI glasses by the numbers

Frame glasses feature a 640×400-pixel color micro OLED that projects light through a prism in front of users’ eyes. It offers a roughly 20-degree diagonal field of view. Frame glasses also come with a 1280×720 camera, microphone and a 222mAh battery. They run a Lua-based custom operating system that is fully open source with very few dependencies and is powered by an nRF52840 Cortex-M4F CPU.
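As a quick sanity check on those display specs, the published resolution and field of view imply an angular pixel density of roughly 38 pixels per degree across the diagonal. A few lines of arithmetic, using only the numbers above, confirm it:

```python
import math

# Published Frame display specs (from the article)
WIDTH_PX, HEIGHT_PX = 640, 400     # micro OLED resolution
DIAGONAL_FOV_DEG = 20.0            # approximate diagonal field of view

# Pixels along the display diagonal
diag_px = math.hypot(WIDTH_PX, HEIGHT_PX)

# Angular pixel density across that diagonal
ppd = diag_px / DIAGONAL_FOV_DEG

print(f"diagonal: {diag_px:.0f} px, density: {ppd:.1f} px/deg")
# → diagonal: 755 px, density: 37.7 px/deg
```

For context, about 60 pixels per degree is the usual benchmark for 20/20 visual acuity, so Frame’s display should look reasonably sharp for glanceable text and icons, though not print-crisp.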


How can you get Frame AI glasses?

Frame glasses are available for preorder for $349. You can choose from three colors to suit your personality: black, gray or clear. If you need prescription lenses, don’t worry; Frame has partnered with Adoptics to handle that part. Frame glasses start shipping on April 15.


However, if you want to enjoy the full potential of Frame glasses, you will need to pay for a subscription to the Noa app. Although you can use Noa for free, it is subject to a daily cap. The startup is planning to offer a paid tier through Noa, but there is still no information on how much it might cost. You won’t have to pay to use the hardware by itself, though, as Brilliant Labs notes that there is no paywall or subscription and that you can freely use the eyewear with other apps.

Kurt’s key takeaways

Frame glasses offer a new way to experience the world through AI. Whether you want to learn, explore, create or just have fun, they are built to help you do it. I’m curious what the feedback will be once people start testing them out. It should be interesting.


How do you think Frame glasses will change the way you interact with the world? Which of the features of Frame glasses are you most excited about and why? Let us know by writing us at Cyberguy.com/Contact.

For more of my tech tips & security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter.

Ask Kurt a question or let us know what stories you’d like us to cover.


Copyright 2024 CyberGuy.com. All rights reserved.


Facebook is starting to feed its Meta AI with private, unpublished photos


For years, Meta trained its AI programs using the billions of public images uploaded by users onto Facebook and Instagram’s servers. Now, it’s also hoping to access the billions of images that users haven’t uploaded to those servers. Meta tells The Verge that it’s not currently training its AI models on those photos, but it would not answer our questions about whether it might do so in the future, or what rights it will hold over your camera roll images.

On Friday, TechCrunch reported that Facebook users trying to post something on the Story feature have encountered pop-up messages asking if they’d like to opt into “cloud processing”, which would allow Facebook to “select media from your camera roll and upload it to our cloud on a regular basis”, to generate “ideas like collages, recaps, AI restyling or themes like birthdays or graduations.”

By allowing this feature, the message continues, users are agreeing to Meta AI terms, which allows their AI to analyze “media and facial features” of those unpublished photos, as well as the date said photos were taken, and the presence of other people or objects in them. You further grant Meta the right to “retain and use” that personal information.

Meta recently acknowledged that it scraped the data from all the content that’s been published on Facebook and Instagram since 2007 to train its generative AI models. Though the company stated that it’s only used public posts uploaded from adult users over the age of 18, it has long been vague about exactly what “public” entails, as well as what counted as an “adult user” in 2007.

Meta tells The Verge that, for now, it’s not training on your unpublished photos with this new feature. “[The Verge’s headline] implies we are currently training our AI models with these photos, which we aren’t. This test doesn’t use people’s photos to improve or train our AI models,” Meta public affairs manager Ryan Daniels tells The Verge.


Meta’s public stance is that the feature is “very early,” innocuous and entirely opt-in: “We’re exploring ways to make content sharing easier for people on Facebook by testing suggestions of ready-to-share and curated content from a person’s camera roll. These suggestions are opt-in only and only shown to you – unless you decide to share them – and can be turned off at any time. Camera roll media may be used to improve these suggestions, but are not used to improve AI models in this test,” reads a statement from Meta comms manager Maria Cubeta.

On its face, that might sound not altogether different from Google Photos, which similarly might suggest AI tweaks to your images after you opt into Google Gemini. But unlike Google, which explicitly states that it does not train generative AI models with personal data gleaned from Google Photos, Meta’s current AI usage terms, which have been in place since June 23, 2024, do not provide any clarity as to whether unpublished photos accessed through “cloud processing” are exempt from being used as training data — and Meta would not clear that up for us going forward.

And while Daniels and Cubeta tell The Verge that opting in only gives Meta permission to retrieve 30 days’ worth of your unpublished camera roll at a time, it appears that Meta is retaining some data longer than that. “Camera roll suggestions based on themes, such as pets, weddings and graduations, may include media that is older than 30 days,” Meta writes.

Thankfully, Facebook users do have an option to turn off camera roll cloud processing in their settings, which, once activated, will also start removing unpublished photos from the cloud after 30 days.
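The 30-day window described above boils down to simple date arithmetic. This illustrative sketch (not Meta’s code, and the function name is a hypothetical) shows the retention check such a policy implies:

```python
from datetime import datetime, timedelta

# Illustrative 30-day retention check, as the policy above implies.
# This is not Meta's code; it only demonstrates the date arithmetic.
RETENTION = timedelta(days=30)

def within_window(taken_at: datetime, now: datetime) -> bool:
    """True if a photo's timestamp falls inside the retention window."""
    return now - taken_at <= RETENTION

now = datetime(2025, 6, 27)
assert within_window(datetime(2025, 6, 10), now)      # 17 days old: kept
assert not within_window(datetime(2025, 5, 1), now)   # 57 days old: removed
```

The reported wrinkle is that theme-based suggestions appear to reach past this window, which is exactly where the stated policy and observed behavior diverge.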

The feature suggests a new incursion into our previously private data, one that bypasses the point of friction known as conscientiously deciding to post a photo for public consumption. And according to Reddit posts found by TechCrunch, Meta’s already offering AI restyling suggestions on previously-uploaded photos, even if users hadn’t been aware of the feature: one user reported that Facebook had Studio Ghiblified her wedding photos without her knowledge.


Correction, June 27th: An earlier version of this story implied Meta was already training AI on these photos, but Meta now states that the current test does not yet do so. Also added statement and additional details from Meta.


A $300M luxury doomsday bunker has AI doctors and robotic staff



As global uncertainty grows, many of the world’s wealthiest individuals are looking beyond gated communities and private security teams. 

Instead, they’re turning to luxury doomsday bunkers that promise not just safety, but comfort and peace of mind. Traditional alarm systems and exclusive neighborhoods no longer feel sufficient. 

For this group, the goal is to find a solution that covers every angle, offering privacy, advanced protection and a sense of normalcy, no matter what’s happening above ground.



Luxury doomsday bunker  (SAFE)

Introducing Aerie: SAFE’s bold new underground retreat

In the summer of 2026, SAFE, short for Strategically Armored & Fortified Environments, will open the doors to Aerie, a $300 million underground sanctuary near Washington, D.C. SAFE has made a name for itself by creating some of the world’s most extravagant security features for private homes and yachts, but Aerie takes things to a whole new level. This isn’t just a bunker. It’s a private club where luxury and security come together, offering members a safe haven that doesn’t compromise on style or amenities.


The vision: A global network of secure luxury

Aerie is just the beginning. SAFE plans to expand this concept to 50 cities across the United States, with an eye on 1,000 affiliate locations worldwide. The idea is that members will always have access to a familiar, secure place to stay, no matter where their travels take them. It’s about making sure that, even far from home, members can count on a consistent level of safety, privacy and comfort.


What’s life like inside Aerie?

Walking into Aerie, you’ll find an environment that feels nothing like the cold, concrete bunkers you might expect. The residences are surrounded by fortified rock and protected by layers of biometric security, tactical mantraps and SCIF-compliant spaces for total privacy.

The only above-ground feature is a rooftop penthouse, while everything else is tucked deep underground. Thanks to interactive walls and creative lighting, it feels like you’re enjoying panoramic city views, even though you’re far below the surface. Living spaces start at 2,000 square feet and can stretch to more than 20,000 square feet, with every detail customizable to the owner’s preferences.


Wellness and longevity: Health and medical amenities

Aerie isn’t just about staying safe; it’s about staying well. Each location features AI-powered medical suites called MediShield, which connect members to medical specialists around the clock and provide intensive care if needed. Naomi Corbi, SAFE’s director of medical preparedness, says wellness is a top priority for many clients. That’s why Aerie includes hyperbaric chambers for oxygen therapy, ice plunge rooms to help with recovery, IV therapy for hydration and immune support and even AI-powered massage rooms to keep residents feeling their best during extended stays underground.


Advanced technology and total privacy

Technology is at the heart of Aerie’s approach to privacy and security. SAFE’s systems give owners complete control over their environment, from tracking assets to spotting potential problems, even in homes with large, ever-changing staff. Every part of Aerie is designed to keep members’ information and safety protected, so they can do business or unwind without worry.


What inspired Aerie?

Al Corbi, SAFE’s founder, says the idea for Aerie came from clients who, even with fortified homes and yachts, felt exposed when traveling. The solution is a global network of bunkers where members can always find a secure, luxurious place to stay, no matter where they are. With features like blast-resistant walls, ballistic glass and rapid evacuation elevators, Aerie is built to handle everything from civil unrest to the most extreme emergencies.


The cost of membership and customization

Joining Aerie is not for the faint of wallet. Residences in the complex can cost up to $20 million each, depending on size and customization. The membership-based club offers a range of options, from individual suites to sprawling multi-level penthouses. The $300 million price tag for the first location reflects the scale and ambition of the project, and SAFE’s plans for a global network mean that members are buying into a lifestyle and a worldwide safety net, not just a single property.


Kurt’s key takeaways

Aerie is blending high-end living with advanced wellness and top-tier security. As concerns about safety and stability continue to grow, Aerie’s approach is likely to appeal to those who want to be ready for anything. For the ultra-wealthy, Aerie could soon become the ultimate address for peace of mind, exclusivity and a new kind of luxury living.

If price wasn’t an issue, would you be open to living in a luxury underground bunker with all the comforts and security you could want? Let us know by writing to us at Cyberguy.com/Contact


For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter

Ask Kurt a question or let us know what stories you’d like us to cover

Follow Kurt on his social channels


Copyright 2025 CyberGuy.com. All rights reserved.


Graphic artists in China push back on AI and its averaging effect


Sendi Jia, a designer running her own studio between Beijing, China, and London, England, says she mainly uses AI generators like DALL-E to make fake photos for background panels or websites when her clients don’t have access to real ones. That’s helped clients with limited budgets, but it’s also exposed just how much of the creative process AI can replace. Recently, a potential client working in a university contacted Jia about creating the logo for a new project. Then, they changed their mind. They had used AI to make it, they said.

Chinese graphic artists are rapidly experiencing the impact of image generators on their day-to-day work: the technology enables copycats and profoundly shifts clients’ perception of their work, specifically in terms of how much that work costs and how much time it takes to produce. Freelance artists or designers working in industries with clients that invest in stylized, eye-catching graphics, like advertising, are particularly at risk.

Long before AI image generators became popular, graphic designers at major tech companies and in-house designers for large corporate clients were often instructed by managers to crib aesthetics from competitors or from social media, according to one employee at a major online shopping platform in China, who asked to remain anonymous for fear of retaliation from their employer.

Where a human would need to understand and reverse engineer a distinctive style to recreate it, AI image generators simply create randomized mutations of it. Often, the results will look like obvious copies and include errors, but other graphic designers can then edit them into a final product.

“I think it’d be easier to replace me if I didn’t embrace [AI],” the shopping platform employee says. Early on, as tools like Stable Diffusion and Midjourney became more popular, their colleagues who spoke English well were selected to study AI image generators to increase in-house expertise on how to write successful prompts and identify what types of tasks AI was useful for. Ultimately, it was useful for copying styles from popular artists that, in the past, would take more time to study.


“I think it forces both designers and clients to rethink the value of designers,” Jia says. “Is it just about producing a design? Or is it about consultation, creativity, strategy, direction, and aesthetic?”


At ad agencies, for example, graphic designers work on comprehensive strategies for campaigns, aiming to create iconic, recognizable visual identities across a variety of formats. As such, AI image generators are less useful because they don’t produce anything particularly unique, according to Erbing, a graphic designer in Beijing who has worked with several ad agencies and asked to be called by his nickname.

“Each project faces different problems, and designers are there to solve specific problems, not to create identical visuals,” he says. “Sometimes, the process of thinking through a project takes longer than actually creating the visuals.”

When faced with more complex tasks, AI’s utility dwindles. Image generators are capable of creating many images, but that does not replace the work of understanding what an ad campaign needs to establish a visual identity and communicate what it is the client is selling and why people should buy it. Then, translating those concepts to the AI productively is its own challenge. Among graphic designers in China, there’s a joke that using an AI image generator is like gacha, referring to addictive games where users spend money to receive randomized items and find out what they won.


“You might get a good result, but there will inevitably be dozens or even hundreds of poor ones,” Erbing says. “Personally, I see [AI image generators] as more of a toy than a tool.”

Across the board, though, artists and designers say that AI hype has negatively impacted clients’ view of their work’s value. Now, clients expect a graphic designer to produce work on a shorter timeframe and for less money, which also has its own averaging impact, lowering the ceiling for what designers can deliver. As clients lower budgets and squish timelines, the quality of the designers’ output decreases.

“There is now a significant misperception about the workload of designers,” Erbing says. “Some clients think that since AI must have improved efficiency, they can halve their budget.”

But this perception runs contrary to what designers spend the majority of their time doing, which is not necessarily just making any image, Erbing says.

Erbing, like other designers, hopes AI image generators will become more useful to graphic designers in the future, and notes that people’s perception of their usefulness outpaces their actual application. In the meantime, that perception is distorting clients’ view of the usefulness of the artists themselves.
