Technology

Inside the launch — and future — of ChatGPT

As winter descended on San Francisco in late 2022, OpenAI quietly pushed a new service dubbed ChatGPT live with a blog post and a single tweet from CEO Sam Altman. The team labeled it a “low-key research preview” — they had good reason to set expectations low.
“It couldn’t even do arithmetic,” Liam Fedus, OpenAI’s head of post-training, says. It was also prone to hallucinating, or making things up, adds Christina Kim, a researcher on the mid-training team.
Ultimately, ChatGPT would become anything but low-key.
While the OpenAI researchers slept, users in Japan flooded ChatGPT’s servers, crashing the site only hours after launch. That was just the beginning.
“The dashboards at that time were just always red,” recalls Kim. The launch coincided with NeurIPS, the world’s premier AI conference, and soon ChatGPT was the only thing anyone there could talk about. ChatGPT’s error page — “ChatGPT is at capacity right now” — would become a familiar sight.
“We had the initial launch meeting in this small room, and it wasn’t like the world just lit on fire all of a sudden,” Fedus says during a recent interview from OpenAI’s headquarters. “We’re like, ‘Okay, cool. I guess it’s out there now.’ But it was the next day when we realized — oh, wait, this is big.”
“The dashboards at that time were just always red.”
Two years later, ChatGPT still hasn’t cracked advanced arithmetic or become factually reliable. It hasn’t mattered. The chatbot has evolved from a prototype to a $4 billion revenue engine with 300 million weekly active users. It has shaken the foundations of the tech industry, even as OpenAI loses money (and cofounders) hand over fist while competitors like Anthropic threaten its lead.
Whether used as praise or pejorative, “ChatGPT” has become almost synonymous with generative AI. Over a series of recent video calls, I sat down with Fedus, Kim, ChatGPT head of product Nick Turley, and ChatGPT engineering lead Sulman Choudhry to talk about ChatGPT’s origins and where it’s going next.
A “weird” name and a scrappy start
ChatGPT was effectively born in December 2021 with an OpenAI project dubbed WebGPT: an AI tool that could search the internet and write answers. The team took inspiration from WebGPT’s conversational interface and began plugging a similar interface into GPT-3.5, a successor to the GPT-3 text model released in 2020. They gave it the clunky name “Chat with GPT-3.5” until, in what Turley recalls as a split-second decision, they simplified it to ChatGPT.
The name could have been the even more straightforward “Chat,” and in retrospect, he thinks perhaps it should have been. “The entire world got used to this odd, weird name, we’re probably stuck with it. But obviously, knowing what I know now, I wish we picked a slightly easier to pronounce name,” he says. (It was recently revealed that OpenAI purchased the domain chat.com for more than $10 million of cash and stock in mid-2023.)
As the team discovered the model’s obvious limitations, they debated whether to narrow its focus by launching a tool for help with meetings, writing, or coding. But OpenAI cofounder John Schulman (who has since left for Anthropic) advocated for keeping the focus broad.
The team describes it as a risky bet at the time: chatbots were viewed as an unremarkable backwater of machine learning, with no successful precedents. Adding to their concerns, Meta’s Galactica AI bot had just spectacularly flamed out and been pulled offline after generating false research.
The team grappled with timing. GPT-4 was already in development with advanced features like Code Interpreter and web browsing, so it would make sense to wait to release ChatGPT atop the more capable model. Kim and Fedus also recall people wanting to wait and launch something more polished, especially after seeing other companies’ undercooked bots fail.
Despite early concerns about chatbots being a dead end, The New York Times has reported that other team members worried competitors would beat OpenAI to market with a fresh wave of bots. The deciding vote was Schulman, Fedus and Kim say. He pushed for an early release, alongside Altman, both believing it was important to get AI into people’s hands quickly.
OpenAI had demoed a chatbot at Microsoft Build earlier that year and generated virtually no buzz. On top of that, many of ChatGPT’s early users didn’t seem to be using it much. The team shared their prototype with about 50 friends and family members. Turley “personally emailed every single one of them” every day to check in. While Fedus couldn’t recall exact figures, he estimates that about 10 percent of that early test group used it every day.
Later, the team would see this as an indication they’d created something with potential staying power.
“We had two friends who basically were on it from the start of their work day — and they were founders,” Kim recalls. “They were on it basically for 12 to 16 hours a day, just talking to it all day.” With just two weeks before the end of November, Schulman made the final call: OpenAI would launch ChatGPT on the last day of that month.
The team canceled their Thanksgiving plans and began a two-week sprint to public release. Much of the system was built at this point, Kim says, but its security vulnerabilities were untested. So they focused heavily on red teaming, or stress testing the system for potential safety problems.
“If I had known it was going to be a big deal, I would certainly not want to ship it right before a winter holiday week before we were all going to go home,” Turley says. “I remember working very hard, but I also remember thinking, ‘Okay, let’s get this thing out, and then we’ll come back after the holiday to look at the learnings, to see what people want out of an AI assistant.’”
In an internal Slack poll, OpenAI employees guessed how many users they would get. Most predictions ranged from a mere 10,000 to 50,000. When someone suggested it might reach a million users, others jumped in to say that was wildly optimistic.
On launch day, they realized they’d all been incredibly wrong.
After Japan crashed their servers, and red dashboards and error messages abounded, the team was anxiously picking up the pieces and refreshing Twitter to gauge public reaction, Kim says. They believed the reaction to ChatGPT could only go one of two ways: total indifference or active contempt. They worried people might discover problematic ways to use it (like attempting to jailbreak it), and the uncertainty of how the public would receive their creation kept them in a state of nervous anticipation.
The launch was met with mixed emotions. ChatGPT quickly started facing criticism over accuracy issues and bias. Many schools rushed to ban it over cheating concerns. Some users on Reddit likened it to the early days of Google (and were shocked it was free). For its part, Google dubbed the chatbot a “code red” threat.
OpenAI would wind up surpassing its most ambitious 1-million-user target within five days of launch. Two months after its debut, ChatGPT garnered more than 30 million users.
When someone suggested it might reach a million users, others jumped in to say that was wildly optimistic.
Within weeks of ChatGPT’s November 30th launch, the team started rolling out updates that incorporated user feedback (like complaints about overly verbose answers). The initial chaos had settled, user numbers were still climbing, and the team had a sobering realization: if they wanted to keep this momentum, things would have to change. The small group that launched a “low-key research preview” — a term that would become a running joke at OpenAI — would need to get a lot bigger.
Over the coming months and years, ChatGPT’s team would grow enormously and shift priorities — sometimes to the chagrin of many early staffers. Top researcher Jan Leike, who played a crucial role in refining ChatGPT’s conversational abilities and ensuring its outputs aligned with user expectations, quit this year to join Anthropic after claiming that “safety culture and processes have taken a backseat to shiny products” at OpenAI.
These days, OpenAI is focused on figuring out what the future of ChatGPT looks like.
“I’d be very surprised if a year from now this thing still looks like a chatbot,” Turley says, adding that current chat-based interactions would soon feel as outdated as ’90s instant messaging. “We’ve gotten pretty sidetracked by just making the chatbot great, but really, it’s not what we meant to build. We meant to build something much more useful than that.”
Increasingly powerful and expensive
I talk with Turley over a video call as he sits in a vast conference room in OpenAI’s San Francisco headquarters, a space that epitomizes the company’s transformation. The office is all sweeping curves and polished minimalism, a far cry from the original office, often described as a drab, historic warehouse.
With roughly 2,000 employees, OpenAI has evolved from a scrappy research lab into a $150 billion tech powerhouse. The team is spread across numerous projects, including building underlying foundation models and developing non-text tools like the video generator Sora. ChatGPT is still OpenAI’s highest-profile product by far. Its popularity has come with a lot of headaches.
“I’d be very surprised if a year from now this thing still looks like a chatbot”
ChatGPT still spins elaborate lies with unwavering confidence, but now they’re being cited in court filings and political discourse. It has allowed for an impressive amount of experimentation and creativity, but some of its most distinctive use cases turned out to be spam, scams, and AI-written college term papers.
While some publications (including The Verge’s parent company, Vox Media) are choosing to partner with OpenAI, others like The New York Times are opting to sue it for copyright infringement. And OpenAI is burning through cash at a staggering rate to keep the lights on.
Turley acknowledges that ChatGPT’s hallucinations are still a problem. “Our early adopters were very comfortable with the limitations of ChatGPT,” he says. “It’s okay that you’re going to double check what it said. You’re going to know how to prompt around it. But the vast majority of the world, they’re not engineers, and they shouldn’t have to be. They should just use this thing and rely on it like any other tool, and we’re not there yet.”
Accuracy is one of the ChatGPT team’s three focus areas for 2025. The others are speed and presentation (i.e., aesthetics).
“I think we have a long way to go in making ChatGPT more accurate and better at citing its sources and iterating on the quality of this product,” Turley says.
OpenAI is also still figuring out how to monetize ChatGPT. Despite deploying increasingly powerful and costly AI models, the company has maintained a limited free tier and a $20 monthly ChatGPT Plus service since February 2023.
When I ask Turley about rumors of a future $2,000 subscription, or if advertising will be baked into ChatGPT, he says there is “no current plan to raise prices.” As for ads: “We don’t care about how much time you spend on ChatGPT.”
“They should just use this thing and rely on it like any other tool, and we’re not there yet.”
“I’m really proud of the fact that we have incentives that are incredibly aligned with our users,” he says. Those who “use our product a lot pay us money, which is a very, very, upfront and direct transaction. I’m proud of that. Maybe we’ll have a technology that’s much more expensive to serve and we’re going to have to rethink that model. You gotta remain humble about where the technology is going to go.”
Only days after Turley told me this, ChatGPT did get a new $200 price tag for a pro tier that includes access to a specialized reasoning model. Its main $20 Plus tier is sticking around, but it’s clearly not the ceiling for what OpenAI thinks people will pay.
ChatGPT and other OpenAI services require vast amounts of computing power and data storage to keep running smoothly. On top of the user base OpenAI has gained through its own products, it’s poised to reach millions more people through an Apple partnership that integrates ChatGPT with iOS and macOS.
That’s a lot of infrastructure pressure for a relatively young tech company, says ChatGPT engineering lead Sulman Choudhry. “Just keeping it up and running is a very, very big feat,” he says. People love features like ChatGPT’s advanced voice mode. But scaling limitations mean there’s often a significant gap between the technology’s capabilities and what people can experience. “There’s a very, very big delta there, and that delta is sort of how you scale the technology and how you scale infrastructure.”
Even as OpenAI grapples with these problems, it’s trying to work itself deeper into users’ lives. The company is racing to build agents, or AI tools that can perform complex, multistep tasks autonomously. In the AI world, these are called tasks with a longer “time horizon,” requiring the AI to maintain coherence over a longer period while handling multiple steps. For instance, earlier this year at the company’s Dev Day conference, OpenAI showcased AI agents that could make phone calls to place food orders and make hotel reservations in multiple languages.
For Turley and others, this is where the stakes will get particularly steep. Agents could make AI far more useful by moving what it can do outside the chatbot interface. The shift could also grant these tools an alarming level of access to the rest of your digital life.
“I’m really excited to see where things go in a more agentic direction with AI,” Kim tells me. “Right now, you go to the model with your question, but I’m excited to see the model more integrated into your life and doing things proactively, and taking actions on your behalf.”
The goal of ChatGPT isn’t to be just a chatbot, says Fedus. As it exists today, ChatGPT is “pretty constrained” by its interface and compute. He says the goal is to create an entity that you can talk to, call, and trust to work for you. Fedus thinks systems like OpenAI’s “reasoning” line of models, which create a trail of checkable steps explaining their logic, could make it more reliable for these kinds of tasks.
Turley says that, contrary to some reports, “I don’t think there’s going to be such a thing as an OpenAI agent.” What you will see is “increasingly agentic functionality inside of ChatGPT,” though. “Our focus is going to be to release this stuff as gradually as possible. The last thing I want is a big bang release where this stuff can suddenly go out and do things over hours of time with all your stuff.”
“The last thing I want is a big bang release”
By ChatGPT’s third anniversary next year, OpenAI will probably look a lot different than it does today. The company will likely raise billions more dollars in 2025, release its next big “Orion” model, face growing competition, and have to navigate the complexity of a new US president and his AI czar.
Turley hopes 2024’s version of ChatGPT will soon feel as quaint as AOL Instant Messenger. A year from now, we’ll probably laugh at how basic it was, he says. “Remember when all we could do was ask it questions?”
Google brings its AI videomaker to Workspace users
Google is expanding access to its AI videomaking tool. Launched last May, Flow was initially only available to Google AI Pro and AI Ultra subscribers, but now, those with Business, Enterprise, and Education Workspace plans can access it, too.
Flow uses Google’s AI video generation model Veo 3.1 to generate eight-second clips based on a text prompt or images. You can stitch together the clips to create longer scenes, as well as access a bunch of other tools that allow you to change the lighting, adjust the “camera” angle, and insert or remove objects in scenes. Earlier this week, Google added vertical video support inside Flow.
Google brought audio support to more features within Flow late last year, allowing you to generate audio whether you prompt the app based on reference images, ask it to create transitions between scenes, or have the tool extend a clip. The company also integrated its AI-powered image generator Nano Banana Pro into Flow, which you can use to create characters or starting points for your clips.
January scams surge: Why fraud spikes at the start of the year
Every January, I hear from people who say the same thing: “I just got an email that looked official, and I almost fell for it.” That’s not a coincidence. January is one of the busiest months of the year for scammers. While most of us are focused on taxes, benefits, subscriptions, and getting our finances in order, criminals are doing their own kind of cleanup, refreshing scam lists and going after people with newly updated personal data. If you’ve ever received a message claiming your account needs to be “verified,” your benefits are at risk, or your tax information is incomplete, this article is for you.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
Why January is prime time for scammers
January is when scammers have everything they need. According to YouMail’s Robocall Index, U.S. consumers received just over 4.7 billion robocalls in January 2025, a roughly 9% increase from December 2024. This year, we can expect the same pattern from scammers.
But the biggest reason scams spike now? Your personal data is easier to find than you think. Data brokers quietly collect and update profiles year after year. By January, those profiles are often more complete than ever, and scammers know it.
The “account verification” scam you’ll see everywhere
One of the most common January scams looks harmless at first. You get a message saying:
- “Your Social Security account needs verification”
- “Your Medicare information has to be updated”
- “Your benefits could be delayed without action”
The message sounds official. Sometimes it even uses your real name or location. That’s where people get tricked. Government agencies don’t ask for sensitive information through random emails or texts. Scammers rely on urgency and familiarity to push you into reacting before thinking.
My rule: If you didn’t initiate the request, don’t respond to it. Always go directly to the agency’s official website or phone number, never through a link sent to you.
Fake tax and benefits notices ramp up in January
Another favorite scam this time of year involves taxes and refunds.
You may see:
- Emails claiming you owe back taxes
- Messages saying you’re due a refund
- Notices asking you to “confirm” banking information
These scams work because they arrive at exactly the moment people expect to hear from tax agencies or benefits programs.
Scammers don’t need much to sound convincing. A name, an email address or an old address is often enough. If you get a tax-related message out of the blue, slow down. Real agencies don’t pressure you to act immediately.
Subscription “problems” that aren’t real
January is also when subscription scams explode, with fake messages claiming problems with your streaming or shopping accounts.
Scammers know most people have subscriptions, so they play the odds. Instead of clicking, open the app or website directly. If there’s a real problem, you’ll see it there.
Why these scams feel so personal
People often tell me, “But they used my name, how did they know?” Here’s the uncomfortable truth: They probably bought it. Data brokers compile massive profiles that include:
- Address histories
- Phone numbers and emails
- Family connections
- Shopping behavior
That data is sold, shared and leaked. Once scammers have it, they can tailor messages that feel real, because they’re built on real information.
What you should do right now
Before January gets any busier, take these steps to reduce your exposure to scams and fraud:
1) Remove your personal data from broker sites
Deleting emails or blocking numbers helps, but it does not stop scams at the source. Scammers rely on data broker sites that quietly collect, update and sell your personal information. Removing your data from those sites reduces scam calls, phishing emails and targeted texts over time. It also makes it harder for criminals to personalize messages using your real name, address or family connections. You have two ways to do this:
Do it yourself:
You can visit individual data broker websites, search for your profile, and submit opt-out requests. This method works, but it takes time. Each site has its own rules, identity verification steps, and response timelines. Many brokers also re-add data later, which means you have to repeat the process regularly.
Use a data removal service:
A data removal service automates the opt-out process by contacting hundreds of data brokers on your behalf and monitoring for re-listings. This option saves time and provides ongoing protection, especially if you want long-term results without constant follow-ups.
While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. These services aren’t cheap, but neither is your privacy. They do the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind, and it has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.
Check out my top picks for data removal services, and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com
2) Don’t click links in unexpected messages
If you did not initiate the request, do not click. Scam messages are designed to create urgency, especially around taxes, benefits and account issues. Instead, go directly to the official website by typing the address yourself or using a saved bookmark. This single habit prevents most phishing attacks.
3) Turn on two-factor authentication wherever possible
Two-factor authentication (2FA) adds a critical second layer of protection. Even if someone gets your password, they still cannot access your account without the second verification code. Start with email, financial accounts, social media and government services.
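For the curious, the rotating codes that authenticator apps display aren’t stored anywhere: they’re derived on the fly from a shared secret and the current time using the TOTP algorithm (RFC 6238). Here’s a minimal Python sketch of that derivation; the secret shown is the RFC’s published test value, not anything you would use in practice:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password per RFC 6238 (SHA-1 variant)."""
    counter = timestamp // step                       # current 30-second window
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238 test secret; at t=59 seconds the 6-digit code is "287082"
print(totp(b"12345678901234567890", 59))
```

Because both sides compute the code from the same secret and clock, a stolen password alone isn’t enough, and an intercepted code expires within seconds.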
4) Check accounts only through official apps or websites
If you receive a warning about an account problem, do not trust the message itself. Open the official app or website, and check there. If something is wrong, you will see it immediately. If not, you just avoided a scam.
5) Watch for account alerts and login activity
Enable login alerts and security notifications on important accounts. These alerts can warn you if someone tries to sign in from a new device or location. Early warnings give you time to act before real damage occurs.
6) Use strong, unique passwords and a password manager
Reusing passwords makes it easy for scammers to take over multiple accounts at once. If one service is compromised, attackers try the same login on email, banking, and social media accounts. A password manager helps you create and store strong, unique passwords for every account without needing to remember them. Check out the best expert-reviewed password managers of 2026 at Cyberguy.com.
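If you’re wondering what a password manager’s generator is actually doing, it’s roughly this: drawing characters from a large alphabet using a cryptographically secure random source. A sketch in Python (the function name and 16-character default are illustrative choices, not any particular product’s behavior):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from letters, digits, and punctuation using
    the OS's cryptographic randomness (secrets, not the guessable random)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(len(generate_password()))  # 16
```

The key detail is `secrets` rather than `random`: the latter is predictable and was never meant for security-sensitive values.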
Kurt’s key takeaways
January scams aren’t random. They’re targeted, timed and fueled by personal data that shouldn’t be public in the first place. The longer your information stays online, the easier it is for scammers to use it against you. If you want a quieter inbox, fewer scam calls and less risk this year, take action early, before criminals finish rebuilding their lists. Protect your data now, and you’ll be safer all year long.
Have you noticed more scam emails, texts or calls since the new year started? Let us know by writing to us at Cyberguy.com.
Copyright 2026 CyberGuy.com. All rights reserved.
Casting is dead. Long live casting!
This is Lowpass by Janko Roettgers, a newsletter on the ever-evolving intersection of tech and entertainment, syndicated just for The Verge subscribers once a week.
Last month, Netflix made the surprising decision to kill off a key feature: With no prior warning, the company removed the ability to cast videos from its mobile apps to a wide range of smart TVs and streaming devices. Casting is now only supported on older Chromecast streaming adapters that didn’t ship with a remote, Nest Hub smart displays, and select Vizio and Compal smart TVs.
That’s a stunning departure for the company. Prior to those changes, Netflix allowed casting to a wide range of devices that officially supported Google’s casting technology, including Android TVs made by companies like Philips, Polaroid, Sharp, Skyworth, Soniq, Sony, Toshiba, and Vizio, according to an archived version of Netflix’s website.
But the streaming service didn’t stop there. Prior to last month’s changes, Netflix also offered what the company called “Netflix 2nd Screen” casting functionality on a wide range of additional devices, including Sony’s PlayStation, TVs made by LG and Samsung, Roku TVs and streaming adapters, and many other devices. Basically, if a smart TV or streaming device was running the Netflix app, it most likely also supported casting.
That’s because Netflix actually laid the groundwork for this technology 15 years ago. Back in 2011, some of the company’s engineers were exploring ways to more tightly integrate people’s phones with their TVs. “At about the same time, we learned that the YouTube team was interested in much the same thing — they had already started to do some work on [second] screen use cases,” said Scott Mirer, director of product management at Netflix at the time, in 2013.
The two companies started to collaborate and enlist help from TV makers like Sony and Samsung. The result was DIAL (short for “Discovery and Launch”) — an open second-screen protocol that formalized casting.
In 2012, Netflix was the first major streaming service to add a casting feature to its mobile app, which at the time allowed PlayStation 3 owners to launch video playback from their phones. A year later, Google launched its very first Chromecast dongle, which took ideas from DIAL and incorporated them into Google’s own proprietary casting technology.
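The discovery half of DIAL is plain SSDP: the phone multicasts an HTTP-over-UDP M-SEARCH request to the local network, and DIAL-capable TVs respond with a URL describing their app-launch endpoint. As a rough sketch (the multicast address and service-target string come from the SSDP and DIAL specs; everything else here is illustrative), the datagram a client broadcasts looks like this:

```python
import socket

# DIAL's search target, per the DIAL specification
DIAL_ST = "urn:dial-multiscreen-org:service:dial:1"

def build_msearch(st: str = DIAL_ST, mx: int = 3) -> bytes:
    """Assemble the SSDP M-SEARCH datagram a DIAL client multicasts
    to 239.255.255.250:1900 to find TVs and streaming devices."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        "HOST: 239.255.255.250:1900",
        'MAN: "ssdp:discover"',
        f"MX: {mx}",   # max seconds a device may wait before replying
        f"ST: {st}",   # search target: only DIAL devices should answer
        "", "",        # request ends with a blank line (CRLF CRLF)
    ]
    return "\r\n".join(lines).encode()

# To actually discover devices, send this over UDP and read the responses:
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(build_msearch(), ("239.255.255.250", 1900))
```

Devices that answer advertise a REST endpoint; the phone then issues an HTTP POST there to launch (say) the Netflix app, which is the "Launch" half of Discovery and Launch.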
For a while, casting was extremely popular. Google sold over 100 million Chromecast adapters, and Vizio even built a whole TV around casting, which shipped with a tablet instead of a remote. (It flopped. Turns out people still love physical remotes.)
But as smart TVs became more capable, and streaming services invested more heavily into native apps on those TVs, the need for casting gradually decreased. At CES, a streaming service operator told me that casting used to be absolutely essential for his service. Nowadays, even among the service’s Android users, only about 10 percent are casting.
As for Netflix, it’s unlikely the company will change its tune on casting. Netflix declined to comment when asked about discontinuing the feature. My best guess is that casting was sacrificed in favor of new features like cloud gaming and interactive voting. Gaming in particular already involves multidevice connectivity, as Netflix uses phones as game controllers. Adding casting to that mix simply might have proven too complex.
However, not everyone has given up on casting. In fact, the technology is still gaining new supporters. Last month, Apple added Google Cast support to its Apple TV app on Android for the first time. And over the past two years, both Samsung and LG incorporated Google’s casting tech into some of their TV sets.
“Google Cast continues to be a key experience that we’re invested in — bringing the convenience of seamless content sharing from phones to TVs, whether you’re at home or staying in a hotel,” says Google’s Android platform PM Neha Dixit. “Stay tuned for more to come this year.”
Google’s efforts are getting some competition from the Connectivity Standards Alliance, the group behind the Matter smart home standard, which developed its own Matter Casting protocol. Matter Casting promises to be a more open approach toward casting and in theory allows streaming services and device makers to bring second-screen use cases to their apps and devices without having to strike deals with Google.
“We are a longtime advocate of using open technology standards to give customers more choice when it comes to using their devices and services,” says Amazon Device Software & Services VP Tapas Roy, whose company is a major backer of Matter and its casting tech. “We welcome and support media developers that want to build to an open standard with the implementation of Matter Casting.”
Thus far, though, support has been limited. Fire TVs and Echo Show displays remain the only devices to support Matter Casting, and Amazon’s own apps were long the only ones to make use of the feature. Last month, Tubi jumped on board as well, incorporating Matter Casting into its mobile apps.
Connectivity Standards Alliance technology strategist Christopher LaPré acknowledges that Matter Casting has yet to turn into a breakthrough hit. “To be honest, I have Fire TVs, and I’ve never used it,” he says.
Besides a lack of available content, LaPré also believes Matter Casting is a victim of brand confusion. The problem: TV makers have begun to incorporate Matter into their devices to let consumers control smart lights and thermostats from the couch. Because of that, a TV that carries the Matter logo doesn’t necessarily support Matter Casting.
However, LaPré also believes that Matter Casting could get a boost from two new developments: Matter recently added support for cameras, which adds a new kind of homegrown content people may want to cast. And the consortium is also still working on taking casting beyond screens.
“Audio casting is something that we’re working on,” LaPré confirms. “A lot of speaker companies are interested in that.” The plan is to launch Matter audio casting later this year, at which point device makers, publishers, and consumers could also give video casting another look.