
Inside the launch — and future — of ChatGPT


As winter descended on San Francisco in late 2022, OpenAI quietly pushed a new service dubbed ChatGPT live with a blog post and a single tweet from CEO Sam Altman. The team labeled it a “low-key research preview” — they had good reason to set expectations low. 

“It couldn’t even do arithmetic,” says Liam Fedus, OpenAI’s head of post-training. It was also prone to hallucinating, or making things up, adds Christina Kim, a researcher on the mid-training team.

Ultimately, ChatGPT would become anything but low-key.

While the OpenAI researchers slept, users in Japan flooded ChatGPT’s servers, crashing the site only hours after launch. That was just the beginning.

“The dashboards at that time were just always red,” recalls Kim. The launch coincided with NeurIPS, the world’s premier AI conference, and soon ChatGPT was the only thing anyone there could talk about. ChatGPT’s error page — “ChatGPT is at capacity right now” — would become a familiar sight.

“We had the initial launch meeting in this small room, and it wasn’t like the world just lit on fire all of a sudden,” Fedus says during a recent interview from OpenAI’s headquarters. “We’re like, ‘Okay, cool. I guess it’s out there now.’ But it was the next day when we realized — oh, wait, this is big.”

“The dashboards at that time were just always red.”

Two years later, ChatGPT still hasn’t cracked advanced arithmetic or become factually reliable. It hasn’t mattered. The chatbot has evolved from a prototype to a $4 billion revenue engine with 300 million weekly active users. It has shaken the foundations of the tech industry, even as OpenAI loses money (and cofounders) hand over fist while competitors like Anthropic threaten its lead.

Whether used as praise or pejorative, “ChatGPT” has become almost synonymous with generative AI. Over a series of recent video calls, I sat down with Fedus, Kim, ChatGPT head of product Nick Turley, and ChatGPT engineering lead Sulman Choudhry to talk about ChatGPT’s origins and where it’s going next.

A “weird” name and a scrappy start

ChatGPT was effectively born in December 2021 with an OpenAI project dubbed WebGPT: an AI tool that could search the internet and write answers. The team took inspiration from WebGPT’s conversational interface and began plugging a similar interface into GPT-3.5, a successor to the GPT-3 text model released in 2020. They gave it the clunky name “Chat with GPT-3.5” until, in what Turley recalls as a split-second decision, they simplified it to ChatGPT. 

The name could have been the even more straightforward “Chat,” and in retrospect, he thinks perhaps it should have been. “The entire world got used to this odd, weird name, we’re probably stuck with it. But obviously, knowing what I know now, I wish we picked a slightly easier to pronounce name,” he says. (It was recently revealed that OpenAI purchased the domain chat.com for more than $10 million of cash and stock in mid-2023.)

As the team discovered the model’s obvious limitations, they debated whether to narrow its focus by launching a tool for help with meetings, writing, or coding. But OpenAI cofounder John Schulman (who has since left for Anthropic) advocated for keeping the focus broad.

The team describes it as a risky bet at the time: chatbots, they thought, were an unremarkable backwater of machine learning with no successful precedents. Adding to their concerns, Facebook’s Galactica AI bot had just spectacularly flamed out and been pulled offline after generating false research.

The team grappled with timing. GPT-4 was already in development with advanced features like Code Interpreter and web browsing, so it would make sense to wait to release ChatGPT atop the more capable model. Kim and Fedus also recall people wanting to wait and launch something more polished, especially after seeing other companies’ undercooked bots fail.

Despite early concerns about chatbots being a dead end, The New York Times has reported that other team members worried competitors would beat OpenAI to market with a fresh wave of bots. The deciding vote was Schulman, Fedus and Kim say. He pushed for an early release, alongside Altman, both believing it was important to get AI into people’s hands quickly.

OpenAI had demoed a chatbot at Microsoft Build earlier that year and generated virtually no buzz. On top of that, many of ChatGPT’s early users didn’t seem to be actually using it that much. The team shared their prototype with about 50 friends and family members. Turley “personally emailed every single one of them” every day to check in. While Fedus couldn’t recall exact figures, he estimates that about 10 percent of that early test group used it every day.

Image: Cath Virginia / The Verge, Getty Images

Later, the team would see this as an indication they’d created something with potential staying power.

“We had two friends who basically were on it from the start of their work day — and they were founders,” Kim recalls. “They were on it basically for 12 to 16 hours a day, just talking to it all day.” With just two weeks before the end of November, Schulman made the final call: OpenAI would launch ChatGPT on the last day of that month.

The team canceled their Thanksgiving plans and began a two-week sprint to public release. Much of the system was built at this point, Kim says, but its security vulnerabilities were untested. So they focused heavily on red teaming, or stress testing the system for potential safety problems. 

“If I had known it was going to be a big deal, I would certainly not want to ship it right before a winter holiday week before we were all going to go home,” Turley says. “I remember working very hard, but I also remember thinking, ‘Okay, let’s get this thing out, and then we’ll come back after the holiday to look at the learnings, to see what people want out of an AI assistant.’”

In an internal Slack poll, OpenAI employees guessed how many users they would get. Most predictions ranged from a mere 10,000 to 50,000. When someone suggested it might reach a million users, others jumped in to say that was wildly optimistic.

On launch day, they realized they’d all been incredibly wrong.

After Japan crashed their servers, and red dashboards and error messages abounded, the team was anxiously picking up the pieces and refreshing Twitter to gauge public reaction, Kim says. They believed the reaction to ChatGPT could only go one of two ways: total indifference or active contempt. They worried people might discover problematic ways to use it (like attempting to jailbreak it), and the uncertainty of how the public would receive their creation kept them in a state of nervous anticipation.

The launch was met with mixed emotions. ChatGPT quickly started facing criticism over accuracy issues and bias. Many schools rushed to ban it over cheating concerns. Some users on Reddit likened it to the early days of Google (and were shocked it was free). For its part, Google dubbed the chatbot a “code red” threat.

OpenAI would wind up surpassing its most ambitious 1-million-user target within five days of launch. Two months after its debut, ChatGPT garnered more than 30 million users.

When someone suggested it might reach a million users, others jumped in to say that was wildly optimistic.

Within weeks of ChatGPT’s November 30th launch, the team started rolling out updates incorporating user feedback (like its tendency to give overly verbose answers). The initial chaos had settled, user numbers were still climbing, and the team had a sobering realization: if they wanted to keep this momentum, things would have to change. The small group that launched a “low-key research preview” — a term that would become a running joke at OpenAI — would need to get a lot bigger.

Over the coming months and years, ChatGPT’s team would grow enormously and shift priorities — sometimes to the chagrin of many early staffers. Top researcher Jan Leike, who played a crucial role in refining ChatGPT’s conversational abilities and ensuring its outputs aligned with user expectations, quit this year to join Anthropic after claiming that “safety culture and processes have taken a backseat to shiny products” at OpenAI.

These days, OpenAI is focused on figuring out what the future of ChatGPT looks like.

“I’d be very surprised if a year from now this thing still looks like a chatbot,” Turley says, adding that current chat-based interactions would soon feel as outdated as ’90s instant messaging. “We’ve gotten pretty sidetracked by just making the chatbot great, but really, it’s not what we meant to build. We meant to build something much more useful than that.”

Increasingly powerful and expensive 

I talk with Turley over a video call as he sits in a vast conference room in OpenAI’s San Francisco headquarters that epitomizes the company’s transformation. The office is all sweeping curves and polished minimalism, a far cry from its original office that was often described as a drab, historic warehouse.

With roughly 2,000 employees, OpenAI has evolved from a scrappy research lab into a $150 billion tech powerhouse. The team is spread across numerous projects, including building underlying foundation models and developing non-text tools like the video generator Sora. ChatGPT is still OpenAI’s highest-profile product by far. Its popularity has come with a lot of headaches.

“I’d be very surprised if a year from now this thing still looks like a chatbot”

ChatGPT still spins elaborate lies with unwavering confidence, but now they’re being cited in court filings and political discourse. It has allowed for an impressive amount of experimentation and creativity, but some of its most distinctive use cases turned out to be spam, scams, and AI-written college term papers.

While some publications (including The Verge’s parent company, Vox Media) are choosing to partner with OpenAI, others like The New York Times are opting to sue it for copyright infringement. And OpenAI is burning through cash at a staggering rate to keep the lights on.

Turley acknowledges that ChatGPT’s hallucinations are still a problem. “Our early adopters were very comfortable with the limitations of ChatGPT,” he says. “It’s okay that you’re going to double check what it said. You’re going to know how to prompt around it. But the vast majority of the world, they’re not engineers, and they shouldn’t have to be. They should just use this thing and rely on it like any other tool, and we’re not there yet.”

Accuracy is one of the ChatGPT team’s three focus areas for 2025. The others are speed and presentation (i.e., aesthetics).

“I think we have a long way to go in making ChatGPT more accurate and better at citing its sources and iterating on the quality of this product,” Turley says.

OpenAI is also still figuring out how to monetize ChatGPT. Despite deploying increasingly powerful and costly AI models, the company has maintained a limited free tier and a $20 monthly ChatGPT Plus service since February 2023.

When I ask Turley about rumors of a future $2,000 subscription, or if advertising will be baked into ChatGPT, he says there is “no current plan to raise prices.” As for ads: “We don’t care about how much time you spend on ChatGPT.” 

“They should just use this thing and rely on it like any other tool, and we’re not there yet.”

“I’m really proud of the fact that we have incentives that are incredibly aligned with our users,” he says. Those who “use our product a lot pay us money, which is a very, very, upfront and direct transaction. I’m proud of that. Maybe we’ll have a technology that’s much more expensive to serve and we’re going to have to rethink that model. You gotta remain humble about where the technology is going to go.”

Only days after Turley told me this, ChatGPT did get a new $200 price tag for a pro tier that includes access to a specialized reasoning model. Its main $20 Plus tier is sticking around, but it’s clearly not the ceiling for what OpenAI thinks people will pay.

ChatGPT and other OpenAI services require vast amounts of computing power and data storage to keep running smoothly. On top of the user base OpenAI has gained through its own products, it’s poised to reach millions more people through an Apple partnership that integrates ChatGPT with iOS and macOS.

That’s a lot of infrastructure pressure for a relatively young tech company, says ChatGPT engineering lead Sulman Choudhry. “Just keeping it up and running is a very, very big feat,” he says. People love features like ChatGPT’s advanced voice mode. But scaling limitations mean there’s often a significant gap between the technology’s capabilities and what people can experience. “There’s a very, very big delta there, and that delta is sort of how you scale the technology and how you scale infrastructure.”

Even as OpenAI grapples with these problems, it’s trying to work itself deeper into users’ lives. The company is racing to build agents, or AI tools that can perform complex, multistep tasks autonomously. In the AI world, these are called tasks with a longer “time horizon,” requiring the AI to maintain coherence over a longer period while handling multiple steps. For instance, earlier this year at the company’s Dev Day conference, OpenAI showcased AI agents that could make phone calls to place food orders and make hotel reservations in multiple languages.

For Turley and others, this is where the stakes will get particularly steep. Agents could make AI far more useful by moving what it can do outside the chatbot interface. The shift could also grant these tools an alarming level of access to the rest of your digital life.

“I’m really excited to see where things go in a more agentic direction with AI,” Kim tells me. “Right now, you go to the model with your question, but I’m excited to see the model more integrated into your life and doing things proactively, and taking actions on your behalf.”

The goal of ChatGPT isn’t to be just a chatbot, says Fedus. As it exists today, ChatGPT is “pretty constrained” by its interface and compute. He says the goal is to create an entity that you can talk to, call, and trust to work for you. Fedus thinks systems like OpenAI’s “reasoning” line of models, which create a trail of checkable steps explaining their logic, could make it more reliable for these kinds of tasks.

Turley says that, contrary to some reports, “I don’t think there’s going to be such a thing as an OpenAI agent.” What you will see is “increasingly agentic functionality inside of ChatGPT,” though. “Our focus is going to be to release this stuff as gradually as possible. The last thing I want is a big bang release where this stuff can suddenly go out and do things over hours of time with all your stuff.”

“The last thing I want is a big bang release”

By ChatGPT’s third anniversary next year, OpenAI will probably look a lot different than it does today. The company will likely raise billions more dollars in 2025, release its next big “Orion” model, face growing competition, and have to navigate the complexity of a new US president and his AI czar.

Turley hopes 2024’s version of ChatGPT will soon feel as quaint as AOL Instant Messenger. A year from now, we’ll probably laugh at how basic it was, he says. “Remember when all we could do was ask it questions?”


Apple’s latest AirPods are already on sale for $99 before Prime Day


Amazon Prime Day kicks off tomorrow, July 8th, but you don’t have to wait until then to pick up Apple’s latest pair of AirPods at a discount. Right now, the AirPods 4 are available for around $99 ($30 off) at Amazon, Best Buy, and Walmart, while the AirPods 4 with noise cancellation are going for around $149 ($30 off) at Amazon, Best Buy, and Walmart. That’s within $10 of the lowest price we’ve seen on the ANC model and matches the lowest price to date on the base pair.

Both versions of Apple’s current-gen earbuds feature shorter stems and larger buds than previous models, allowing them to accommodate a broader range of ear shapes. The open-style earbuds use a hard plastic body that doesn’t create a tight seal inside your ear, which means they sacrifice some bass response compared to gummy-tipped earbuds. Hardshell earbuds won’t create pressure in your ear, though, a sensation that can become uncomfortable after listening to music for a few hours.

Overall, the fourth-gen AirPods sound better than previous models due to a custom amplifier and new acoustic architecture. Audio quality is somewhat subjective and largely depends on how the music was recorded, mixed, and mastered; however, former Verge staffer Chris Welch noted in his review that he was pleased with the sound of Apple’s latest pair of wireless earbuds. If you’re upgrading from an older pair, you’ll notice a difference.

The AirPods 4 run on Apple’s H2 chip, which is required for Voice Isolation, a feature that reduces background noise and amplifies the volume of your voice during calls. If you’re using an iPhone, you can say “Hey Siri” to invoke Apple’s smart assistant to place calls, hear and return messages, and play music. You can also locate the earbuds using the Find My app on Apple devices if they’re misplaced.

The entry-level model can last up to five hours on a single charge and can be fully charged five times using the included USB-C charging case (the ANC model also offers wireless charging). Both pairs of earbuds are also IP54-rated for dust, sweat, and water resistance, ensuring you can wear them safely during workouts. Needless to say, the AirPods 4 are excellent earbuds at their current price, whether you opt for the model with active noise cancellation or not.


How micro-robots may soon treat your sinus infections


A breakthrough in medical technology could soon change how sinus infections are treated. Scientists have created micro-robots for sinus infection treatment that can enter the nasal cavity, eliminate bacteria directly at the source, and exit without harming surrounding tissue. This drug-free, targeted approach may reduce our dependence on antibiotics.


A woman with a sinus infection. (Kurt “CyberGuy” Knutsson)

What are micro-robots for sinus infection treatment?

These microscopic robots are smaller than a speck of dust. They are made of magnetic particles enhanced with copper atoms. Doctors insert them through a narrow duct in the nostril. Once inside, the micro-robots are guided by magnetic fields to reach the infected area.

At that point, a fiber optic light heats the particles and triggers a chemical reaction. This reaction breaks through thick mucus and destroys harmful bacteria at the infection site. As a result, treatment becomes faster, more precise, and far less invasive.

This latest advancement comes from a collaboration of researchers at the Chinese University of Hong Kong, along with universities in Guangxi, Shenzhen, Jiangsu, Yangzhou, and Macau. Their work, published in “Science Robotics,” has helped move micro-robotic medical technology closer to real-world applications. 

Why use micro-robots instead of antibiotics?

Traditional antibiotics circulate throughout the entire body. In contrast, micro-robots target only the infected area. This reduces side effects and lowers the risk of antibiotic resistance. Furthermore, patients may recover faster because the treatment goes straight to the source.

A woman with a sinus infection. (Kurt “CyberGuy” Knutsson)

Are micro-robots safe?

So far, animal trials have shown promising results. Micro-robots successfully cleared infections in pig sinuses and live rabbits, without causing tissue damage. However, scientists still need to ensure that every robot exits the body after treatment. Leftover particles could pose long-term risks.

In addition, public acceptance remains a challenge. The idea of tiny machines inside the body makes some people uncomfortable. Nevertheless, experts believe those fears will fade over time.

What other uses are possible?

Researchers are already exploring how micro-robots could treat infections in the bladder, stomach, intestines, and bloodstream. Several teams around the world are working to make the technology more advanced and adaptable for deep internal use. If successful, these innovations could revolutionize the way we fight bacteria in the human body.

A doctor examining a woman with a sinus infection. (Kurt “CyberGuy” Knutsson)

Kurt’s key takeaways

The rise of micro-robots for sinus infection treatment marks a major shift in medical care. By offering precise, non-invasive therapy without antibiotics, this method could redefine how infections are treated. With continued research and testing, these tiny tools may soon become powerful allies in modern medicine.

Would you let microscopic robots crawl through your sinuses if it meant never needing antibiotics again? Let us know by writing to us at Cyberguy.com/Contact.


Copyright 2025 CyberGuy.com. All rights reserved.


Cyberpunk Edgerunners 2 will be even sadder and bloodier


The new season will be directed by Kai Ikarashi, who also directed episode six of the first season, “Girl on Fire.” There’s no word yet on when Cyberpunk: Edgerunners 2 will premiere, but the team did show off new poster artwork. A trailer will debut later tonight at 8:30PM PT during a panel for the animation studio Trigger.

Showrunner and writer Bartosz Sztybor said during Friday’s panel that for season one, “I just wanted to make the whole world sad… when people are sad, I’m a bit happy,” and that this new 10-episode season will be “…of course, sadder, but it will be also darker, more bloody, and more raw.”

A brief summary of the follow-up series tells fans what to expect following the end of David’s story in season one:

Cyberpunk: Edgerunners 2 presents a new standalone 10-episode story from the world of Cyberpunk 2077 — a raw chronicle of redemption and revenge. In a city that thrives in the spotlight of violence, one question remains: when the world is blinded by spectacle, what extremes do you have to go to make your story matter?
