Commentary: A leading roboticist punctures the hype about self-driving cars, AI chatbots and humanoid robots

It may have come to your attention that we are inundated with technological hype. Self-driving cars, human-like robots and AI chatbots all have been the subject of sometimes outlandishly exaggerated predictions and promises.

So we should be thankful for Rodney Brooks, an Australian-born technologist who has made it one of his missions in life to deflate the hyperbole about these and other supposedly world-changing technologies offered by promoters, marketers and true believers.

As I’ve written before, Brooks is nothing like a Luddite. Quite the contrary: He was a co-founder of iRobot, the maker of the Roomba robotic vacuum cleaner, though he stepped down as the company’s chief technology officer in 2008 and left its board in 2011. He’s a co-founder and chief technology officer of Robust.AI, which makes robots for factories and warehouses, and the former director of the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology.

Having ideas is easy. Turning them into reality is hard. Turning them into being deployed at scale is even harder.

— Rodney Brooks

In 2018, Brooks published a post of dated predictions about the course of major technologies, promising to revisit them annually for 32 years, by which point he would be 95. He focused on technologies that were then — and still are — the cynosures of public discussion, including self-driving cars, human space travel, AI bots and humanoid robots.

“Having ideas is easy,” he wrote in that introductory post. “Turning them into reality is hard. Turning them into being deployed at scale is even harder.”

Brooks slotted his predictions into three pigeonholes: NIML, for “not in my lifetime”; NET, for “no earlier than” some specified date; and “by” some specified date.
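
To make the scheme concrete, here is a small, purely illustrative Python sketch — my own toy encoding, not anything Brooks published — of how predictions in those three buckets might be recorded and scored:

```python
# Illustrative only: a toy encoding of Brooks' three prediction buckets.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Kind(Enum):
    NIML = "not in my lifetime"
    NET = "no earlier than"
    BY = "by"

@dataclass
class Prediction:
    claim: str
    kind: Kind
    year: Optional[int] = None  # NIML predictions carry no date

    def judge(self, happened_year: Optional[int]) -> str:
        """Score the prediction once we know whether (and when) the event occurred."""
        if happened_year is None:
            return "still holding"
        if self.kind is Kind.NIML:
            return "wrong: it happened after all"
        if self.kind is Kind.NET:
            return "right" if happened_year >= self.year else "beaten: it came early"
        return "right" if happened_year <= self.year else "too optimistic"

# Example from the column: elder-care robots, NET 2028; no such robot yet.
p = Prediction("robot providing multi-task physical assistance to the elderly",
               Kind.NET, 2028)
print(p.judge(None))  # -> "still holding"
```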

On Jan. 1 he published his eighth annual predictions scorecard. He found that over the years “my predictions held up pretty well, though overall I was a little too optimistic.”

For example, in 2018 he predicted “a robot that can provide physical assistance to the elderly over multiple tasks [e.g., getting into and out of bed, washing, using the toilet, etc.]” wouldn’t appear earlier than 2028; as of New Year’s Day, he writes, “no general purpose solution is in sight.”

The first “permanent” human colony on Mars would come no earlier than 2036, he wrote then, which he now calls “way too optimistic.” He now envisions a human landing on Mars no earlier than 2040, and the settlement no earlier than 2050.

A robot that seems “as intelligent, as attentive, and as faithful, as a dog” — no earlier than 2048, he conjectured in 2018. “This is so much harder than most people imagine it to be,” he writes now. “Many think we are already there; I say we are not at all there.” His verdict on a robot that has “any real idea about its own existence, or the existence of humans in the way that a 6-year-old understands humans” — “Not in my lifetime.”

Brooks points out that one way high-tech promoters finesse their exaggerated promises is through subtle redefinition. That has been the case with “self-driving cars,” he writes. Originally the term referred to “any sort of car that could operate without a driver on board, and without a remote driver offering control inputs … where no person needed to drive, but simply communicated to the car where it should take them.”

Waymo, the largest purveyor of self-driven transport, says on its website that its robotaxis are “the embodiment of fully autonomous technology that is always in control from pickup to destination.” Passengers “can sit in the back seat, relax, and enjoy the ride with the Waymo Driver getting them to their destination safely.”

Brooks challenges this claim. One hole in the fabric of full autonomy, he observes, became clear Dec. 20, when a power blackout blanketing San Francisco stranded much of Waymo’s robotaxi fleet on the streets. Waymos, which rely on reading traffic lights, clogged intersections when the signals went dark.

The company later acknowledged its vehicles occasionally “require a confirmation check” from humans when they encounter blacked-out traffic signals or other confounding situations. The Dec. 20 blackout, Waymo said, “created a concentrated spike in these requests,” resulting in “a backlog that, in some cases, led to response delays contributing to congestion on already-overwhelmed streets.”

It’s also known that Waymo pays humans to physically deal with vehicles immobilized by — for example — a passenger’s failure to fully close a car door when exiting. These workers can be summoned via the third-party app Honk, which is chiefly used by tow truck operators to find stranded customers.

“Current generation Waymos need a lot of human help to operate as they do, from people in the remote operations center to intervene and provide human advice for when something goes wrong, to Honk gig workers scampering around the city,” Brooks observes.

Waymo told me its claim of “fully autonomous” operation is based on the fact that the onboard technology is always in control of its vehicles. In confusing situations the car will call on Waymo’s “fleet response” team of humans, asking them to choose which of several optional paths is the best one. “Control of the vehicle is always with the Waymo Driver” — that is, the onboard technology, spokesman Mark Lewis told me. “A human cannot tele-operate a Waymo vehicle.”
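
To picture the division of labor Waymo describes, here is a minimal, hypothetical sketch. None of these names are Waymo’s actual interfaces; the logic is only my reading of the “fleet response” pattern, in which the onboard planner generates the options and a remote human merely confirms a choice:

```python
# A hypothetical sketch of the "fleet response" pattern described above:
# the onboard planner generates the options and stays in control; a remote
# human only confirms a choice. No name here is Waymo's real API.
from dataclasses import dataclass

@dataclass
class Path:
    description: str
    risk: float  # planner's own risk estimate, 0.0 (safe) to 1.0 (risky)

def ask_remote_operator(options: list[str]) -> int:
    """Stand-in for the remote confirmation request; here the simulated
    operator simply confirms the first listed option."""
    return 0

def choose_path(candidates: list[Path], confidence_threshold: float = 0.2) -> Path:
    """Drive autonomously when confident; ask for human confirmation otherwise."""
    best = min(candidates, key=lambda p: p.risk)
    if best.risk <= confidence_threshold:
        return best  # ordinary case: no human involved
    # Confounding situation (say, a dark traffic signal): the car asks which
    # of its own candidate paths to take. The human never steers the vehicle.
    idx = ask_remote_operator([p.description for p in candidates])
    return candidates[idx]

chosen = choose_path([Path("wait for signal", 0.6),
                      Path("treat as four-way stop", 0.4)])
print(chosen.description)  # -> "wait for signal" (operator-confirmed)
```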

As a pioneering robot designer, Brooks is particularly skeptical about the tech industry’s fascination with humanoid robots. He writes from experience: In 1998 he was building humanoid robots with his graduate students at MIT. Back then he asserted that people would be naturally comfortable with “robots with humanoid form that act like humans; the interface is hardwired in our brains,” and that “humans and robots can cooperate on tasks in close quarters in ways heretofore imaginable only in science fiction.”

Since then it has become clear that general-purpose robots that look and act like humans are chimerical. In fact, in many contexts they’re dangerous. Among the unsolved problems in robot design is that no one has created a robot with “human-like dexterity,” he writes. Robotics companies promoting their designs haven’t shown that their proposed products have “multi-fingered dexterity where humans can and do grasp things that are unseen, and grasp and simultaneously manipulate multiple small objects with one hand.”

Two-legged robots have a tendency to fall over and “need human intervention to get back up,” like tortoises fallen on their backs. Because they’re heavy and unstable, they are “currently unsafe for humans to be close to when they are walking.”

(Brooks doesn’t mention this, but even in the 1960s the creators of “The Jetsons” understood that domestic robots wouldn’t rely on legs — their robot maid, Rosie, tooled around the household on wheels, an insight that came as second nature to animators 60 years ago but seems to have been forgotten by today’s engineers.)

As Brooks observes, “even children aged 3 or 4 can navigate around cluttered houses without damaging them. … By age 4 they can open doors with door handles and mechanisms they have never seen before, and safely close those doors behind them. They can do this when they enter a particular house for the first time. They can wander around and up and down and find their way.

“But wait, you say, ‘I’ve seen them dance and somersault, and even bounce off walls.’ Yes, you have seen humanoid robot theater.”

Brooks’ experience with artificial intelligence gives him important insights into the shortcomings of today’s crop of large language models — that’s the technology underlying contemporary chatbots — what they can and can’t do, and why.

“The underlying mechanism for Large Language Models does not answer questions directly,” he writes. “Instead, it gives something that sounds like an answer to the question. That is very different from saying something that is accurate. What they have learned is not facts about the world but instead a probability distribution of what word is most likely to come next given the question and the words so far produced in response. Thus the results of using them, uncaged, is lots and lots of confabulations that sound like real things, whether they are or not.”
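
Brooks’ point is easy to demonstrate. Here is a toy next-word sampler — a hand-written probability table standing in for a trained model — that produces fluent continuations with no regard for truth:

```python
# A toy illustration of the mechanism Brooks describes: the "model" below is
# a hand-written table of next-word probabilities, standing in for a trained
# LLM. Nothing in the loop checks the output against the world.
import random

NEXT_WORD = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"France": 0.5, "Mars": 0.3, "Atlantis": 0.2},
}

def continue_text(words: list[str], steps: int = 2) -> list[str]:
    """Repeatedly sample a likely next word given the last two words."""
    for _ in range(steps):
        dist = NEXT_WORD.get(tuple(words[-2:]))
        if dist is None:
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return words

# Often prints "the capital of France", but can just as fluently produce
# "the capital of Atlantis": a confident-sounding confabulation.
print(" ".join(continue_text(["the", "capital"])))
```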

The solution is not to “train” LLM bots with more and more data in the hope that their databases will eventually grow large enough to make fabrications unnecessary; Brooks considers that the wrong approach. The better option is to purpose-build LLMs to fulfill specific needs in specific fields: bots specialized for software coding, for instance, or hardware design.

“We need guardrails around LLMs to make them useful, and that is where there will be a lot of action over the next 10 years,” he writes. “They cannot be simply released into the wild as they come straight from training. … More training doesn’t make things better necessarily. Boxing things in does.”
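
What “boxing in” might look like in practice is simple to sketch. In the hypothetical example below — not a description of any real product — an LLM call (stubbed out here) is confined to a single narrow task, and its output is rejected unless it passes a mechanical check:

```python
# A hypothetical sketch of "boxing in" an LLM for one narrow job. The
# call_llm stub stands in for any chat-completion API; the guardrails are
# the point: constrain the input, mechanically validate the output.
import re

ON_TOPIC = re.compile(r"regular expression|\bregex\b", re.IGNORECASE)

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; returns a canned pattern here."""
    return r"\d{4}-\d{2}-\d{2}"

def regex_helper(question: str) -> str:
    """Answer only regex questions, and only with patterns that compile."""
    if not ON_TOPIC.search(question):
        return "Out of scope: this assistant only handles regular expressions."
    answer = call_llm(question)
    try:
        re.compile(answer)  # guardrail: reject confabulated, non-compiling patterns
    except re.error:
        return "The model produced an invalid pattern; refusing to answer."
    return answer

print(regex_helper("Give me a regular expression for an ISO date."))
```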

Brooks’ all-encompassing theme is that we tend to overestimate what new technologies can do and underestimate how long it takes for any new technology to scale up to usefulness. The hardest problems are almost always the last ones to be solved; people tend to think that new technologies will continue to develop at the speed that they did in their earliest stages.

That’s why the march to full self-driving cars has stalled. It’s one thing to equip cars with lane-change warnings or cruise control that adjusts to a slower car ahead; it’s quite another to reach Level 5 autonomy as defined by the Society of Automotive Engineers, in which the vehicle can drive itself in all conditions without a human ever required to take the wheel. That milestone may be decades away; no Level 5 vehicles are in general use today.
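
For reference, the SAE’s six levels run from no automation to full autonomy. The paraphrased summary below — my wording, not the standard’s exact text — shows how far Level 5 sits from today’s driver-assistance features:

```python
# Paraphrased summaries of the six SAE J3016 driving-automation levels
# (not the standard's exact text), to show the distance between today's
# driver-assistance features and true Level 5.
SAE_LEVELS = {
    0: "No automation: the human does all the driving.",
    1: "Driver assistance: steering OR speed support, e.g. adaptive cruise.",
    2: "Partial automation: steering AND speed support; human must supervise.",
    3: "Conditional automation: self-driving in limited settings; human on standby.",
    4: "High automation: no human needed, but only within a restricted domain.",
    5: "Full automation: drives anywhere, in any conditions, no human ever needed.",
}

for level, summary in SAE_LEVELS.items():
    print(f"Level {level}: {summary}")
```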

Believing the claims of technology promoters that one or another nirvana is just around the corner is a mug’s game. “It always takes longer than you think,” Brooks wrote in his original prediction post. “It just does.”

Commentary: In two new court cases, judges find that AI does not have human intelligence

It’s becoming clearer with every passing day that the only people making a serious effort to come to grips with the implications of artificial intelligence for society aren’t legislators, or business leaders, or AI promoters themselves. They’re judges.

Indeed, in recent weeks, judges in two federal cases have drawn a line that seems to have eluded many others contemplating AI. The cases relate to copyright law and attorney-client privilege.

In both cases, the judges have effectively declared that AI bots are not human. They don’t have rights reserved for people, and their outputs don’t deserve to be treated as though they come from human intelligence or have any special high-tech standing.

Must invention remain exclusively human, or can autonomous computational systems genuinely originate ideas?

— Artist and computer scientist Stephen Thaler

There’s more to these cases than that. Both, including one that went as far as the Supreme Court, underscore the determination of AI promoters and users to push the new technology deeper into society.

Start with the more recent case. On Monday, the Supreme Court declined to take up a lawsuit in which artist and computer scientist Stephen Thaler tried to copyright an artwork that he acknowledged had been created by an AI bot of his own invention. That left in place a ruling last year by the District of Columbia Court of Appeals, which held that art created by non-humans can’t be copyrighted.

The case revolved around a 2012 painting titled “A Recent Entrance to Paradise,” depicting train tracks running under a bridge and disappearing into vegetation. Thaler wrote in his application for a copyright that the “author” of the work was his “Creativity Machine,” an AI tool, and that the work was “created autonomously by machine.”

The appellate ruling didn’t engage in artistic criticism, but the work’s artificial origin might be manifest to the discerning eye — its landscape is busy yet indistinct, sort of a melange of green and purple, and the framing doesn’t have any artistic logic — the eye doesn’t know what it’s supposed to be following. But Thaler says it’s the AI bot’s creation and wasn’t generated in response to any user prompt.

In any event, for Judge Patricia A. Millett, who wrote the opinion for a unanimous three-judge panel, the case wasn’t a close one. She cited longstanding regulations of the Copyright Office requiring that “for a work to be copyrightable, it must owe its origin to a human being.”

Millett noted that Thaler hadn’t bothered to conceal the non-human origin of “A Recent Entrance,” acknowledging in court papers that the painting “lacks human authorship.” She rejected Thaler’s argument, as had the federal trial judge who first heard the case, that the Copyright Office’s insistence that the author of a work must be human was unconstitutional. The Supreme Court evidently agreed.

Thaler told me he didn’t see the Supreme Court’s turndown as a “legal defeat.” In a LinkedIn post about the case, he wrote that the decision “represents a philosophical milestone — one that exposes how deeply our intellectual property system struggles to confront autonomous machine creativity.”

As that suggests, Thaler believes we shouldn’t distinguish how we view human creations from machine outputs. “Intelligence, creativity, and invention are not limited to human products,” he told me by email. Autonomous computational systems such as his AI program, he said, “can generate these functions independently.”

Millett’s ruling actually opened the door to admitting AI into the copyright world — but only when it’s used as a tool by a human author. What set Thaler’s case apart from those, she wrote, was his insistence that his AI bot was the “sole author of the work” (emphasis hers), “and it is undeniably a machine, not a human being.”

That brings us to the second case, which involved the question of whether an AI bot’s work should be protected under attorney-client privilege. Federal Judge Jed S. Rakoff of New York ruled, concisely, “The answer is no.”

As I’ve written in the past, Rakoff is one of our most percipient jurists about the impact of new technologies on the law. In his occasional essays for the New York Review of Books, he’s examined how a secret AI algorithm has skewed the sentencing of criminal defendants (especially Black defendants), how cryptocurrency advocates have made a tangle of existing laws on fraud, and how the misuse of cognitive neuroscience has resulted in convictions based on false memories.

In other words, Rakoff isn’t a judge you should try snowing with technological flapdoodle.

The case involved one Bradley Heppner, who was indicted by a federal grand jury for allegedly looting $150 million from a financial services company he chaired. Heppner pleaded not guilty and was released on $25-million bail. The case is pending.

According to a ruling Rakoff issued on Feb. 17, the issue before him concerned exchanges that Heppner had with Claude, the chatbot developed by the AI firm Anthropic, written versions of which were seized by the FBI when it executed a search warrant of Heppner’s property.

Knowing that an indictment was in the offing, Heppner had consulted Claude for help on a defense strategy. His lawyers asserted that those exchanges, which were set forth in written memos, were tantamount to consultations with Heppner’s lawyers; therefore, his lawyers said, they were confidential according to attorney-client privilege and couldn’t be used against Heppner in court. (They also cited the related attorney work product doctrine, which grants confidentiality to lawyers’ notes and other similar material.)

That was a nontrivial point. Heppner had given Claude information he had learned from his lawyers, and shared Claude’s responses with his lawyers.

Rakoff made short work of this argument. First, he ruled, the AI documents weren’t communications between Heppner and his attorneys, since Claude isn’t an attorney. All such privileges, he noted, “require, among other things, ‘a trusting human relationship,’” say between a client and a licensed professional subject to ethical rules and duties.

“No such relationship exists, or could exist, between an AI user and a platform such as Claude,” Rakoff observed.

Second, he wrote, the exchanges between Heppner and Claude weren’t confidential. In its terms of use, Anthropic claims the right to collect both a user’s queries and Claude’s responses, use them to “train” Claude, and disclose them to others.

Finally, Heppner wasn’t asking Claude for legal advice, but for information he could pass on to his own lawyers, or not. Indeed, when prosecutors tested Claude by asking whether it could give legal advice, the bot advised them to “consult with a qualified attorney.”

In his ruling, Rakoff did make an effort to address the broader questions judges face in dealing with AI. “Only three years after its release,” he wrote, “one prominent AI platform is being used by more than 800 million people worldwide every week. Yet the implications of AI for the law are only beginning to be explored.”

He concluded that generative artificial intelligence “presents a new frontier in the ongoing dialogue between technology and the law. … But AI’s novelty does not mean that its use is not subject to longstanding legal principles, such as those governing the attorney-client privilege and the work product doctrine.”

In this case and elsewhere, Rakoff has shown a superb grasp of technology issues. In his 2021 essay about the AI algorithm capable of sending people to jail, he put his finger on the factor that makes the very term “artificial intelligence” a misnomer.

The term, he wrote, tends to “conceal the importance of the human designer. … It is the designer who determines what kinds of data will be input into the system and from what sources they will be drawn. It is the designer who determines what weights will be given to different inputs and how the program will adjust to them. And it is the designer who determines how all this will be applied to whatever the algorithm is meant to analyze.”

He’s right. That’s why judges have had so much trouble determining whether the AI engineers feeding information into chatbots to make them seem “creative” and even “sentient” are infringing the copyrights of the original creators of that information, or creating something new.

The problem is that they’re asking the wrong question. Everything an AI bot spews out is, at the most fundamental level, the product of human creativity. The AI bots are machines, and portraying them as though they’re thinking creatures like artists or attorneys doesn’t change that, and shouldn’t.

As gas prices rise, California gets punched harder at the pump than other states

Californians are feeling more pain at the pump than drivers in any other state as the conflict with Iran pushes up prices.

Spencer Shearer was filling up his Nissan Sentra on Friday morning at the Chevron station in Brentwood near San Vicente and Montana avenues and paying a rate higher than almost anywhere else in the country: $5.55 per gallon.

“It sucks,” Shearer said as he watched his bill on the pump click toward $50.

With the continued conflict in and around Iran, gas prices are rising. In the Los Angeles area and parts of the San Francisco Bay Area, the cost of gas has cracked $5 a gallon again and is even tipping toward $6 in spots.

The spreading conflict in the Persian Gulf has had a predictable but unwelcome impact on California drivers. Californians usually pay far more for gas than people in other states.

The state’s pole position on prices is continuing with the latest surge.

The average cost of a gallon of regular gas in California is the most expensive in the country at $4.91, up 6% from a week ago and 11% from a month ago, according to AAA. The nationwide average is $3.32 per gallon.

The conflict with Iran has strangled movement through the Persian Gulf and catapulted the price of a barrel of oil.

The prices in California are higher than in other states because of higher taxes and stricter requirements for cleaner-burning, more expensive gasoline. This has been a festering issue not only for the industry but also for consumers.

Fuel marketers, gas station owners and some voters have blamed Gov. Gavin Newsom’s policies.

Gas prices at a Shell station on Foothill Boulevard. (Robert Gauthier / Los Angeles Times)

Newsom told regulators in 2021 to stop issuing fracking permits and phase out oil extraction by 2045. He also signed a bill allowing local governments to block the construction of oil and gas wells. He seemed to ease his stance last year and signed a bill allowing up to 2,000 new oil wells per year through 2036 in Kern County, which produces about three-fourths of the state’s crude oil.

As a result of policies that seem aimed at punishing oil producers, California has seen a steady decline in crude oil production, making it more reliant on oil and gasoline supplies from outside the state.

In 2024, only 23% of the crude oil refined in the state was pumped in California, with 13% from Alaska and 63% from elsewhere in the world, including about 30% from the Middle East, according to the Western States Petroleum Assn.

The primary reason gas prices in California are high is that refinery closures are reducing local supply while demand has remained high, said Zachary Leary, chief lobbyist at the Western States Petroleum Assn.

“Geopolitical events … show and highlight how fragile it is here in California,” he said.

California’s special gasoline blends are increasingly imported from overseas and can require more than a month to transport, he added.

Supply bottlenecks have been exacerbated by recent refinery closures, including the shutdown of the Phillips 66 refinery in Wilmington in October and the idling and planned closure of the Valero refinery in Benicia, which reduced refining capacity in the state by close to 20%.

It is hard to predict how long this price spike will last, said Severin Borenstein, faculty director of the Energy Institute at UC Berkeley’s Haas School of Business.

“We don’t know whether the war will widen or end quickly,” said Borenstein. “Those things will drive the price of crude.”

At the Brentwood gas station, product manager Conner Uretsky, 30, waited as his partner refueled her Toyota Prius ahead of a trip to Palm Springs. Lately, he said, surging fuel costs have made him think twice about going on road trips.

Uretsky, who moved to Los Angeles from the East Coast about six years ago, said he was initially shocked by the region’s high cost of living.

“Gas prices are crazy,” he said.

Paula, a writer who declined to share her last name, said she was “furious” at President Trump’s decision to start a war with Iran, as well as his recent actions in Venezuela and threats against Greenland and Cuba.

“If you look at who’s paying for this war, we are,” she said, pointing to the fuel price flip sign as she waited for her Volvo hybrid SUV to refuel.

Shearer said he has to be more careful with his gas budget. The business analyst tries to find the least expensive gas near his home in Los Angeles. Still, he’s gotten used to California’s high prices.

“It feels almost normal to be paying this amount,” he said.

Times staff writer Laurence Darmiento contributed to this report.

Labubu maker Pop Mart is opening U.S. headquarters in Culver City

Pop Mart, the Chinese toymaker known for its collectible Labubu dolls, reportedly plans to open a new office building in Culver City as it seeks to expand its North American presence.

The 22,000-square-foot office will serve as Pop Mart’s new U.S. headquarters, according to real estate data provider CoStar, which earlier reported the deal.

Pop Mart, founded in 2010 in Beijing, is credited with fueling the frenzy over “blind boxes” — small, collectible toys sold in packaging that keeps the exact figure inside a surprise until it is unsealed.

The toymaker, which is publicly traded on the Hong Kong Stock Exchange, has nearly 600 physical stores across 18 countries, according to its September 2025 half-year financial report.

Much of its recent growth has been concentrated in the U.S. In the first half of last year, the company opened 40 new stores, including 19 in the Americas. In Southern California, it now has stores in Westfield Century City, Glendale Galleria and Westfield UTC Mall in La Jolla.

The office building Pop Mart is moving into, named “Slash,” features leaning glass windows and a distinctive jagged design. The 1999 building was designed by the Los Angeles architect Eric Owen Moss.

Pop Mart’s decision to root itself on L.A.’s Westside comes amid Culver City’s transformation from a sleepy suburb, known mainly as the home of Sony Pictures Studios, into an urban hub, driven in part by the Expo Line station that opened in 2012.

Ikea recently announced plans to open a 40,000-square-foot store in Culver City’s historic Helms Bakery complex — its first on L.A.’s Westside — later this spring.

Big tech has played an important role in Culver City’s recent evolution. Recent additions include Apple, which has opened a studio and has been building a larger office campus; Amazon, which in 2022 unveiled a massive virtual production stage; and TikTok, which in 2020 opened a five-floor office featuring a content creation studio. Pinterest has a new office in Culver City as of last month, according to the company’s LinkedIn account.
