Technology
Meet the humanoid robot that learns from natural language, mimics human emotions
Imagine having a robot friend that can take selfies, toss a ball, eat popcorn and play air guitar.
Well, you might not have to wait too long.
Researchers at the University of Tokyo have created a robot that can do all that and more, thanks to the power of GPT-4, OpenAI's advanced large language model (LLM).
A researcher gives Alter3, a humanoid robot, verbal instructions. (University of Tokyo)
What is the Alter3 humanoid robot, and how does it work?
Alter3 is a humanoid robot that was first introduced in 2016 as a platform for exploring the concept of life in artificial systems. It has a realistic appearance and can move its upper body, head and facial muscles with 43 axes controlled by air actuators. It also has a camera in each eye that allows it to see and interact with humans and the environment.
Alter3 interacts with a human. (University of Tokyo)
But what makes Alter3 really special is that it can now use GPT-4, a deep learning model that can generate natural language text from any given prompt, to control its movements and behaviors. This means that instead of having to program every single action for the robot, the researchers can simply give it verbal instructions and let GPT-4 generate the corresponding Python code that drives the robot's android engine.
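To make the pipeline concrete, here is a minimal Python sketch of the idea: an LLM returns motion code as text, and the robot executes it against a small control API. Every name here (`set_axis`, `run_generated_code`) is a hypothetical stand-in based only on this article's description; Alter3's real control interface is not public in this story.

```python
# Minimal sketch of a text-to-motion pipeline like the one described
# above. All names are hypothetical stand-ins, not Alter3's real API.

robot_state = {}  # axis number -> target position


def set_axis(axis: int, value: float) -> None:
    """Stand-in for commanding one of Alter3's 43 air-actuator axes."""
    if not 1 <= axis <= 43:
        raise ValueError("Alter3 is described as having 43 axes")
    if not 0.0 <= value <= 1.0:
        raise ValueError("this sketch uses normalized positions only")
    robot_state[axis] = value


def run_generated_code(code: str) -> dict:
    """Execute motion code, as an LLM might return it, against the stub API.

    Sandboxing is omitted for brevity; a real system would never exec
    model output directly.
    """
    robot_state.clear()
    exec(code, {"set_axis": set_axis})
    return dict(robot_state)


# Code a model might produce for "raise the right hand, tilt the head"
generated = "set_axis(20, 0.9)\nset_axis(21, 0.7)\nset_axis(3, 0.4)"
print(run_generated_code(generated))
```

The key design point is that the LLM only ever emits calls into a fixed, documented motion API, so the robot side stays simple while the language model handles the open-ended mapping from phrases like "playful vibe" to concrete joint targets.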
For example, to make Alter3 take a selfie, the researchers can say something like:
“Create a big, joyful smile and widen your eyes to show excitement. Swiftly turn the upper body slightly to the left, adopting a dynamic posture. Raise the right hand high, simulating a phone. Flex the right elbow, bringing the phone closer to the face. Tilt the head slightly to the right, giving a playful vibe.”
And GPT-4 will produce the code that makes Alter3 do exactly that.
Alter3 mimics taking a selfie. (University of Tokyo)
What can the Alter3 humanoid robot do with GPT-4?
The researchers have tested Alter3 with GPT-4 in various scenarios, such as tossing a ball, eating popcorn, and playing air guitar. They have also experimented with different types of feedback, such as linguistic, visual, and emotional, to improve the robot’s performance and adaptability.
Alter3 mimics playing a guitar. (University of Tokyo)
One of the most interesting aspects of Alter3’s behavior is that it can learn from its own memory and from human responses. For instance, if the robot does something that makes a human laugh or smile, it will remember that and try to repeat it in the future. This is similar to how newborn babies imitate their parents’ expressions and gestures.
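The learning loop described above can be sketched as a simple scored memory: motions that drew a positive human reaction get remembered and favored later. This is an illustrative guess at the mechanism, with every name and the scoring scheme invented for the example; the paper's actual implementation is not shown in this article.

```python
# Illustrative sketch of learning from human reactions, as described
# above. The names and scoring scheme are invented for this example,
# not taken from the Alter3 paper.

POSITIVE_REACTIONS = {"laugh", "smile"}

motion_memory = []  # (motion description, score) pairs


def record_reaction(motion: str, reaction: str) -> None:
    """Remember a motion, scored by the human reaction it drew."""
    score = 1 if reaction in POSITIVE_REACTIONS else 0
    motion_memory.append((motion, score))


def motions_to_repeat():
    """Motions that earned a positive reaction, to try again later."""
    return [m for m, s in motion_memory if s > 0]


record_reaction("air guitar solo", "laugh")
record_reaction("slow wave", "neutral")
print(motions_to_repeat())  # ['air guitar solo']
```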
Alter3 mimics jogging. (University of Tokyo)
The researchers have also added some humor and personality to Alter3’s actions. In one case, the robot pretends to eat a bag of popcorn, only to realize that it belongs to the person sitting next to it. It then looks surprised and embarrassed and gestures an apology with its arms.
Alter3, the humanoid robot (University of Tokyo)
Why is this humanoid robot AI important and what are the implications?
The research team behind Alter3 believes that this is a breakthrough in the field of robotics and artificial intelligence, as it shows how large language models can be used to bridge the gap between natural language and robot control. This opens up new possibilities for human-robot collaboration and communication, as well as for creating more intelligent, adaptable, and personable robotic entities.
Alter3 mimics seeing a pretend snake. (University of Tokyo)
The paper, titled “From Text to Motion: Grounding GPT-4 in a Humanoid Robot ‘Alter3,’” was written by Takahide Yoshida, Atsushi Masumori and Takashi Ikegami and is available on the preprint server arXiv. The authors hope that their work will inspire more research and development in this direction and that one day we might be able to have robot friends that can understand us and share our interests and emotions.
Kurt’s key takeaways
Alter3 is an example of how natural language processing and robotics can work together to create pretty incredible interactions. By using GPT-4, the robot can perform a variety of tasks and behaviors based on verbal commands, without requiring extensive programming or manual control. This also allows the robot to learn from its own experience and from human feedback and to express some humor and personality. Alter3 demonstrates the potential of large language models to improve the field of robotics and artificial intelligence as well as bring us closer to having robot friends that can relate to us and entertain us.
What do you think of Alter3 and its abilities? Would you like to have a robot like that in your life? Let us know by writing us at Cyberguy.com/Contact.
For more of my tech tips & security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter.
Ask Kurt a question or let us know what stories you’d like us to cover.
Copyright 2024 CyberGuy.com. All rights reserved.
Technology
The White House has an app now, and Trump wants you to report people to ICE on it
A new official White House app on Android and iOS takes the content from the White House website and repackages it in app format. The tweet announcing the app on Friday morning appeared alongside a video joking about missile launches; the video also appears to feature an iPhone rather than the elusive Trump Phone. There’s no word about exclusive features or tie-ins with the phone or Trump Mobile services.
A handful of tabs in the app mostly replicate pages that exist on the Trump Administration’s version of the White House website, including news, livestreams, social feeds, and a gallery. A prominent “Get in Touch” button on the social feeds tab includes an option for users to submit a tip to ICE, which takes them to a tip form on the ICE website. It also includes options for texting the president, contacting the White House, or signing up for a newsletter — we could suggest some better ones.
Technology
Fox News AI Newsletter: Family turns down $26M from AI giant to keep farmland
Stacks from the Hugh L. Spurlock Generating Station are seen on June 12, 2025, in Maysville, Kentucky. (Jeff Swensen/Getty Images)
Welcome to Fox News’ Artificial Intelligence newsletter with the latest AI technology advancements.
IN TODAY’S NEWSLETTER:
– Kentucky family turns down $26M from AI giant to keep farmland that ‘fed a nation’
– Trump names David Sacks co-chair of tech advisory council, expanding AI, crypto role
– Hollywood union praises Trump’s AI policy as ‘protections for human creativity’
MOOVE ALONG: A Kentucky family reportedly rejected a massive $26 million offer from a major artificial intelligence company. The family chose instead to preserve their historic farmland, citing its legacy of helping feed the nation over corporate tech expansion.
A train sits in front of houses on the banks of the Ohio River in Maysville, Kentucky, Sept. 13, 2017. (REUTERS/Brian Snyder)
GROWING INFLUENCE: President Donald Trump has appointed David Sacks as the co-chair of his technology advisory council. This strategic move signals an expanded focus on shaping both artificial intelligence and cryptocurrency policies under the current administration’s economic and political agenda.
‘STRONGLY SUPPORT’: A major Hollywood union is offering praise for President Trump’s approach to artificial intelligence policy. The union specifically highlighted the administration’s efforts to implement protections for human creativity in the face of rapidly evolving generative AI tools in the entertainment industry.
First lady Melania Trump arrives, accompanied by a robot, to attend the “Fostering the Future Together Global Coalition Summit,” with other first spouses, at the White House, Wednesday, March 25, 2026, in Washington. (Jacquelyn Martin/AP Photo)
FUTURE FORWARD: First lady Melania Trump welcomed a humanoid robot during a historic artificial intelligence summit hosted at the White House. The event underscores the administration’s active engagement with rapidly advancing emerging technologies.
WASTE WATCH: Vice President JD Vance’s anti-fraud task force intensifies its efforts to identify and root out fraudulent activities nationwide. The ramped-up initiative follows a major enforcement action that resulted in the suspension of 70 providers in Los Angeles.
TECH SHOWDOWN: House Speaker Mike Johnson outlined two specific conditions that he argues must be met for the United States to successfully win the highly competitive global artificial intelligence race.
SIDELINING PROGRESS: Sen. John Fetterman sharply criticized a proposed moratorium on the construction of AI data centers. Fetterman argues that pausing infrastructure development would place the United States at a severe disadvantage, characterizing the proposal as a “China first” policy.
Nevada Big Blind center. (Zanskar)
EARTH’S EDGE: Fox News’ Bret Baier explores the intersection of political energy strategy and next-generation technology, reporting on how artificial intelligence is playing a crucial role in unlocking new potential for geothermal energy development across the country.
POWER PLAY: Palantir CTO Shyam Sankar addresses what he calls America’s “undeclared emergency.” The sweeping cultural and geopolitical conversation covers the threat posed by Iran, the development of deadly new U.S. weapons systems and strategic maneuvers required to avoid World War III.
CAUTION ADVISED: Apple co-founder Steve Wozniak expressed skepticism about the current state of artificial intelligence. Weighing in on the tech industry’s latest obsession, Wozniak stated plainly that he is not a fan of the technology’s current trajectory.
MONEY MATTERS: BlackRock CEO Larry Fink warned about the financial disparities potentially exacerbated by technological advancements. Fink emphasized that expanding market participation is absolutely necessary to address the growing wealth gap amid the current artificial intelligence boom.
Stay up to date on the latest AI technology advancements, and learn about the challenges and opportunities AI presents now and for the future with Fox News here.
Technology
Judge sides with Anthropic to temporarily block the Pentagon’s ban
“The Department of War’s records show that it designated Anthropic as a supply chain risk because of its ‘hostile manner through the press,’” Judge Rita F. Lin, a district judge in the northern district of California, wrote in the order, which will go into effect in seven days. “Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation.”
A final verdict could be weeks or months out.
Anthropic spokesperson Danielle Cohen said in a Thursday statement, “We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”
“I do think this case touches on an important debate,” Judge Lin said during the Tuesday hearing. “On the one hand, Anthropic is saying that its AI product, Claude, is not safe to use for autonomous lethal weapons and domestic mass surveillance. Anthropic’s position is that if the government wants to use its technology, the government has to agree not to use it for those purposes. On the other hand the Department of War is saying that military commanders have to decide what is safe for its AI to do.”
On Tuesday, Judge Lin went on to say, “It’s not my role to decide who’s right in that debate… The Department of War decides what AI product it wants to use and buy. And everyone, including Anthropic, agrees that the Department of War is free to stop using Claude and look for a more permissive AI vendor.” She added, “I see the question in this case as being … whether the government violated the law when it went beyond that.”
It all started with a memo sent by Defense Secretary Pete Hegseth on Jan. 9, calling for “any lawful use” language to be written into any AI services procurement contract within 180 days, which would include existing contracts with companies like Anthropic, OpenAI, xAI, and Google. Anthropic’s negotiations with the Pentagon stretched on for weeks, hinging on two “red lines” that the company did not want the military to use its AI for: domestic mass surveillance and lethal autonomous weapons (or AI systems with the power to kill targets with no human involvement in the decision-making process). The rollercoaster series of events that followed has included a barrage of social media insults, a formal “supply chain risk” designation with the potential to significantly handicap Anthropic’s business, competing AI companies swooping in to make deals, and an ensuing lawsuit.
With its lawsuit, Anthropic argues that it was punished for speech protected under the First Amendment, and it’s seeking to reverse the supply chain risk designation.
It’s rare, and potentially even unheard of until now, for a US company to be named a supply chain risk, a designation typically reserved for non-US companies potentially linked to foreign adversaries. Anthropic’s designation as such raised eyebrows nationwide and caused bipartisan controversy due to concerns that disagreeing with a presidential administration could lead to outsized retribution for a business in any sector.
Anthropic’s own business has been significantly affected by the designation, according to its court filings, which say that it has “received outreach from numerous outside partners … expressing confusion about what was required of them and concern about their ability to continue to work with Anthropic” and that “dozens of companies have contacted Anthropic” for guidance or information about their rights to terminate usage. Depending on the level to which the government prohibits its contractors’ work with Anthropic, the company alleged that revenue ranging from hundreds of millions to multiple billions of dollars could be at risk.
During Tuesday’s hearing, both parties had a chance to respond to Judge Lin’s questions, which were released in a document the day prior and hinged on matters like whether Hegseth lacked authority to issue certain directives and why Anthropic was named a supply chain risk. The judge also asked, in her pre-released questions, about the circumstances under which a government contractor could face termination for using Anthropic’s technology in their work — for instance, “if a contractor for the Department uses Claude Code as a tool to write software for the Department’s national security systems, would that contractor face termination as a result?”
On Tuesday, the judge also seemed to admonish the Department of War over Hegseth’s X post, which, per Anthropic’s earlier court filings, caused widespread confusion by stating that “effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
“You’re standing here saying, ‘We said it but we didn’t really mean it,’” Judge Lin said during the hearing, later pressing on the question of why Hegseth’s post went so far as to bar contractors from working with Anthropic rather than simply designating Anthropic as a supply chain risk.
In a series of questions on Tuesday, Judge Lin asked whether the Department of War plans to terminate contractors on the basis of their work with Anthropic if it’s separate from their work with the department, and a representative for the Department of War responded, “That is my understanding.”
Judge Lin asked, “Let’s say I’m a military contractor. I don’t provide IT to the military. I provide toilet paper to the military. I’m not going to be terminated for using Anthropic — is that accurate?” The representative for the Department of War responded, “For non-DoW work, that is my understanding.” But when the judge asked whether a military contractor providing IT services to the Department of War, but not for national security systems, could be terminated for using Anthropic, the representative for the Department of War did not give a concrete answer.
During the hearing, Judge Lin cited one of the amicus briefs, which she said used the term “attempted corporate murder.” She said, “I don’t know if it’s ‘murder,’ but it looks like an attempt to cripple Anthropic.”
“We are continuing to be irreparably injured by this directive,” a lawyer for Anthropic said during the hearing, citing Hegseth’s nine-paragraph X post.
In a recent court filing, the Department of War alleged that Anthropic could ostensibly “attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations” in the event it felt the military was crossing its red lines — a theoretical situation that the Pentagon said it deemed an “unacceptable risk to national security.” The judge’s pre-released questions seem to challenge that statement, or at least request more information on it, stating, “What evidence in the record shows that Anthropic had ongoing access to or control over Claude after delivering it to the government, such that Anthropic could engage in such acts of sabotage or subversion?”