After Anthropic’s weeks-long standoff with the Pentagon, the company has notched a milestone: a judge granted Anthropic a preliminary injunction in its lawsuit, which seeks to reverse its government blacklisting while the judicial process plays out.
Technology
Netflix is raising prices again
Netflix’s prices just went up, with its cheapest, ad-supported tier now reaching $8.99 / month (up from $7.99 / month), according to an updated support page spotted earlier by Android Authority. The standard and premium plans are also getting a hike, going from $17.99 to $19.99 / month and $24.99 to $26.99 / month, respectively.
Netflix didn’t share its reasoning for the price hike this time around; when it last raised prices, it cited delivering “more value for our customers.” It’s also unclear when the increase will take effect for existing subscribers. The Verge reached out to Netflix with a request for comment but didn’t immediately hear back.
Technology
Judge sides with Anthropic to temporarily block the Pentagon’s ban
“The Department of War’s records show that it designated Anthropic as a supply chain risk because of its ‘hostile manner through the press,’” Judge Rita F. Lin, a district judge in the northern district of California, wrote in the order, which will go into effect in seven days. “Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation.”
A final ruling could be weeks or months away.
Anthropic spokesperson Danielle Cohen said in a Thursday statement, “We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”
“I do think this case touches on an important debate,” Judge Lin said during the Tuesday hearing. “On the one hand, Anthropic is saying that its AI product, Claude, is not safe to use for autonomous lethal weapons and domestic mass surveillance. Anthropic’s position is that if the government wants to use its technology, the government has to agree not to use it for those purposes. On the other hand the Department of War is saying that military commanders have to decide what is safe for its AI to do.”
On Tuesday, Judge Lin went on to say, “It’s not my role to decide who’s right in that debate… The Department of War decides what AI product it wants to use and buy. And everyone, including Anthropic, agrees that the Department of War is free to stop using Claude and look for a more permissive AI vendor.” She added, “I see the question in this case as being … whether the government violated the law when it went beyond that.”
It all started with a memo sent by Defense Secretary Pete Hegseth on Jan. 9, calling for “any lawful use” language to be written into any AI services procurement contract within 180 days, which would include existing contracts with companies like Anthropic, OpenAI, xAI, and Google. Anthropic’s negotiations with the Pentagon stretched on for weeks, hinging on two “red lines” that the company did not want the military to use its AI for: domestic mass surveillance and lethal autonomous weapons (or AI systems with the power to kill targets with no human involvement in the decision-making process). The rollercoaster series of events that followed has included a barrage of social media insults, a formal “supply chain risk” designation with the potential to significantly handicap Anthropic’s business, competing AI companies swooping in to make deals, and an ensuing lawsuit.
With its lawsuit, Anthropic argues that it was punished for speech protected under the First Amendment, and it’s seeking to reverse the supply chain risk designation.
It’s rare, and potentially even unheard of until now, for a US company to be named a supply chain risk, a designation typically reserved for non-US companies potentially linked to foreign adversaries. Anthropic’s designation raised eyebrows nationwide and drew bipartisan criticism over concerns that disagreeing with a presidential administration could invite outsized retribution against a business in any sector.
Anthropic’s own business has been significantly affected by the designation, according to its court filings, which say that it has “received outreach from numerous outside partners … expressing confusion about what was required of them and concern about their ability to continue to work with Anthropic” and that “dozens of companies have contacted Anthropic” for guidance or information about their rights to terminate usage. Depending on how broadly the government prohibits its contractors’ work with Anthropic, the company alleged, between hundreds of millions and multiple billions of dollars in revenue could be at risk.
During Tuesday’s hearing, both sides had a chance to respond to Judge Lin’s questions, which were released in a document the day prior and hinged on matters like whether Hegseth lacked authority to issue certain directives and why Anthropic was named a supply chain risk. The judge also asked, in her pre-released questions, about the circumstances under which a government contractor could face termination for using Anthropic’s technology in their work — for instance, “if a contractor for the Department uses Claude Code as a tool to write software for the Department’s national security systems, would that contractor face termination as a result?”
On Tuesday, the judge also seemed to admonish the Department of War over Hegseth’s X post, which, according to Anthropic’s earlier court filings, caused widespread confusion by stating that “effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
“You’re standing here saying, ‘We said it but we didn’t really mean it,’” Judge Lin said during the hearing, later pressing on why Hegseth wrote the above post barring contractors from working with Anthropic instead of simply designating Anthropic as a supply chain risk.
In a series of questions on Tuesday, Judge Lin asked whether the Department of War plans to terminate contractors on the basis of their work with Anthropic if it’s separate from their work with the department, and a representative for the Department of War responded, “That is my understanding.”
Judge Lin asked, “Let’s say I’m a military contractor. I don’t provide IT to the military. I provide toilet paper to the military. I’m not going to be terminated for using Anthropic — is that accurate?” The representative for the Department of War responded, “For non-DoW work, that is my understanding.” But when the judge asked whether a military contractor providing IT services to the Department of War, but not for national security systems, could be terminated for using Anthropic, the representative for the Department of War did not give a concrete answer.
During the hearing, Judge Lin cited one of the amicus briefs, which she said used the term “attempted corporate murder.” She said, “I don’t know if it’s ‘murder,’ but it looks like an attempt to cripple Anthropic.”
“We are continuing to be irreparably injured by this directive,” a lawyer for Anthropic said during the hearing, citing Hegseth’s nine-paragraph X post.
In a recent court filing, the Department of Defense alleged that Anthropic could ostensibly “attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations” in the event it felt the military was crossing its red lines — a theoretical situation that the Pentagon said it deemed an “unacceptable risk to national security.” The judge’s pre-released questions seem to challenge that statement, or at least request more information on it, stating, “What evidence in the record shows that Anthropic had ongoing access to or control over Claude after delivering it to the government, such that Anthropic could engage in such acts of sabotage or subversion?”
Technology
Drone food delivery launches in New Jersey
You place a food order, check your phone, and instead of a driver pulling up, a drone lowers your meal to your front yard. That scenario is already playing out in the Garden State. But before you get too excited, this is still a limited test.
Grubhub just launched New Jersey’s first drone-powered food delivery pilot, and it is getting plenty of attention. The three-month program kicked off on March 18 in Green Brook, just a few miles from Middlesex. If you live within about 2.5 miles of the location, you may be able to try it yourself.
Even better, you will not pay anything extra to choose the drone option.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter
Grubhub launches a three-month drone delivery test in New Jersey, offering faster drop-offs with no added cost. (Grubhub)
How the drone delivery program works
The program is based out of Wonder’s Green Brook location, which operates a multi-restaurant kitchen. That means your order can come from one of 15 different food concepts, all prepared in the same place.
Here is how it works step by step:
- You order through the Grubhub app
- You select drone delivery if you are eligible
- Your food is prepared and secured by trained staff
- A drone flies it along a pre-approved route
- The order is lowered safely to the ground using a tether
You can track everything in real time, just like a regular delivery. It feels familiar, but the final step looks very different.
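The article says the pilot serves addresses within about 2.5 miles of the Green Brook hub. As a rough illustration of what such an eligibility check could look like, here is a minimal sketch; the hub coordinates and the exact cutoff are assumptions for illustration, not figures from Grubhub:

```python
import math

# Approximate coordinates for Green Brook, NJ (illustrative assumption).
HUB_LAT, HUB_LON = 40.6026, -74.4679
MAX_RANGE_MILES = 2.5  # delivery radius described in the pilot

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in miles."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def drone_eligible(lat, lon):
    """True if the address falls inside the pilot's delivery radius."""
    return haversine_miles(HUB_LAT, HUB_LON, lat, lon) <= MAX_RANGE_MILES

print(drone_eligible(HUB_LAT, HUB_LON))  # True: the hub itself is in range
```

A real service would layer FAA airspace restrictions and approved flight corridors on top of a simple radius check like this.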
Why this could be faster than your usual delivery
Timing matters when you are hungry. That is where drones may have a real advantage. Unlike drivers, drones do not deal with traffic, stoplights or parking. They fly directly to your location using optimized flight paths.
Grubhub says deliveries should arrive faster than traditional methods. While that will vary based on conditions, the goal is simple. Less waiting, more eating. This test will help the company see if that promise holds up in real neighborhoods.
New Jersey residents within range can order food by drone, with real-time tracking and tethered drop-offs. (Grubhub)
The tech behind the delivery drones
The program uses the DE-2020 drone from Dexa, a company that specializes in autonomous delivery systems.
This is not a hobby drone. It is a fully automated aircraft built for commercial use.
Key features include:
- FAA-certified operations for safety and compliance
- Secure communication systems during flight
- Controlled drop-off using a tether system
- Pre-planned routes to reduce noise and disruption
Before each flight, crews check that food is packaged and secured properly. That step helps prevent spills or issues mid-air. In short, there is a lot more going on behind the scenes than a simple takeoff and landing.
We reached out to Grubhub, and a spokesperson shared the following statement:
“Our partnership with Dexa represents a major step forward in Grubhub’s commitment to delivery innovation,” said Abhishek “PJ” Poykayil, SVP of customer delivery operations at Wonder and Grubhub. “By connecting Grubhub’s marketplace expertise, Wonder’s innovative mealtime platform, and Dexa’s expansive drone technology, we’re proud to introduce a faster and more efficient way for New Jersey diners to experience food delivery without compromising safety or reliability.”
We also reached out to Dexa for more insight into the technology behind the program. CEO and founder Beth Flippo shared the following with CyberGuy:
“At Dexa, we’re proud to be powering the underlying autonomous technology that enables this new generation of on-demand delivery. Our partnership with Grubhub brings together their industry-leading logistics network with our advanced autonomy platform, which is designed to safely navigate complex environments, optimize real-time routing, and operate reliably without the need for continuous human intervention. This is a meaningful step toward a future where autonomous systems are woven seamlessly into everyday life, from delivering food and goods to supporting transportation, infrastructure and critical services. As consumers continue to expect faster, more efficient and more sustainable options, autonomy will play a central role in meeting those expectations at scale.”
Autonomous drones designed by Dexa deliver meals from a central kitchen, bypassing traffic in a new suburban pilot program. (Grubhub)
Why companies are pushing drone delivery now
This move is not random. It is part of a bigger shift in how companies think about delivery. You and I want speed, convenience and reliability. At the same time, businesses want to reduce costs and scale faster. Drone delivery sits right in the middle of that.
It removes many of the delays tied to traditional delivery. It also opens the door to new models, especially in suburban areas where distances are manageable.
We are already seeing this play out in other parts of the country. Companies like Wing, backed by Google’s parent company Alphabet, have been testing and expanding drone deliveries for food, retail and small packages in select U.S. markets.
This New Jersey test is another step in that direction, and it shows how quickly the space is evolving.
What this means for you
Even if you are not in Green Brook, New Jersey, this still matters. Here is why:
You may get faster deliveries
If this works, shorter delivery times could become the new normal.
You could see more delivery options
Apps may soon offer choices like driver, robot or drone depending on your location.
It could change delivery costs
Right now, there is no added fee. In the future, pricing models may shift based on speed and demand.
Your neighborhood may see more drones
That raises questions about noise, safety and privacy that communities will need to address.
This is not only about food. The same technology could expand to groceries, retail and even medical supplies.
Kurt’s key takeaways
It is easy to see drone delivery as some sort of cool experiment. But something bigger is starting to take shape right above us. For the first time, the sky is becoming part of everyday delivery. Today it is takeout. Tomorrow it could be groceries, last-minute essentials or even urgent supplies. If this technology proves reliable, and we get comfortable with it, the way you get what you need could change faster than you expect. So the next time you hear a faint buzz overhead, you may want to look up. It might not be a plane. It could be your dinner on the way. The real question is not if drones will become part of daily life. It is how soon you will be tracking one to your doorstep.
Would you trust a drone to deliver your next meal? Why or why not? Let us know by writing to us at Cyberguy.com
Copyright 2026 CyberGuy.com. All rights reserved.
Technology
Roblox is changing online safety with AI
If you’ve ever wondered how platforms keep up with millions of users at once, this is where things get real. Roblox has over 144 million daily users. That scale creates a massive challenge. Harmful content does not always show up in obvious ways. Sometimes, it is the combination of things that creates the problem. Now, the company is rolling out a new system designed to catch exactly that. But first, it helps to understand what Roblox actually is.
Roblox rolls out a new AI system that analyzes entire scenes in real time to detect harmful content across its platform. (Brent Lewin/Bloomberg via Getty Images)
What is Roblox?
Roblox is an online platform where people can create, share and play games built by other users. Instead of being a single game, it is a massive ecosystem of user-generated experiences that range from simple obstacle courses to complex virtual worlds.
What makes Roblox different is how much control users have. Players are not just consuming content. They are constantly creating it in real time through avatars, text and interactive environments. That constant creation is exactly what makes moderation more complex.
A smarter way to spot harmful content
Most moderation tools look at one thing at a time. A message. An image. An avatar. That approach can miss the bigger picture. Speaking exclusively with CyberGuy, Matt Kaufman, Roblox’s chief safety officer, explained the shift clearly:
“We already moderate all of the objects in a virtual world, but how they come together and interact has long been a challenge. Our new real-time multimodal moderation system looks at an entire scene simultaneously from the user’s point of view – including 3D objects, avatars, and text – capturing all of these elements together in a specific moment to assess whether the combination of content types breaks our rules.”
This is called multimodal moderation. Instead of analyzing pieces in isolation, it looks at everything together in real time.
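To make the distinction concrete, here is a minimal, purely hypothetical sketch of per-item versus multimodal checks. The rule sets, labels, and function names are invented for illustration and are not Roblox's actual rules or code:

```python
from dataclasses import dataclass, field

# Hypothetical rule data, for illustration only.
BANNED_ALONE = {"explicit_image"}  # violates in isolation
BANNED_TOGETHER = {("freeform_drawing", "suggestive_pose")}  # only bad combined

@dataclass
class Scene:
    """A snapshot of one server from the user's point of view."""
    drawings: list = field(default_factory=list)
    avatar_states: list = field(default_factory=list)

def per_item_moderation(scene: Scene) -> bool:
    """Traditional approach: each object is checked on its own."""
    items = scene.drawings + scene.avatar_states
    return any(item in BANNED_ALONE for item in items)

def multimodal_moderation(scene: Scene) -> bool:
    """Holistic approach: also check how objects combine in the scene."""
    if per_item_moderation(scene):
        return True
    return any((d, a) in BANNED_TOGETHER
               for d in scene.drawings for a in scene.avatar_states)

scene = Scene(drawings=["freeform_drawing"], avatar_states=["suggestive_pose"])
print(per_item_moderation(scene))    # False: each item looks fine alone
print(multimodal_moderation(scene))  # True: the combination violates
```

The point of the sketch is the gap between the two functions: the same scene passes the per-item check but fails the combined one, which is exactly the class of violation Kaufman describes below.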
Why older systems were missing the problem
Here is the issue platforms have faced for years. Something can look harmless on its own. But when combined with other elements, it can become harmful or violate rules.
Kaufman puts it this way: “Traditional AI moderation systems, which moderate one object at a time, can lack context and miss combinations that could be problematic in ways that the individual items are not. This model understands the relationship between different objects and how they come together to catch nuanced violations that standard filters may miss.”
That missing context is exactly what bad actors have been exploiting.
What this new AI actually catches
This system focuses on scenarios that previously slipped through. Think about games where users can draw freely or customize avatars. A drawing alone might seem fine. An avatar alone might seem fine. But together, they could create something inappropriate.
Kaufman explains how the system handles that: “The system can detect combinations of objects that may violate our community standards. For example, some games allow free-form drawing. This real-time multimodal moderation system would look at the drawing, avatar, and 3D setting together and assess it holistically, in order to catch and shut down servers with violating content.”
Right now, the rollout is already targeting problematic avatars and inappropriate drawings.
Roblox officials say the new system aims to proactively protect children while maintaining gameplay for compliant users. (Riccardo Milani/Hans Lucas/AFP via Getty Images)
The scale is bigger than you think
This is not a small tweak. It is operating at a massive scale. Roblox says it is already shutting down about 5,000 servers per day for violations.
Kaufman says that reflects the reality of the platform: “With 144 million users connecting and creating on Roblox every single day, our safety systems must be as agile and dynamic as our creators themselves.”
He also adds an important reality check: “No system is foolproof against bad actors, so we are committed to doing our best to stay ahead of those attempting to bypass safety protocols, and we are working to scale this new multimodal system to capture and monitor 100% of playtime.”
What changes for everyday Roblox users
If you or your kids use Roblox, this system will likely work in the background without you noticing. But it changes how quickly harmful behavior gets stopped.
“When problematic behavior repeatedly occurs in a single game instance, this new system is designed to automatically detect and shut down those specific servers in real time, greatly reducing the number of users who might be exposed to that behavior.”
That last part matters. Instead of shutting down an entire game, it targets only the problem.
“By targeting only the violating server rather than the entire experience, we can help prevent violations from reaching more users while allowing well-intentioned players to continue their sessions uninterrupted.”
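As a toy illustration of that design choice (the experience and server names here are hypothetical), a targeted shutdown amounts to filtering one experience's server list rather than closing the whole experience:

```python
# One experience (game) can run on many server instances at once.
servers = {
    "obstacle_course": ["srv-1", "srv-2", "srv-3"],
}
flagged = {"srv-2"}  # instances the moderation system flagged

def remaining_servers(experience: str) -> list:
    """Servers still running after only the violating instances are shut down."""
    return [s for s in servers[experience] if s not in flagged]

print(remaining_servers("obstacle_course"))  # ['srv-1', 'srv-3']
```

Players on srv-1 and srv-3 keep their sessions; only srv-2's users are affected.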
What this means for parents
For parents, this is a big shift toward proactive safety. Instead of waiting for reports, the system acts in real time.
Kaufman explains: “We want parents to know that we aren’t just reacting to reports – we are proactively building some of the most sophisticated AI moderation systems in the world to help protect their children in real time.”
There is also an important layer of protection during gameplay: “We can now evaluate a combination of problematic text, 3D drawings, or avatar movements in real-time and shut down that specific server immediately – often before a child ever encounters it.”
Still, Roblox stresses that technology alone is not enough. “No system is perfect, and we encourage parents to talk to their children about online safety.”
Ways parents can help keep kids safe
Even with advanced AI moderation, a few simple steps can help you stay one step ahead and keep your child safer online.
1) Talk about what your child is doing online
Ask what games they play and who they interact with so you stay involved.
2) Encourage reporting anything that feels off
Remind your child to report behavior that seems inappropriate or uncomfortable.
3) Check privacy and safety settings together
Review account settings to limit who can chat or interact with your child.
4) Set clear boundaries for gameplay
Agree on rules around screen time and which types of experiences are allowed.
Roblox targets nuanced rule-breaking by analyzing avatars, text and environments together instead of in isolation. (JasonDoiy/Getty Images)
How Roblox avoids false positives
One concern with any AI system is getting it wrong. Roblox says it is actively working to improve accuracy over time.
“We have a continuous evaluation loop set up to measure false positives from the multimodal moderation system, and we are training the system with that feedback to help it catch those types of examples in the future.”
User feedback also plays a role. “Our creators and users are often the ones to spot new trends emerging… This type of reporting is the most effective way for users to help protect the community.”
AI plus human oversight still matters
Even with automation, humans are still involved. “We already use a combination of AI and a team of safety experts to review content uploaded to the platform before it is ever shown to users.”
The new system adds another layer, not a replacement. “This real-time multimodal moderation system is an additional layer and is fully automated in its evaluation of the entire scene.”
What about privacy and fairness?
Any system this powerful raises questions about privacy and overreach. Roblox says it is limiting how data is used: “Our systems and processes are designed so that data collected for safety is used only for safety purposes.”
On fairness, the company points to ongoing training and transparency: “We are focused on ensuring our safety systems are both highly effective and fair.”
They are also giving creators more visibility: “We have introduced a new chart in the creator dashboard that allows developers to see exactly how many of their game’s servers have been shut down.”
Where this is heading next
This system is just getting started. One future focus is detecting recreations of real-world events that may cross the line.
Kaufman explains why context matters here: “Standard filters might see a specific building or a line of text in isolation and not recognize a violation. However, real-time multimodal moderation can understand the relationship between an environment, the way avatars are interacting within it, and the accompanying chat.”
There is also a push to go beyond shutting down servers: “We’re working on ways to identify specific bad actors so we can remove them without disrupting the experience for the vast majority of our well-intentioned players.”
Kurt’s key takeaways
This is a major shift in how online platforms approach safety. Instead of reacting after something goes wrong, Roblox is trying to stop harmful behavior before most users ever see it. That is a big promise, especially at this scale. At the same time, it highlights a deeper question about the future of online spaces. As AI becomes more involved in moderating behavior, the balance between safety, fairness and freedom will only get more complicated.
So here is the question worth thinking about: If AI is now deciding what crosses the line in real time, how much control are we comfortable handing over to it? Let us know by writing to us at Cyberguy.com