Technology

Redbox’s disc rentals are over

A judge overseeing Redbox owner Chicken Soup for the Soul Entertainment’s bankruptcy case granted a request Wednesday to convert it from Chapter 11 to Chapter 7 bankruptcy, according to Lowpass’ Janko Roettgers and The Wall Street Journal. The company’s lawyers said Chicken Soup for the Soul Entertainment will lay off its remaining 1,000 employees and liquidate its businesses, including its streaming operations and the roughly 24,000 disc kiosks that have rented out DVDs, Blu-rays, and video games for years.

According to Roettgers, Judge Thomas Horan said, “There is no means to continue to pay employees, pay any bills, otherwise finance this case. It is hopelessly insolvent… Given the fact that there may also be at least the possibility of misappropriation of funds that were held in trust for employees, there is more than ample reason why this case should be converted.”

In addition to operating Redbox, Chicken Soup for the Soul Entertainment also manages brands like Crackle and Screen Media. (Note that Chicken Soup for the Soul Entertainment is a part of Chicken Soup for the Soul LLC; the broader company isn’t a part of this bankruptcy case, according to the WSJ.)

Chicken Soup for the Soul Entertainment didn’t immediately reply to a request for comment.


Technology

Here’s how much Valve pays its staff — and how few people it employs

Valve is a famously secretive company with an enormous influence on the gaming industry, particularly because it runs the massive PC gaming storefront Steam. But despite that influence, Valve isn’t a large organization on the scale of EA or Riot Games, which employ thousands: according to leaked data we’ve seen, as of 2021, Valve employed just 336 staffers.

The data was included as part of an otherwise heavily redacted document from Wolfire’s antitrust lawsuit against Valve. As spotted by SteamDB creator Pavel Djundik, some data in the document was viewable despite the black redaction boxes, including Valve’s headcount and gross pay across various parts of the company over 18 years, and even some data about its gross margins that we weren’t able to uncover fully.

The employee data starts in 2003, a few years after Valve’s 1996 founding and the same year Valve launched Steam, and runs through 2021. The data breaks Valve employees into four groups: “Admin,” “Games,” “Steam,” and, starting in 2011, “Hardware.”

If you want to sift through the numbers yourself, I’ve included a full table of the data, sorted by year and category, at the end of this story.

One data point I found interesting: Valve’s “Games” payroll spending peaked in 2017 at $221 million (the company didn’t release any new games that year, but that spending could have gone toward supporting titles like Dota 2 and developing new ones like Artifact); by 2021, it was down to $192 million. Another: as of 2021, Valve employed just 79 people to run Steam, one of the most influential gaming storefronts on the planet.
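For readers who do want to dig in, here’s a rough idea of how you could slice a table like this once it’s in a CSV. This is illustrative only; the file name and column labels (year, category, headcount, gross_pay) are hypothetical stand-ins for the figures described above, not the leaked document’s actual layout.

```python
# Illustrative sketch: slicing a hypothetical export of the leaked table with pandas.
import pandas as pd

df = pd.read_csv("valve_payroll.csv")  # hypothetical CSV export of the table

# Gross "Games" payroll by year, to locate the 2017 peak mentioned above.
games = df[df["category"] == "Games"].set_index("year")["gross_pay"]
print(games.idxmax(), games.max())

# Headcount per category in 2021 (e.g. just 79 people on Steam).
print(df[df["year"] == 2021].groupby("category")["headcount"].sum())
```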

“Hardware,” to my surprise, has been a relatively small part of the company, with just 41 employees paid a gross of more than $17 million in 2021. But I’m guessing Valve now employs more hardware-focused staffers following the runaway success of the Steam Deck. In November 2023, Valve’s Pierre-Loup Griffais told The Verge that he thinks “we’re firmly in the camp of being a full fledged hardware company by now.”

The small headcount across the board helps explain why Valve’s product list is so limited despite its immense business as the de facto PC gaming platform. It has had to get outside help on hardware and software, working with other companies to build Steam boxes and controllers. (The company’s flat structure may have something to do with it, too.)

Valve’s small staff has also been a sticking point for Wolfire. When it filed its lawsuit in 2021, Wolfire alleged that Valve “…devotes a miniscule percentage of its revenue to maintaining and improving the Steam Store.” Valve, as a private company, doesn’t have to share its headcount or financials, but Wolfire estimated that Valve had roughly 360 employees (a number likely sourced from Valve itself in 2016) and that per-employee profit was around $15 million per year.
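For a sense of scale, here’s the back-of-the-envelope arithmetic those two estimates imply; both inputs come from Wolfire’s filing, not from confirmed Valve figures.

```python
# Back-of-the-envelope check of Wolfire's claim (both inputs are Wolfire's estimates).
employees = 360                       # Wolfire's estimated headcount
profit_per_employee = 15_000_000      # roughly $15 million per employee per year
implied_annual_profit = employees * profit_per_employee
print(f"${implied_annual_profit:,}")  # about $5.4 billion a year
```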

Even if that $15 million number isn’t exactly right, Valve, in its public employee handbook, says that “our profitability per employee is higher than that of Google or Amazon or Microsoft.” A document from the Wolfire lawsuit revealed Valve employees discussing just how much higher — though the specific number for Valve employees is redacted.

While we haven’t seen any leaked profit numbers from this new headcount and payroll data, the figures give a more detailed picture of how much Valve is spending on its staff — which, given the massive popularity of Steam, is probably still just a fraction of the money the company is pulling in.

Valve didn’t immediately reply to a request for comment. After we reached out, the court pulled the document from the docket.

Sean Hollister contributed reporting. 

Technology

Would you want to chat with this creepy-looking Lego head powered by AI?

Imagine a Lego creation that can not only move but also see, hear and talk back to you. 

That’s exactly what Creative Mindstorms has achieved with Dave, the world’s most advanced artificial intelligence Lego robotic head. 

Created over several months, this robotic head showcases the incredible potential of combining Lego bricks with cutting-edge AI technology.

AI Lego robotic head. (Creative Mindstorms)

Dave’s AI brain

What truly sets Dave apart is his integration with ChatGPT. This lets Dave engage in natural, flowing conversations, making interactions feel remarkably lifelike. He can even play games like rock-paper-scissors and respond in real time, creating a seamless dialogue experience.
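Creative Mindstorms hasn’t published Dave’s code, so the sketch below is only an illustration of the general pattern of wiring a robot to a conversational API. The model name and the microphone/speaker steps are placeholders, not details from the actual build.

```python
# Minimal sketch of a ChatGPT-driven conversation loop for a robot like Dave.
# The OpenAI client usage is standard; listening and speaking are stubbed out.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [{"role": "system",
            "content": "You are Dave, a friendly Lego robot head. Reply briefly."}]

def chat_turn(user_text: str) -> str:
    """Send one user utterance to the model and return the reply text."""
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder; any chat-capable model works
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    while True:
        text = input("You: ")              # stand-in for microphone + speech-to-text
        print("Dave:", chat_turn(text))    # stand-in for text-to-speech + jaw motion
```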

AI Lego robotic head. (Creative Mindstorms)

Adding to his impressive capabilities, Dave is also bilingual. He can communicate fluently in English and Dutch, making him the world’s first bilingual Lego robotic head. This feature showcases the potential for AI-powered Lego creations to bridge language barriers and enhance global communication.

AI Lego robotic head plays games like rock-paper-scissors. (Creative Mindstorms)

The creative process

Building Dave was no small feat. The creator spent weeks designing complex mechanisms, such as the compact system that allows the eyes to move in multiple directions. The mouth mechanism alone took two weeks to perfect and involves more gears than the rest of the head.

AI Lego robotic head mechanisms. (Creative Mindstorms)

How Dave works

Dave’s lifelike movements are powered by an intricate system of motors and gears. His eyes can move up, down, and side to side, while his eyebrows and mouth corners are also articulated to convey a range of emotions. The eyes, which are probably the most complicated parts of the machine, are connected to each other on a horizontal shaft with vertical axles on both ends. They can be turned up and down with a large motor and side to side using a rack and pinion setup.

Dave’s jaw is a simple hinge with a motor pushing and pulling it, while the corners of the mouth are moved by a lift arm that can be rotated up or down. But Dave isn’t just about hardware: he’s brought to life by nearly 1,100 lines of code that let him track hands and faces, recognize faces and objects, read text, count, estimate emotion, age, and gender, and even tell the time and weather.
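Again, Dave’s actual code isn’t public, but face tracking of this kind is commonly built on an off-the-shelf vision library. The sketch below uses OpenCV’s bundled Haar cascade purely to illustrate the technique; it is not Creative Mindstorms’ implementation.

```python
# Minimal face-tracking sketch in the spirit of what Dave does (illustration only).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
camera = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # A robot head would translate (x, y) into motor commands to aim the eyes.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

camera.release()
cv2.destroyAllWindows()
```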

AI Lego robotic head mechanisms. (Creative Mindstorms)

Challenges and solutions

One of the biggest challenges was creating Dave’s hair. After struggling with Lego bricks, the creator chose a more practical solution — a wig. This creative workaround required some adjustments to the head’s shape but ultimately proved successful.

AI Lego robotic head and its creator. (Creative Mindstorms)

Kurt’s key takeaways

Dave represents a significant step forward in the world of Lego robotics and AI integration. While he’s currently a unique creation and not available for public purchase, he demonstrates the incredible possibilities that arise when creativity, engineering, and artificial intelligence converge.

What innovative application or feature would you like to see implemented in future AI-powered Lego creations? Let us know by writing us at Cyberguy.com/Contact


Technology

OpenAI is plagued by safety concerns

OpenAI is a leader in the race to develop AI as intelligent as a human. Yet employees continue to show up in the press and on podcasts to voice grave concerns about safety at the $80 billion nonprofit research lab. The latest comes from The Washington Post, where an anonymous source claimed OpenAI rushed through safety tests and celebrated its product before ensuring its safety.

“They planned the launch after-party prior to knowing if it was safe to launch,” an anonymous employee told The Washington Post. “We basically failed at the process.”

Safety issues loom large at OpenAI — and seem to just keep coming. Current and former employees at OpenAI recently signed an open letter demanding better safety and transparency practices from the startup, not long after its safety team was dissolved following the departure of cofounder Ilya Sutskever. Jan Leike, a key OpenAI researcher, resigned shortly after, claiming in a post that “safety culture and processes have taken a backseat to shiny products” at the company.

Safety is core to OpenAI’s charter, with a clause that claims OpenAI will assist other organizations to advance safety if AGI is reached at a competitor, instead of continuing to compete. It claims to be dedicated to solving the safety problems inherent to such a large, complex system. OpenAI even keeps its proprietary models private, rather than open (causing jabs and lawsuits), for the sake of safety. The warnings make it sound as though safety has been deprioritized despite being so paramount to the culture and structure of the company.

“We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” OpenAI spokesperson Taya Christianson said in a statement to The Verge. “Rigorous debate is critical given the significance of this technology, and we will continue to engage with governments, civil society and other communities around the world in service of our mission.” 

The stakes around safety, according to OpenAI and others studying the emergent technology, are immense. “Current frontier AI development poses urgent and growing risks to national security,” a report commissioned by the US State Department in March said. “The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.”

The alarm bells at OpenAI also follow the boardroom coup last year that briefly ousted CEO Sam Altman. The board said he was removed due to a failure to be “consistently candid in his communications,” leading to an investigation that did little to reassure the staff.

OpenAI spokesperson Lindsey Held told the Post the GPT-4o launch “didn’t cut corners” on safety, but another unnamed company representative acknowledged that the safety review timeline was compressed to a single week. We “are rethinking our whole way of doing it,” the anonymous representative told the Post. “This [was] just not the best way to do it.”

In the face of rolling controversies (remember the Her incident?), OpenAI has attempted to quell fears with a few well-timed announcements. This week, it announced it is teaming up with Los Alamos National Laboratory to explore how advanced AI models, such as GPT-4o, can safely aid in bioscientific research, and in the same announcement it repeatedly pointed to Los Alamos’s own safety record. The next day, an anonymous spokesperson told Bloomberg that OpenAI created an internal scale to track the progress its large language models are making toward artificial general intelligence.

This week’s safety-focused announcements from OpenAI appear to be defensive window dressing in the face of growing criticism of its safety practices. It’s clear that OpenAI is in the hot seat, but public relations efforts alone won’t suffice to safeguard society. What truly matters is the potential impact on people beyond the Silicon Valley bubble if OpenAI, as insiders claim, keeps developing AI without strict safety protocols: the average person has no say in the development of privatized AGI, and yet has no choice in how protected they’ll be from OpenAI’s creations.

“AI tools can be revolutionary,” FTC chair Lina Khan told Bloomberg in November. But “as of right now,” she said, there are concerns that “the critical inputs of these tools are controlled by a relatively small number of companies.”

If the numerous claims about its safety protocols are accurate, they raise serious questions about OpenAI’s fitness for the role of steward of AGI, a role the organization has essentially assigned to itself. Allowing one group in San Francisco to control potentially society-altering technology is cause for concern, and the demand for transparency and safety, even from within its own ranks, is more urgent than ever.
