

Fox News AI Newsletter: AI app promises to help pastors preach



Welcome to Fox News’ Artificial Intelligence newsletter with the latest AI technology advancements.


– ‘Pulpit AI’ aims to help pastors use artificial intelligence to preach beyond Sunday services

– Hacker accessed OpenAI’s internal AI details in 2023 breach: report

– Wild new way to search for anything, anywhere with Google’s Circle to Search AI feature

Pulpit AI, co-founder Jake Sweetman said, is a way for pastors to ensure that their sermons are able to live on “beyond the 90-minute Sunday service.”  (iStock)


AI FAITH: A new artificial intelligence platform aimed at helping pastors preach their sermons more effectively is set to launch later this month.

OPENAI HACK: OpenAI was reportedly breached last year by a hacker who accessed the company’s internal discussions about details of its artificial intelligence technologies.


Google’s Circle to Search AI feature  (Google)

SECRET GOOGLE SEARCH: Circle to Search is activated by long-pressing the home button or navigation bar on your Android device. Once activated, you can select any part of your screen using various gestures.

AI RACE IS ON: Artificial intelligence has become the next great domain in the theater of war, and NATO allies have made it a top priority as they look to bolster the alliance’s collective defense.


A UJ-22 Airborne (UkrJet) reconnaissance drone readies to land during a test flight in the Kyiv region on Aug. 2, 2022, prior to being sent to the front line. (Sergei Supinsky/AFP via Getty Images)

Subscribe now to get the Fox News Artificial Intelligence Newsletter in your inbox.





Stay up to date on the latest AI technology advancements and learn about the challenges and opportunities AI presents now and for the future with Fox News here.



Here’s how much Valve pays its staff — and how few people it employs




Valve is a famously secretive company with an enormous influence on the gaming industry, particularly because it runs the massive PC gaming storefront Steam. But despite that influence, Valve isn’t a large organization on par with EA or Riot Games, which employ thousands: according to leaked data we’ve seen, as of 2021, Valve employed just 336 staffers.

The data was included as part of an otherwise heavily redacted document from Wolfire’s antitrust lawsuit against Valve. As spotted by SteamDB creator Pavel Djundik, some data in the document was viewable despite the black redaction boxes, including Valve’s headcount and gross pay across various parts of the company over 18 years, and even some data about its gross margins that we weren’t able to uncover fully.

The employee data starts with 2003, which is a few years after Valve’s 1996 founding and the same year Valve launched Steam, and goes all the way up until 2021. The data breaks Valve employees into four different groups: “Admin,” “Games,” “Steam,” and, starting in 2011, “Hardware.”

If you want to sift through the numbers yourself, I’ve included a full table of the data, sorted by year and category, at the end of this story.

One data point I found interesting: Valve’s “Games” payroll spending peaked in 2017 at $221 million (the company didn’t release any new games that year, but the spending could have gone toward supporting games like Dota 2 and developing new ones like Artifact); by 2021, it was down to $192 million. Another: as of 2021, Valve employed just 79 people to run Steam, one of the most influential gaming storefronts on the planet.


“Hardware,” to my surprise, has been a relatively small part of the company, with just 41 employees paid a gross of more than $17 million in 2021. But I’m guessing Valve now employs more hardware-focused staffers following the runaway success of the Steam Deck. In November 2023, Valve’s Pierre-Loup Griffais told The Verge that he thinks “we’re firmly in the camp of being a full fledged hardware company by now.”


The small staff across the board helps explain why Valve’s product list is so limited despite its immense business as the de facto PC gaming platform. It’s had to get help on hardware and software, working with other companies to build Steam boxes and controllers. (The company’s flat structure may have something to do with it, too.)

Valve’s small staff is also something that’s been a sticking point for Wolfire. When it filed its lawsuit in 2021, Wolfire alleged that Valve “…devotes a miniscule percentage of its revenue to maintaining and improving the Steam Store.” Valve, as a private company, doesn’t have to share its headcount or financials, but Wolfire estimated that Valve had roughly 360 employees (a number likely sourced from Valve itself in 2016) and that per-employee profit was around $15 million per year.
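Taken together, those two estimates imply an enormous bottom line. As a quick back-of-envelope check, using only Wolfire’s figures rather than any confirmed numbers:

```python
# Back-of-envelope check of Wolfire's 2021 estimates: roughly 360
# employees and roughly $15 million of profit per employee imply
# total annual profit on the order of $5.4 billion.
employees = 360
profit_per_employee = 15_000_000  # USD, Wolfire's estimate

implied_annual_profit = employees * profit_per_employee
print(implied_annual_profit)  # 5400000000
```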

Even if that $15 million number isn’t exactly right, Valve, in its public employee handbook, says that “our profitability per employee is higher than that of Google or Amazon or Microsoft.” A document from the Wolfire lawsuit revealed Valve employees discussing just how much higher — though the specific number for Valve employees is redacted.


While we haven’t seen any leaked profit numbers from this new headcount and payroll data, the figures give a more detailed picture of how much Valve is spending on its staff — which, given the massive popularity of Steam, is probably still just a fraction of the money the company is pulling in.

Valve didn’t immediately reply to a request for comment. After we reached out, the court pulled the document from the docket.

Sean Hollister contributed reporting. 



Would you want to chat with this creepy-looking Lego head powered by AI?





Imagine a Lego creation that can not only move but also see, hear and talk back to you. 

That’s exactly what Creative Mindstorms has achieved with Dave, the world’s most advanced artificial intelligence Lego robotic head. 


Created over several months, this robotic head showcases the incredible potential of combining Lego bricks with cutting-edge AI technology.


AI Lego robotic head. (Creative Mindstorms)

Dave’s AI brain

What truly sets Dave apart is his integration with ChatGPT. This lets Dave engage in natural, flowing conversations, making interactions feel remarkably lifelike. He can even play games like rock-paper-scissors and respond in real time, creating a seamless dialogue experience.


AI Lego robotic head. (Creative Mindstorms)

Adding to his impressive capabilities, Dave is also bilingual. He can communicate fluently in English and Dutch, making him the world’s first bilingual Lego robotic head. This feature showcases the potential for AI-powered Lego creations to bridge language barriers and enhance global communication.


AI Lego robotic head plays games like rock-paper-scissors. (Creative Mindstorms)


The creative process

Building Dave was no small feat. The creator spent weeks designing complex mechanisms, such as the compact system that allows the eyes to move in multiple directions. The mouth mechanism alone took two weeks to perfect and involved more gears than the rest of the head combined.


AI Lego robotic head mechanisms. (Creative Mindstorms)


How Dave works

Dave’s lifelike movements are powered by an intricate system of motors and gears. His eyes can move up, down, and side to side, while his eyebrows and mouth corners are also articulated to convey a range of emotions. The eyes, which are probably the most complicated parts of the machine, are connected to each other on a horizontal shaft with vertical axles on both ends. They can be turned up and down with a large motor and side to side using a rack and pinion setup.


Dave’s jaw is a simple hinge with a motor pushing and pulling it, while the corners of the mouth are moved by a lift arm that can be rotated up or down. But Dave isn’t just about hardware. He’s brought to life by nearly 1,100 lines of code that enable him to track hands and faces, recognize faces and objects, read text, count, estimate emotions, age and gender, plus tell time and weather.
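One common way such a system turns an estimated emotion into physical movement is a simple lookup from emotion labels to motor targets. The sketch below is hypothetical, with made-up labels and angles, and is not Dave’s actual code:

```python
# Hypothetical mapping from an estimated emotion label to articulation
# targets (in degrees) for eyebrow and mouth-corner motors on a
# Dave-style robot head. Labels and angle values are illustrative only.
EXPRESSIONS = {
    "happy":     {"eyebrows": 10,  "mouth_corners": 25},
    "sad":       {"eyebrows": -15, "mouth_corners": -20},
    "surprised": {"eyebrows": 30,  "mouth_corners": 5},
    "neutral":   {"eyebrows": 0,   "mouth_corners": 0},
}

def expression_targets(emotion: str) -> dict:
    """Return motor targets, falling back to neutral for unknown labels."""
    return EXPRESSIONS.get(emotion, EXPRESSIONS["neutral"])
```

In a real build these targets would be handed to the motor controllers; here they are just plain numbers, which keeps the emotion-estimation logic separate from the hardware layer.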


AI Lego robotic head mechanisms. (Creative Mindstorms)


Challenges and solutions

One of the biggest challenges was creating Dave’s hair. After struggling with Lego bricks, the creator chose a more practical solution — a wig. This creative workaround required some adjustments to the head’s shape but ultimately proved successful.


AI Lego robotic head and its creator. (Creative Mindstorms)


Kurt’s key takeaways

Dave represents a significant step forward in the world of Lego robotics and AI integration. While he’s currently a unique creation and not available for public purchase, he demonstrates the incredible possibilities that arise when creativity, engineering, and artificial intelligence converge.


What innovative application or feature would you like to see implemented in future AI-powered Lego creations? Let us know by writing us at

For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to

Ask Kurt a question or let us know what stories you’d like us to cover.




OpenAI is plagued by safety concerns




OpenAI is a leader in the race to develop AI as intelligent as a human. Yet, employees continue to show up in the press and on podcasts to voice their grave concerns about safety at the $80 billion nonprofit research lab. The latest comes from The Washington Post, where an anonymous source claimed OpenAI rushed through safety tests and celebrated its product before ensuring it was safe.

“They planned the launch after-party prior to knowing if it was safe to launch,” an anonymous employee told The Washington Post. “We basically failed at the process.”

Safety issues loom large at OpenAI — and seem to just keep coming. Current and former employees at OpenAI recently signed an open letter demanding better safety and transparency practices from the startup, not long after its safety team was dissolved following the departure of cofounder Ilya Sutskever. Jan Leike, a key OpenAI researcher, resigned shortly after, claiming in a post that “safety culture and processes have taken a backseat to shiny products” at the company.

Safety is core to OpenAI’s charter, with a clause that claims OpenAI will assist other organizations to advance safety if AGI is reached at a competitor, instead of continuing to compete. It claims to be dedicated to solving the safety problems inherent to such a large, complex system. OpenAI even keeps its proprietary models private, rather than open (causing jabs and lawsuits), for the sake of safety. The warnings make it sound as though safety has been deprioritized despite being so paramount to the culture and structure of the company.



“We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” OpenAI spokesperson Taya Christianson said in a statement to The Verge. “Rigorous debate is critical given the significance of this technology, and we will continue to engage with governments, civil society and other communities around the world in service of our mission.” 

The stakes around safety, according to OpenAI and others studying the emergent technology, are immense. “Current frontier AI development poses urgent and growing risks to national security,” a report commissioned by the US State Department in March said. “The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.”

The alarm bells at OpenAI also follow the boardroom coup last year that briefly ousted CEO Sam Altman. The board said he was removed due to a failure to be “consistently candid in his communications,” leading to an investigation that did little to reassure the staff.

OpenAI spokesperson Lindsey Held told the Post the GPT-4o launch “didn’t cut corners” on safety, but another unnamed company representative acknowledged that the safety review timeline was compressed to a single week. We “are rethinking our whole way of doing it,” the anonymous representative told the Post. “This [was] just not the best way to do it.”

In the face of rolling controversies (remember the Her incident?), OpenAI has attempted to quell fears with a few well-timed announcements. This week, it announced it is teaming up with Los Alamos National Laboratory to explore how advanced AI models, such as GPT-4o, can safely aid in bioscientific research, and in the same announcement it repeatedly pointed to Los Alamos’s own safety record. The next day, an anonymous spokesperson told Bloomberg that OpenAI created an internal scale to track the progress its large language models are making toward artificial general intelligence.

This week’s safety-focused announcements from OpenAI look like defensive window dressing in the face of growing criticism of its safety practices. It’s clear that OpenAI is in the hot seat, but public relations efforts alone won’t suffice to safeguard society. What truly matters is the potential impact on those beyond the Silicon Valley bubble if OpenAI keeps failing to develop AI with strict safety protocols, as insiders claim it is: the average person has no say in the development of privatized AGI, and no choice in how protected they’ll be from OpenAI’s creations.

“AI tools can be revolutionary,” FTC chair Lina Khan told Bloomberg in November. But “as of right now,” she said, there are concerns that “the critical inputs of these tools are controlled by a relatively small number of companies.”

If the numerous claims against its safety protocols are accurate, they raise serious questions about OpenAI’s fitness for the role of steward of AGI, a role the organization has essentially assigned to itself. Allowing one group in San Francisco to control potentially society-altering technology is cause for concern, and the demand for transparency and safety, even within OpenAI’s own ranks, is more urgent than ever.
