Religious Leaders Experiment with A.I. in Sermons

To members of his synagogue, the voice that played over the speakers of Congregation Emanu El in Houston sounded just like Rabbi Josh Fixler’s.

In the same steady rhythm his congregation had grown used to, the voice delivered a sermon about what it meant to be a neighbor in the age of artificial intelligence. Then, Rabbi Fixler took to the bimah himself.

“The audio you heard a moment ago may have sounded like my words,” he said. “But they weren’t.”

The recording was created by what Rabbi Fixler called “Rabbi Bot,” an A.I. chatbot trained on his old sermons. The chatbot, created with the help of a data scientist, wrote the sermon, even delivering it in an A.I. version of his voice. During the rest of the service, Rabbi Fixler intermittently asked Rabbi Bot questions aloud, which it would promptly answer.

Rabbi Fixler is among a growing number of religious leaders experimenting with A.I. in their work, spurring an industry of faith-based tech companies that offer A.I. tools, from assistants that can do theological research to chatbots that can help write sermons.

For centuries, new technologies have changed the ways people worship, from the radio in the 1920s to television sets in the 1950s and the internet in the 1990s. Some proponents of A.I. in religious spaces have gone back even further, comparing A.I.’s potential — and fears of it — to the invention of the printing press in the 15th century.

Some religious leaders have used A.I. to translate their livestreamed sermons into different languages in real time, blasting them out to international audiences. Others have compared chatbots trained on tens of thousands of pages of Scripture to a fleet of newly trained seminary students, able to pull excerpts about certain topics nearly instantaneously.

But the ethical questions around using generative A.I. for religious tasks have become more complicated as the technology has improved, religious leaders say. While most agree that using A.I. for tasks like research or marketing is acceptable, other uses for the technology, like sermon writing, are seen by some as a step too far.

Jay Cooper, a pastor in Austin, Texas, used OpenAI’s ChatGPT to generate an entire service for his church as an experiment in 2023. He marketed it using posters of robots, and the service drew in some curious new attendees — “gamer types,” Mr. Cooper said — who had never before been to his congregation.

The thematic prompt he gave ChatGPT to generate various parts of the service was: “How can we recognize truth in a world where A.I. blurs the truth?” ChatGPT came up with a welcome message, a sermon, a children’s program and even a four-verse song, which was the biggest hit of the bunch, Mr. Cooper said. The song went:

As algorithms spin webs of lies

We lift our gaze to the endless skies

Where Christ’s teachings illuminate our way

Dispelling falsehoods with the light of day

Mr. Cooper has not since used the technology to help write sermons, preferring to draw instead from his own experiences. But the presence of A.I. in faith-based spaces, he said, poses a larger question: Can God speak through A.I.?

“That’s a question a lot of Christians online do not like at all because it brings up some fear,” Mr. Cooper said. “It may be for good reason. But I think it’s a worthy question.”

The impact of A.I. on religion and ethics has been a recurring theme for Pope Francis, though he has not directly addressed using A.I. to help write sermons.

Our humanity “enables us to look at things with God’s eyes, to see connections, situations, events and to uncover their real meaning,” the pope said in a message early last year. “Without this kind of wisdom, life becomes bland.”

He added, “Such wisdom cannot be sought from machines.”

Phil EuBank, a pastor at Menlo Church in Menlo Park, Calif., compared A.I. to a “bionic arm” that could supercharge his work. But when it comes to sermon writing, “there’s that Uncanny Valley territory,” he said, “where it may get you really close, but really close can be really weird.”

Rabbi Fixler agreed. He recalled being taken aback when Rabbi Bot asked him to include a line about itself in the A.I. sermon, which was a one-time experiment.

“Just as the Torah instructs us to love our neighbors as ourselves,” Rabbi Bot said, “can we also extend this love and empathy to the A.I. entities we create?”

Rabbis have historically been early adopters of new technologies, notably the printed book in the 15th century. But the divinity of those books lay in the spiritual relationship their readers had with God, said Rabbi Oren Hayon, who also serves at Congregation Emanu El.

To assist his research, Rabbi Hayon regularly uses a custom chatbot trained on 20 years of his own writings. But he has never used A.I. to write portions of sermons.

“Our job is not just to put pretty sentences together,” Rabbi Hayon said. “It’s to hopefully write something that’s lyrical and moving and articulate, but also responds to the uniquely human hungers and pains and losses that we’re aware of because we are in human communities with other people.” He added, “It can’t be automated.”

Kenny Jahng, a tech entrepreneur, believes that fears about ministers’ using generative A.I. are overblown, and that leaning into the technology may even be necessary to appeal to a new generation of young, tech-savvy churchgoers at a time when church attendance across the country is in decline.

Mr. Jahng, the editor in chief of a faith- and tech-focused media company and founder of an A.I. education platform, has traveled the country in the last year to speak at conferences and promote faith-based A.I. products. He also runs a Facebook group for tech-curious church leaders with over 6,000 members.

“We are looking at data that the spiritually curious in Gen Alpha, Gen Z are much higher than boomers and Gen X-ers that have left the church since Covid,” Mr. Jahng said. “It’s this perfect storm.”

As of now, a majority of faith-based A.I. companies cater to Christians and Jews, but custom chatbots for Muslims and Buddhists exist as well.

Some churches have already started to subtly infuse their services and websites with A.I.

The chatbot on the website of the Father’s House, a church in Leesburg, Fla., for instance, appears to offer standard customer service. Among its recommended questions: “What time are your services?”

The next suggestion is more complex.

“Why are my prayers not answered?”

The chatbot was created by Pastors.ai, a start-up founded by Joe Suh, a tech entrepreneur and attendee of Mr. EuBank’s church in Silicon Valley.

After one of the longtime pastors at Mr. Suh’s church departed, Mr. Suh had the idea of uploading recordings of that pastor’s sermons to ChatGPT. He would then ask the chatbot intimate questions about his faith. He turned the concept into a business.

Mr. Suh’s chatbots are trained on archives of a church’s sermons and information from its website. But around 95 percent of the people who use the chatbots ask them questions about things like service times rather than probing deep into their spirituality, Mr. Suh said.

“I think that will eventually change, but for now, that concept might be a little bit ahead of its time,” he added.

Critics of A.I. use by religious leaders have pointed to the issue of hallucinations, the technology’s tendency to make things up. Harmless in some settings, hallucinations become a serious problem when faith-based A.I. tools fabricate religious scripture. In Rabbi Bot’s sermon, for instance, the A.I. invented a quote from the Jewish philosopher Maimonides that would have passed as authentic to the casual listener.

For other religious leaders, the issue of A.I. is a simpler one: How can sermon writers hone their craft without doing it entirely themselves?

“I worry for pastors, in some ways, that it won’t help them stretch their sermon writing muscles, which is where I think so much of our great theology and great sermons come from, years and years of preaching,” said Thomas Costello, a pastor at New Hope Hawaii Kai in Honolulu.

On a recent afternoon at his synagogue, Rabbi Hayon recalled taking a picture of his bookshelf and asking his A.I. assistant which of the books he had not quoted in his recent sermons. Before A.I., he would have pulled down the titles themselves, taking the time to read through their indexes, carefully checking them against his own work.

“I was a little sad to miss that part of the process that is so fruitful and so joyful and rich and enlightening, that gives fuel to the life of the Spirit,” Rabbi Hayon said. “Using A.I. does get you to an answer quicker, but you’ve certainly lost something along the way.”

Commentary: A leading roboticist punctures the hype about self-driving cars, AI chatbots and humanoid robots

It may have come to your attention that we are inundated with technological hype. Self-driving cars, human-like robots and AI chatbots all have been the subject of sometimes outlandishly exaggerated predictions and promises.

So we should be thankful for Rodney Brooks, an Australian-born technologist who has made it one of his missions in life to deflate the hyperbole about these and other supposedly world-changing technologies offered by promoters, marketers and true believers.

As I’ve written before, Brooks is nothing like a Luddite. Quite the contrary: He was a co-founder of iRobot, the maker of the Roomba robotic vacuum cleaner, though he stepped down as the company’s chief technology officer in 2008 and left its board in 2011. He’s a co-founder and chief technology officer of Robust.AI, which makes robots for factories and warehouses, and the former director of the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology.

In 2018, Brooks published a post of dated predictions about the course of major technologies and promised to revisit them annually for 32 years, when he would be 95. He focused on technologies that were then — and still are — the cynosures of public discussion, including self-driving cars, human space travel, AI bots and humanoid robots.

“Having ideas is easy,” he wrote in that introductory post. “Turning them into reality is hard. Turning them into being deployed at scale is even harder.”

Brooks slotted his predictions into three pigeonholes: NIML, for “not in my lifetime,” NET, for “no earlier than” some specified date, and “by some [specified] date.”

On Jan. 1 he published his eighth annual predictions scorecard. He found that over the years “my predictions held up pretty well, though overall I was a little too optimistic.”

For example, in 2018 he predicted “a robot that can provide physical assistance to the elderly over multiple tasks [e.g., getting into and out of bed, washing, using the toilet, etc.]” wouldn’t appear earlier than 2028; as of New Year’s Day, he writes, “no general purpose solution is in sight.”

The first “permanent” human colony on Mars would come no earlier than 2036, he wrote then, which he now calls “way too optimistic.” He now envisions a human landing on Mars no earlier than 2040, and the settlement no earlier than 2050.

A robot that seems “as intelligent, as attentive, and as faithful, as a dog” — no earlier than 2048, he conjectured in 2018. “This is so much harder than most people imagine it to be,” he writes now. “Many think we are already there; I say we are not at all there.” His verdict on a robot that has “any real idea about its own existence, or the existence of humans in the way that a 6-year-old understands humans” — “Not in my lifetime.”

Brooks points out that one way high-tech promoters finesse their exaggerated promises is through subtle redefinition. That has been the case with “self-driving cars,” he writes. Originally the term referred to “any sort of car that could operate without a driver on board, and without a remote driver offering control inputs … where no person needed to drive, but simply communicated to the car where it should take them.”

Waymo, the largest purveyor of self-driven transport, says on its website that its robotaxis are “the embodiment of fully autonomous technology that is always in control from pickup to destination.” Passengers “can sit in the back seat, relax, and enjoy the ride with the Waymo Driver getting them to their destination safely.”

Brooks challenges this claim. One hole in the fabric of full autonomy, he observes, became clear Dec. 20, when a power blackout blanketing San Francisco stranded much of Waymo’s robotaxi fleet on the streets. Waymos, which navigate intersections by reading traffic lights, clogged those intersections when the signals went dark.

The company later acknowledged its vehicles occasionally “require a confirmation check” from humans when they encounter blacked-out traffic signals or other confounding situations. The Dec. 20 blackout, Waymo said, “created a concentrated spike in these requests,” resulting in “a backlog that, in some cases, led to response delays contributing to congestion on already-overwhelmed streets.”

It’s also known that Waymo pays humans to physically deal with vehicles immobilized by — for example — a passenger’s failure to fully close a car door when exiting. They can be summoned via the third-party app Honk, which chiefly is used by tow truck operators to find stranded customers.

“Current generation Waymos need a lot of human help to operate as they do, from people in the remote operations center to intervene and provide human advice for when something goes wrong, to Honk gig workers scampering around the city,” Brooks observes.

Waymo told me its claim of “fully autonomous” operation is based on the fact that the onboard technology is always in control of its vehicles. In confusing situations the car will call on Waymo’s “fleet response” team of humans, asking them to choose which of several optional paths is the best one. “Control of the vehicle is always with the Waymo Driver” — that is, the onboard technology, spokesman Mark Lewis told me. “A human cannot tele-operate a Waymo vehicle.”

As a pioneering robot designer, Brooks is particularly skeptical about the tech industry’s fascination with humanoid robots. He writes from experience: In 1998 he was building humanoid robots with his graduate students at MIT. Back then he asserted that people would be naturally comfortable with “robots with humanoid form that act like humans; the interface is hardwired in our brains,” and that “humans and robots can cooperate on tasks in close quarters in ways heretofore imaginable only in science fiction.”

Since then it has become clear that general-purpose robots that look and act like humans are chimerical. In fact in many contexts they’re dangerous. Among the unsolved problems in robot design is that no one has created a robot with “human-like dexterity,” he writes. Robotics companies promoting their designs haven’t shown that their proposed products have “multi-fingered dexterity where humans can and do grasp things that are unseen, and grasp and simultaneously manipulate multiple small objects with one hand.”

Two-legged robots have a tendency to fall over and “need human intervention to get back up,” like tortoises fallen on their backs. Because they’re heavy and unstable, they are “currently unsafe for humans to be close to when they are walking.”

(Brooks doesn’t mention this, but even in the 1960s the creators of “The Jetsons” understood that domestic robots wouldn’t rely on legs — their robot maid, Rosie, tooled around their household on wheels, a perception that came as second nature to animators 60 years ago but seems to have been forgotten by today’s engineers.)

As Brooks observes, “even children aged 3 or 4 can navigate around cluttered houses without damaging them. … By age 4 they can open doors with door handles and mechanisms they have never seen before, and safely close those doors behind them. They can do this when they enter a particular house for the first time. They can wander around and up and down and find their way.

“But wait, you say, ‘I’ve seen them dance and somersault, and even bounce off walls.’ Yes, you have seen humanoid robot theater.”

Brooks’ experience with artificial intelligence gives him important insights into the shortcomings of today’s crop of large language models — that’s the technology underlying contemporary chatbots — what they can and can’t do, and why.

“The underlying mechanism for Large Language Models does not answer questions directly,” he writes. “Instead, it gives something that sounds like an answer to the question. That is very different from saying something that is accurate. What they have learned is not facts about the world but instead a probability distribution of what word is most likely to come next given the question and the words so far produced in response. Thus the results of using them, uncaged, is lots and lots of confabulations that sound like real things, whether they are or not.”
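
Brooks’ description, that the model asks only which word is likely to come next rather than whether the resulting sentence is true, can be made concrete with a toy sketch. The two-word contexts, vocabulary and probabilities below are invented purely for illustration and come from no real model or chatbot; the point is simply that the procedure samples a plausible continuation and never checks a fact along the way.

```python
import random

# A toy, hand-written table of next-word probabilities, invented purely for
# illustration. Real large language models learn distributions like this over
# enormous vocabularies from training data; none of these numbers or phrases
# reflect any actual model.
NEXT_WORD_PROBS = {
    ("the", "philosopher"): {"wrote": 0.6, "argued": 0.3, "tweeted": 0.1},
    ("philosopher", "wrote"): {"that": 0.7, "extensively": 0.2, "nothing": 0.1},
}

def sample_next_word(context, table=NEXT_WORD_PROBS):
    """Pick the next word at random, weighted by the stored probabilities.

    Nothing here consults a source of facts; the only question asked is which
    word is likely to follow the two most recent words.
    """
    dist = table.get(tuple(context[-2:]))
    if dist is None:
        return None
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights, k=1)[0]

# Extend a prompt word by word; the result reads fluently whether or not the
# finished sentence turns out to be accurate.
sentence = ["the", "philosopher"]
for _ in range(2):
    nxt = sample_next_word(sentence)
    if nxt is None:
        break
    sentence.append(nxt)
print(" ".join(sentence))
```

Scaled up to an enormous vocabulary and billions of learned parameters, the same basic procedure produces the fluent but sometimes confabulated answers Brooks describes, which is why he argues for boxing such systems into narrow, well-guarded uses.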

The solution, in Brooks’ view, is not to “train” LLM bots on more and more data in the hope that their databases eventually become large enough to make fabrication unnecessary; he considers that the wrong approach. The better option is to purpose-build LLMs to fulfill specific needs in specific fields: bots specialized for software coding, for instance, or for hardware design.

“We need guardrails around LLMs to make them useful, and that is where there will be a lot of action over the next 10 years,” he writes. “They cannot be simply released into the wild as they come straight from training. … More training doesn’t make things better necessarily. Boxing things in does.”

Brooks’ all-encompassing theme is that we tend to overestimate what new technologies can do and underestimate how long it takes for any new technology to scale up to usefulness. The hardest problems are almost always the last ones to be solved; people tend to think that new technologies will continue to develop at the speed that they did in their earliest stages.

That’s why the march to full self-driving cars has stalled. It’s one thing to equip cars with lane-change warnings or cruise control that adjusts to a slower car ahead; it’s quite another to reach Level 5 autonomy as defined by the Society of Automotive Engineers, in which the vehicle can drive itself in all conditions without a human ever required to take the wheel. That may be decades away, at the least; no Level 5 vehicles are in general use today.

Believing the claims of technology promoters that one or another nirvana is just around the corner is a mug’s game. “It always takes longer than you think,” Brooks wrote in his original prediction post. “It just does.”

Versant launches, Comcast spins off E!, CNBC and MS NOW

Comcast has officially spun off its cable channels, including CNBC and MS NOW, into a separate company, Versant Media Group.

The transaction was completed late Friday. On Monday, Versant took a major tumble in its stock market debut — providing a key test of investors’ willingness to hold on to legacy cable channels.

The initial outlook wasn’t pretty, providing awkward moments for CNBC anchors reporting the story.

Versant fell 13% to $40.57 a share on its inaugural trading day. The stock opened Monday on Nasdaq at $45.17 per share.

Comcast opted to cast off the still-profitable cable channels, except for the perennially popular Bravo, as Wall Street has soured on the business, which has been contracting amid a consumer shift to streaming.

Versant’s market performance will be closely watched as Warner Bros. Discovery attempts to separate its cable channels, including CNN, TBS and Food Network, from Warner Bros. studios and HBO later this year. Warner Chief Executive David Zaslav’s plan, which is scheduled to take place in the summer, is being contested by the Ellison family’s Paramount, which has launched a hostile bid for all of Warner Bros. Discovery.

Warner Bros. Discovery has agreed to sell itself to Netflix in an $82.7-billion deal.

The market’s distaste for cable channels has been playing out in recent years. Paramount found itself on the auction block two years ago, in part because of the weight of its struggling cable channels, including Nickelodeon, Comedy Central and MTV.

Management of the New York-based Versant, including longtime NBCUniversal sports and television executive Mark Lazarus, has been bullish on the company’s balance sheet and its prospects for growth. Versant also includes USA Network, Golf Channel, Oxygen, E!, Syfy, Fandango, Rotten Tomatoes, GolfNow, GolfPass and SportsEngine.

“As a standalone company, we enter the market with the scale, strategy and leadership to grow and evolve our business model,” Lazarus, who is Versant’s chief executive, said Monday in a statement.

Through the spin-off, Comcast shareholders received one share of Versant Class A common stock or Versant Class B common stock for every 25 shares of Comcast Class A common stock or Comcast Class B common stock, respectively. The Versant shares were distributed after the close of Comcast trading Friday.

Comcast gained about 3% on Monday, trading around $28.50.

Comcast Chairman Brian Roberts holds 33% of Versant’s controlling shares.

Ties between California and Venezuela go back more than a century with Chevron

As a stunned world processes the U.S. government’s sudden intervention in Venezuela — debating its legality, guessing who the ultimate winners and losers will be — a company founded in California with deep ties to the Golden State could be among the prime beneficiaries.

Venezuela has the largest proven oil reserves on the planet. Chevron, the international petroleum conglomerate that runs a massive refinery in El Segundo and was headquartered until recently in San Ramon, is the only foreign oil company that has continued operating there through decades of revolution.

Other major oil companies, including ConocoPhillips and Exxon Mobil, pulled out of Venezuela in 2007 when then-President Hugo Chávez required them to surrender majority ownership of their operations to the country’s state-controlled oil company, PDVSA.

But Chevron remained, playing the “long game,” according to industry analysts, hoping to someday resume reaping big profits from the investments the company started making there almost a century ago.

Looks like that bet might finally pay off.

In his news conference Saturday, after U.S. Special Forces snatched Venezuelan President Nicolás Maduro and his wife in Caracas and extradited them to face drug-trafficking charges in New York, President Trump said the U.S. would “run” Venezuela and open more of its massive oil reserves to American corporations.

“We’re going to have our very large U.S. oil companies, the biggest anywhere in the world, go in, spend billions of dollars, fix the badly broken infrastructure, the oil infrastructure, and start making money for the country,” Trump said.

Oil industry analysts temper expectations, warning that it could take years to extract significant profits given Venezuela’s long-neglected, dilapidated infrastructure, and everyday Venezuelans worry about the proceeds flowing out of the country and into the pockets of U.S. investors. But one group could be forgiven for jumping with unreserved joy: the Chevron insiders who championed the decision to remain in Venezuela all these years.

But the company’s official response to the stunning turn of events has been poker-faced.

“Chevron remains focused on the safety and well-being of our employees, as well as the integrity of our assets,” spokesman Bill Turenne emailed The Times on Sunday, the same statement the company sent to news outlets all weekend. “We continue to operate in full compliance with all relevant laws and regulations.”

Turenne did not respond to questions about the possible financial rewards for the company stemming from this weekend’s U.S. military action.

Chevron, which is a direct descendant of a small oil company founded in Southern California in the 1870s, has grown into a $300-billion global corporation. It was headquartered in San Ramon, just outside of San Francisco, until executives announced in August 2024 that they were fleeing high-cost California for Houston.

Texas’ relatively low taxes and light regulation have been a beacon for many California companies, and most of Chevron’s competitors are based there.

Chevron began exploring in Venezuela in the early 1920s, according to the company’s website, and ramped up operations after discovering the massive Boscan oil field in the 1940s. Over the decades, it grew into Venezuela’s largest foreign investor.

The company held on over the decades as Venezuela’s government moved steadily to the left, first nationalizing the oil industry by creating a state-owned petroleum company in 1976 and then, under then-President Hugo Chávez, demanding majority ownership of foreign oil assets in 2007.

Venezuela has the world’s largest proven crude oil reserves — meaning they’re economical to tap — about 303 billion barrels, according to the U.S. Energy Information Administration.

But even with those massive reserves, Venezuela has been producing less than 1% of the world’s crude oil supply. Production has steadily declined from the 3.5 million barrels per day pumped in 1999 to just over 1 million barrels per day now.

Currently, Chevron’s operations in Venezuela employ about 3,000 people and produce between 250,000 and 300,000 barrels of oil per day, according to published reports.

That’s less than 10% of the roughly 3 million barrels a day the company produces from holdings scattered across the globe, from the Gulf of Mexico to Kazakhstan and Australia.

But some analysts are optimistic that Venezuela could double or triple its current output relatively quickly — which could lead to a windfall for Chevron.

The Associated Press contributed to this report.
