Business
Your guide to the presidential candidates' views on tax policy
Though the candidates have been sparse on details, the broad outlines of what Vice President Kamala Harris and former President Trump want to do on taxes are clear — and they are very different.
Trump’s tax proposals are tilted to benefit wealthy Americans and large corporations. Under Harris, the bulk of personal gains would go to those with low and lower-middle incomes, according to the Penn Wharton Budget Model.
“Harris has a more ‘coherent’ plan because she’s essentially got [President] Biden’s budget proposals, which are fairly scored, scrubbed and all that stuff,” said Douglas Holtz-Eakin, president of the conservative-leaning American Action Forum and former director of the nonpartisan Congressional Budget Office. “We know that agenda — enhance the child tax credit, raise the corporate rate, tax high-income people.”
Trump, he said, “has got a more tax cut orientation. He’s talked about a 15% corporate rate” — down from the current 21% — “and now he’s walking around and offering a handout at every rally on what he’s not going to tax next — tips, Social Security, overtime. It looks to me he’s just trying to match her on middle-class tax cuts.”
Business
Hall of Fame won't get Freddie Freeman's grand slam ball, but Dodgers donate World Series memorabilia
The most valuable piece of memorabilia from the Dodgers’ World Series championship run is easily identified. It’s the baseball struck by Freddie Freeman that landed in the right-field pavilion in the 10th inning of Game 1, the first walk-off grand slam in fall classic history.
Auction experts estimate it would fetch more than $2 million, the value burnished by the Dodgers winning the five-game series over the New York Yankees and Freeman being named the most valuable player. The ball was scooped up by a 10-year-old diehard Dodgers fan, and he’s been floating on cloud nine ever since.
Yet many other items also have value, and there is no shortage of fans who would love nothing more than to own something authentic to forever remind them of the Dodgers’ first full-season championship since 1988.
But first, the National Baseball Hall of Fame and Museum got a haul, coming away with enough Dodgers artifacts to outfit what promises to be a cool display in Cooperstown. David Kohler, president of SCP Auctions, said the collection would be worth “$100,000-plus” at auction and “will make a great display at the Hall of Fame.”
Following the 7-6 series-clinching victory in Game 5 Wednesday night at Yankee Stadium, the Dodgers donated the following:
- Spikes worn by Freeman in Games 1 and 2.
- Glove worn by Walker Buehler, who got the save in Game 5 two days after winning Game 3.
- Cap worn by manager Dave Roberts.
- Clayton Kershaw’s champagne-soaked championship cap.
- Batting gloves worn by Mookie Betts, who hit .290 with 16 runs batted in in the postseason.
- Jersey worn by Anthony Banda, who turned in scoreless relief appearances in each of the four World Series wins.
- Cap and chest protector worn by Will Smith, who caught the final strikeout of the World Series.
- A ball used during the ninth inning of Game 5.
- Max Muncy’s bat and batting gloves from when he set a record by reaching base in 12 straight postseason games.
- Batting helmet worn by National League Championship Series Most Valuable Player Tommy Edman.
Mark Langill, the Dodgers staff historian since 1994, also will corral enough artifacts to create displays throughout the stadium. Langill works fast: The jersey Freeman wore when he hit the iconic Game 1 grand slam was already framed and hanging in a Dodger Stadium hallway during Game 2 the next day.
Players own everything in their locker, so the team or the Hall of Fame must get their permission to take clothing or gear. Langill said for the most part players and coaches are happy to donate something that will be displayed for fans to enjoy.
“There is a happy medium,” he said. “You have to respect what the players want.”
Langill is averse to displays loaded up with several baseballs, bats, caps and jerseys.
“You don’t want it to look like a sporting goods store,” he said.
Most everything from the clubhouse after a World Series win has value on the auction market. Players in all sports often sell championship memorabilia, usually after they retire.
Occasionally they even plan ahead. Hall of Famer Gaylord Perry pitched a complete game for his 300th victory in 1982, and he changed his jersey after every inning, creating nine authentic artifacts he could peddle.
These days, Major League Baseball positions employees in each dugout to immediately authenticate everything from milestone baseballs to gear worn by players. When Walker Buehler struck out Alex Verdugo to end the World Series, Smith shoved the ball into his back pocket.
An MLB authenticator tracked him down during the on-field celebration, and Smith handed him the ball. Once the hologram was affixed, the authenticator handed it back to Smith, who said, “I’m gonna give it to Walker.” The authenticator replied, “Absolutely. Congratulations!”
Business
Carol Lombardini, studio negotiator during Hollywood strikes, to step down
Carol Lombardini, who represented the major studios at the bargaining table during last year’s writers’ and actors’ strikes, is set to step down as president and chief negotiator of the Alliance of Motion Picture and Television Producers in 2025.
A spokesperson for the AMPTP confirmed Lombardini’s forthcoming exit Thursday night in an email, adding that she had long planned to retire next year. After 15 years at the helm, Lombardini, 69 according to public records, will transition into an advisory role as the organization conducts a search for her successor.
“We are incredibly grateful to Carol for her many years of leadership at the AMPTP and wish her the very best in her retirement,” the spokesperson said in a statement.
“She has been a steady and invaluable advocate at the bargaining table, strengthening relationships with our union partners every step of the way.”
Lombardini was appointed president of the AMPTP in 2009 after working for the group in a legal capacity since its inception in 1982. She recently came into the spotlight during the dual Hollywood strikes of 2023, bargaining on behalf of Disney, Warner Bros. Discovery, Netflix, Amazon and other entertainment companies.
“I think I’ve participated in more than 300 deals,” Lombardini told The Times in 2009.
“This is probably one of the most heavily unionized industries in the U.S. When you step foot on a set in Hollywood, you’re automatically dealing with 25 unions. It’s very challenging because you have to know what’s in each contract.”
Lombardini’s retirement announcement is not expected to affect ongoing contract negotiations between the AMPTP and the Animation Guild. Negotiations for that contract have historically been led by Lombardini’s deputy, Tracy Cahill.
Before becoming the first female leader of the AMPTP, Lombardini worked for decades under her predecessor and mentor, Nick Counter, who retired from his post and died in 2009.
She was a lightning rod for criticism by Hollywood workers, particularly during last year’s walkouts. A parody account portraying Lombardini as a cartoonish corporate shill went viral on X during the work stoppages of 2023.
The chief negotiator for the top studios and streamers is often regarded as the nemesis of Hollywood labor, but Lombardini had a different take upon stepping into the role more than a decade ago.
“As the chief negotiator, you are the target of negative attention from the other side,” she told The Times.
“But the irony of the situation is that, in reality, I’m labor’s closest ally because if I can’t convince my bargaining committee to do something they are asking for, they are not going to get it.”
Business
Column: These Apple researchers just showed that AI bots can't think, and possibly never will
See if you can solve this arithmetic problem:
Oliver picks 44 kiwis on Friday. Then he picks 58 kiwis on Saturday. On Sunday, he picks double the number of kiwis he did on Friday, but five of them were a bit smaller than average. How many kiwis does Oliver have?
If you answered “190,” congratulations: You did as well as the average grade school kid by getting it right. (Friday’s 44 plus Saturday’s 58 plus Sunday’s 44 multiplied by 2, or 88, equals 190.)
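As a sanity check on that arithmetic, here is a minimal Python sketch; the variable names are mine, added only for illustration:

```python
# Kiwis Oliver picks each day; Sunday is double Friday's haul.
friday = 44
saturday = 58
sunday = 2 * friday  # 88 -- that five were smaller than average doesn't change the count

total = friday + saturday + sunday
print(total)  # 190
```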
You also did better than more than 20 state-of-the-art artificial intelligence models tested by an AI research team at Apple. The AI bots, they found, consistently got it wrong.
The Apple team found “catastrophic performance drops” by those models when they tried to parse simple mathematical problems written in essay form. In this example, the systems tasked with the question often didn’t understand that the size of the kiwis has nothing to do with the number of kiwis Oliver has. Some, consequently, subtracted the five undersized kiwis from the total and answered “185.”
Human schoolchildren, the researchers posited, are much better at detecting the difference between relevant information and inconsequential curveballs.
The Apple findings were published earlier this month in a technical paper that has attracted widespread attention in AI labs and the lay press, not only because the results are well-documented, but also because the researchers work for the nation’s leading high-tech consumer company — and one that has just rolled out a suite of purported AI features for iPhone users.
“The fact that Apple did this has gotten a lot of attention, but nobody should be surprised at the results,” says Gary Marcus, a critic of how AI systems have been marketed as reliably, well, “intelligent.”
Indeed, Apple’s conclusion matches earlier studies that have found that large language models, or LLMs, don’t actually “think” so much as match language patterns in materials they’ve been fed as part of their “training.” When it comes to abstract reasoning — “a key aspect of human intelligence,” in the words of Melanie Mitchell, an expert in cognition and intelligence at the Santa Fe Institute — the models fall short.
“Even very young children are adept at learning abstract rules from just a few examples,” Mitchell and colleagues wrote last year after subjecting GPT bots to a series of analogy puzzles. Their conclusion was that “a large gap in basic abstract reasoning still remains between humans and state-of-the-art AI systems.”
That’s important because LLMs such as GPT underlie the AI products that have captured the public’s attention. But the LLMs tested by the Apple team were consistently misled by the language patterns they were trained on.
The Apple researchers set out to answer the question, “Do these models truly understand mathematical concepts?” as one of the lead authors, Mehrdad Farajtabar, put it in a thread on X. Their answer is no. They also pondered whether the shortcomings they identified can be easily fixed, and their answer is also no: “Can scaling data, models, or compute fundamentally solve this?” Farajtabar asked in his thread. “We don’t think so!”
The Apple research, along with other findings about AI bots’ cogitative limitations, is a much-needed corrective to the sales pitches coming from companies hawking their AI models and systems, including OpenAI and Google’s DeepMind lab.
The promoters generally depict their products as dependable and their output as trustworthy. In fact, their output is consistently suspect, posing a clear danger when they’re used in contexts where the need for rigorous accuracy is absolute, say in healthcare applications.
That’s not always the case. “There are some problems which you can make a bunch of money on without having a perfect solution,” Marcus told me. Recommendation engines powered by AI, such as those that steer buyers on Amazon to products they might also like, fit that description. If those systems get a recommendation wrong, it’s no big deal; a customer might spend a few dollars on a book he or she didn’t like.
“But a calculator that’s right only 85% of the time is garbage,” Marcus says. “You wouldn’t use it.”
The potential for damagingly inaccurate outputs is heightened by AI bots’ natural language capabilities, with which they offer even absurdly inaccurate answers with convincingly cocksure elan. Often they double down on their errors when challenged.
These errors are typically described by AI researchers as “hallucinations.” The term may make the mistakes seem almost innocuous, but in some applications, even a minuscule error rate can have severe ramifications.
That’s what academic researchers concluded in a recently published analysis of Whisper, an AI-powered speech-to-text tool developed by OpenAI, which can be used to transcribe medical discussions or jailhouse conversations monitored by correction officials.
The researchers found that about 1.4% of Whisper-transcribed audio segments in their sample contained hallucinations, including wholly fabricated statements added to the transcribed conversation, among them portrayals of “physical violence or death … [or] sexual innuendo,” as well as demographic stereotyping.
That may sound like a minor flaw, but the researchers observed that the errors could be incorporated in official records such as transcriptions of court testimony or prison phone calls — which could lead to official decisions based on “phrases or claims that a defendant never said.”
Updates to Whisper in late 2023 improved its performance, the researchers said, but the updated Whisper “still regularly and reproducibly hallucinated.”
That hasn’t deterred AI promoters from unwarranted boasting about their products. In an Oct. 29 tweet, Elon Musk invited followers to submit “x-ray, PET, MRI or other medical images to Grok [the AI application for his X social media platform] for analysis.” Grok, he wrote, “is already quite accurate and will become extremely good.”
It should go without saying that, even if Musk is telling the truth (not an absolutely certain conclusion), any system used by healthcare providers to analyze medical images needs to be a lot better than “extremely good,” however one might define that standard.
That brings us to the Apple study. It’s proper to note that the researchers aren’t critics of AI as such but believers that its limitations need to be understood. Farajtabar was formerly a senior research scientist at DeepMind, where another author interned under him; other co-authors hold advanced degrees and professional experience in computer science and machine learning.
The team plied their subject AI models with questions drawn from a popular collection of more than 8,000 grade school arithmetic problems testing schoolchildren’s understanding of addition, subtraction, multiplication and division. When the problems incorporated clauses that might seem relevant but weren’t, the models’ performance plummeted.
That was true of all the models, including versions of the GPT bots developed by OpenAI, Meta’s Llama, Microsoft’s Phi-3, Google’s Gemma and several models developed by the French lab Mistral AI.
Some did better than others, but all showed a decline in performance as the problems became more complex. One problem involved a basket of school supplies including erasers, notebooks and writing paper. That requires a solver to multiply the number of each item by its price and add them together to determine how much the entire basket costs.
When the bots were also told that “due to inflation, prices were 10% cheaper last year,” the bots reduced the cost by 10%. That produces a wrong answer, since the question asked what the basket would cost now, not last year.
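To make the trap concrete, here is a minimal Python sketch with hypothetical quantities and prices (the column doesn’t give the study’s actual numbers). The correct answer ignores the “cheaper last year” clause; the bots’ mistake is applying the 10% discount to today’s total:

```python
# Hypothetical basket of school supplies; the quantities and prices
# are illustrative, not taken from the Apple study's test problems.
basket = {
    "erasers":       {"count": 3, "price": 0.50},
    "notebooks":     {"count": 2, "price": 2.25},
    "writing paper": {"count": 5, "price": 1.00},
}

# Correct reading: the question asks what the basket costs now,
# so last year's prices are irrelevant.
correct_total = sum(item["count"] * item["price"] for item in basket.values())

# The bots' mistaken reading: discounting today's total by 10%.
mistaken_total = correct_total * 0.90

print(correct_total)   # 11.0
print(mistaken_total)  # 9.9 -- answers a question that wasn't asked
```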
Why did this happen? The answer is that LLMs are developed, or trained, by feeding them huge quantities of written material scraped from published works or the internet — not by trying to teach them mathematical principles. LLMs function by gleaning patterns in the data and trying to match a pattern to the question at hand.
But they become “overfitted to their training data,” Farajtabar explained via X. “They memorized what is out there on the web and do pattern matching and answer according to the examples they have seen. It’s still a [weak] type of reasoning but according to other definitions it’s not a genuine reasoning capability.” (The brackets are his.)
That’s likely to impose boundaries on what AI can be used for. In mission-critical applications, humans will almost always have to be “in the loop,” as AI developers say — vetting answers for obvious or dangerous inaccuracies or providing guidance to keep the bots from misinterpreting their data, misstating what they know, or filling gaps in their knowledge with fabrications.
To some extent, that’s comforting, for it means that AI systems can’t accomplish much without having human partners at hand. But it also means that we humans need to be aware of the tendency of AI promoters to overstate their products’ capabilities and conceal their limitations. The issue is not so much what AI can do, but what users can be gulled into thinking it can do.
“These systems are always going to make mistakes because hallucinations are inherent,” Marcus says. “The ways in which they approach reasoning are an approximation and not the real thing. And none of this is going away until we have some new technology.”