Opinion: Happy birthday, Amazon? Why one longtime user isn't celebrating the tech behemoth's 30th

I had just started my master’s degree in artificial intelligence when a classmate asked if I’d heard of Amazon, a new online bookstore where you could order basically any book in the world and have it shipped to your front door. Feeling all the excitement of a middle school book fair flooding back, I entered the world of Amazon.com and ordered a beautiful book. It felt revolutionary and futuristic but still cozy and personal. At the end of that year, 1995, Amazon sent loyal customers, including me, a free coffee mug for the holidays.

It would have been hard to imagine then that the small business famously run out of Jeff Bezos’ Bellevue, Wash., garage would be celebrating its 30th anniversary and a mind-bending $1.97-trillion market value today. I continue to use Amazon to order gadgets and basic necessities, watch movies and shows and read books on a Kindle. I do all of this even though I know the once-beloved bookseller has become a data-hungry behemoth that is laying waste to personal privacy.

Today, Amazon sells basically everything and knows basically everything, from our favorite toilet paper to our kids’ questions for Alexa to what’s going on in our neighborhoods — and has let police in on that, too! Amazon knows where we live, what our voices sound like, who our contacts are, what our credit histories look like, at what temperature we like to keep our homes and even whether we have allergies or other health issues.

Based on this information, the company infers a whole profile: It potentially knows whether we’re gay or straight, married or divorced, Republican or Democratic, sexually active or not, religious or secular. It knows how educated we are and how much money we make. And it uses this data to sell to us more effectively.

As a privacy researcher, I advocate for strong consumer privacy protections. After spending the better part of a decade going through privacy policies with a fine-tooth comb, I can safely say that Amazon has been worse for privacy than nearly any other company. It’s not just that Amazon has awful privacy policies; it’s also that, along with Facebook and Google, it co-authored our terrible targeted-ad economy, built on siphoning as much data as possible from users so that anyone with access to it can manipulate you into buying more stuff.

Considering the importance of freedom to America’s origin story, it’s ironic that the country is so beholden to a company that has manipulation of our free will down to a science.

“Did you just buy these Italian coffee beans?” Amazon asks us. “Here’s what you should buy next.”

Privacy and free will are inextricably intertwined: Both rest on being left to decide who we are, what we want and when we want it without anyone watching or interfering. Privacy is good for our mental health and good for society. Neither corporations nor governments — which have a way of acquiring the data the companies collect — should have access to unlimited knowledge about who we are and what we do all the time.

Amazon has played a pivotal role in making that possible. Its war on privacy took a particularly dystopian turn recently in Britain, where some train stations were using an Amazon artificial intelligence system called Rekognition to scan passengers’ faces and determine their age, gender and emotional state, whether happy, sad or angry; identify supposedly antisocial behavior such as running, shouting, skateboarding and smoking; and guess if they were suicidal. It’s like Orwell’s thought police came to life, but instead of Big Brother, it’s Big Bezos.

The worst part is that we just went right along with this intrusion in exchange for cheap stuff and free two-day shipping.

Unfortunately, Amazon has become almost a basic necessity. But we can take steps to rein in its worst consequences.

Consumers shouldn’t bear the burden of making Amazon better; policymakers and regulators should. A good place for them to start is with the American Privacy Rights Act, legislation currently before Congress. It isn’t perfect, but it would at least address our glaring lack of a federal privacy law. State privacy laws form a patchwork that varies widely in how well it protects consumers.

We need to start thinking of data privacy as a human right. The idea that companies have a right to all the data they can collect on and infer about us is absolutely bonkers. Thirty years ago, no one would have agreed with it.

This isn’t how the world should work, and it’s particularly terrifying that this is where we are as we enter the age of artificial intelligence. Generative AI programs, like the chatbots we hear about constantly, are designed to root out as much personal information as they can, supposedly to make them more effective. And Amazon is upgrading its Alexa assistant to incorporate generative AI technology.

Nothing I can impulse-buy on Amazon will help me feel better about a future with no privacy, mass surveillance and pervasive monitoring of our feelings and tendencies. What started as a beautiful book and a free mug has yielded a world where everything I buy, everywhere I go and, perhaps in the not-so-distant future, every emotion I feel can be tracked and turned into inferences to sell me more stuff or push dangerous ideologies or advance any other purpose that corporations or governments deem useful. If it sounds dystopian, that’s because it is.

Jen Caltrider is the director of Mozilla’s *Privacy Not Included project.

California-based company recalls thousands of cases of salad dressing over ‘foreign objects’


A California food manufacturer is recalling thousands of cases of salad dressing distributed to major retailers over potential contamination from “foreign objects.”

The company, Irvine-based Ventura Foods, recalled 3,556 cases of the dressing that could be contaminated by “black plastic planting material” in the granulated onion used, according to an alert issued by the U.S. Food and Drug Administration.

Ventura Foods voluntarily initiated the recall of the product, which was sold at Costco, Publix and several other retailers across 27 states, according to the FDA.

None of the 42 locations where the product was sold were in California.

Ventura Foods said it issued the recall after one of its ingredient suppliers recalled a batch of onion granules that the company had used in some of its dressings.

“Upon receiving notice of the supplier’s recall, we acted with urgency to remove all potentially impacted product from the marketplace. This includes urging our customers, their distributors and retailers to review their inventory, segregate and stop the further sale and distribution of any products subject to the recall,” said company spokesperson Eniko Bolivar-Murphy in an emailed statement. “The safety of our products is and will always be our top priority.”

The FDA issued its initial recall alert in early November. Costco also alerted customers at that time, noting that customers could return the products to stores for a full refund. The affected products had sell-by dates between Oct. 17 and Nov. 9.

The company recalled the following types of salad dressing:

  • Creamy Poblano Avocado Ranch Dressing and Dip
  • Ventura Caesar Dressing
  • Pepper Mill Regal Caesar Dressing
  • Pepper Mill Creamy Caesar Dressing
  • Caesar Dressing served at Costco Service Deli
  • Caesar Dressing served at Costco Food Court
  • Hidden Valley Buttermilk Ranch
They graduated from Stanford. Due to AI, they can’t find a job


A Stanford software engineering degree used to be a golden ticket. Artificial intelligence has devalued it to bronze, recent graduates say.

The elite students are shocked by the lack of job offers as they finish their studies at what is often ranked as the top university in America.

When they were freshmen, ChatGPT hadn’t yet been released upon the world. Today, AI can code better than most humans.

Top tech companies just don’t need as many fresh graduates.

“Stanford computer science graduates are struggling to find entry-level jobs” with the most prominent tech brands, said Jan Liphardt, associate professor of bioengineering at Stanford University. “I think that’s crazy.”

While the rapidly advancing coding capabilities of generative AI have made experienced engineers more productive, they have also hobbled the job prospects of early-career software engineers.

Stanford students describe a suddenly skewed job market, where just a small slice of graduates — those considered “cracked engineers” who already have thick resumes building products and doing research — are getting the few good jobs, leaving everyone else to fight for scraps.

“There’s definitely a very dreary mood on campus,” said a recent computer science graduate who asked not to be named so they could speak freely. “People [who are] job hunting are very stressed out, and it’s very hard for them to actually secure jobs.”

The shake-up is being felt across California colleges, including UC Berkeley, USC and others. The job search has been even tougher for those with less prestigious degrees.

Eylul Akgul graduated last year with a degree in computer science from Loyola Marymount University. She wasn’t getting offers, so she went home to Turkey and got some experience at a startup. In May, she returned to the U.S., and still, she was “ghosted” by hundreds of employers.

“The industry for programmers is getting very oversaturated,” Akgul said.

The engineers’ most significant competitor is getting stronger by the day. When ChatGPT launched in 2022, it could only code for 30 seconds at a time. Today’s AI agents can code for hours, and do basic programming faster with fewer mistakes.

Data suggests that even though AI startups like OpenAI and Anthropic are hiring many people, that hiring is not offsetting the decline elsewhere. Employment for early-career software developers between the ages of 22 and 25 has declined by nearly 20% from its peak in late 2022, according to a Stanford study.

It isn’t just software engineers: customer service and accounting jobs are also highly exposed to competition from AI. The Stanford study estimated that entry-level hiring for AI-exposed jobs declined 13% relative to less-exposed jobs such as nursing.

In the Los Angeles region, another study estimated that close to 200,000 jobs are exposed to automation by AI. Around 40% of tasks done by call center workers, editors and personal finance experts could be handled by AI, according to an AI Exposure Index curated by resume builder MyPerfectResume.

Many tech startups and titans have not been shy about broadcasting that they are cutting back on hiring plans as AI allows them to do more programming with fewer people.

Anthropic Chief Executive Dario Amodei said that 70% to 90% of the code for some products at his company is written by its AI, Claude. In May, he predicted that AI could wipe out close to 50% of all entry-level white-collar jobs within five years.

A common sentiment from hiring managers is that where they previously needed ten engineers, they now only need “two skilled engineers and one of these LLM-based agents,” which can be just as productive, said Nenad Medvidović, a computer science professor at the University of Southern California.

“We don’t need the junior developers anymore,” said Amr Awadallah, CEO of Vectara, a Palo Alto-based AI startup. “The AI now can code better than the average junior developer that comes out of the best schools out there.”

To be sure, AI is still a long way from causing the extinction of software engineers. As AI handles structured, repetitive tasks, human engineers’ jobs are shifting toward oversight.

Today’s AIs are powerful but “jagged,” meaning they can excel at certain math problems yet still fail basic logic tests and aren’t consistent. One study found that AI tools made experienced developers 19% slower at work, as they spent more time reviewing code and fixing errors.

Students should focus on learning how to manage and check the work of AI as well as getting experience working with it, said John David N. Dionisio, a computer science professor at LMU.

Stanford students say they are arriving at the job market and finding a fork in the road: capable AI engineers can find jobs, but basic, old-school computer science jobs are disappearing.

As they hit this surprise speed bump, some students are lowering their standards and joining companies they wouldn’t have considered before. Some are creating their own startups. A large group of frustrated grads are deciding to continue their studies to beef up their resumes and add more skills needed to compete with AI.

“If you look at the enrollment numbers in the past two years, they’ve skyrocketed for people wanting to do a fifth-year master’s,” the Stanford graduate said. “It’s a whole other year, a whole other cycle to do recruiting. I would say, half of my friends are still on campus doing their fifth-year master’s.”

After four months of searching, LMU graduate Akgul finally landed a technical lead job at a software consultancy in Los Angeles. At her new job, she uses AI coding tools, but she feels like she has to do the work of three developers.

Universities and students will have to rethink their curricula and majors to ensure that their four years of study prepare them for a world with AI.

“That’s been a dramatic reversal from three years ago, when all of my undergraduate mentees found great jobs at the companies around us,” Stanford’s Liphardt said. “That has changed.”

Disney+ to be part of a streaming bundle in Middle East


Walt Disney Co. is expanding its presence in the Middle East, inking a deal with Saudi media conglomerate MBC Group and UAE firm Anghami to form a streaming bundle.

The bundle will allow customers in Bahrain, Kuwait, Oman, Qatar, Saudi Arabia and the UAE to access a trio of streaming services — Disney+; MBC Group’s Shahid, which carries Arabic originals, live sports and events; and Anghami’s OSN+, which carries Arabic productions as well as Hollywood content.

The three-service bundle costs AED 89.99 per month, the price of just two of the services on their own.

“This deal reflects a shared ambition between Disney+, Shahid and the MBC Group to shape the future of entertainment in the Middle East, a region that is seeing dynamic growth in the sector,” Karl Holmes, senior vice president and general manager of Disney+ EMEA, said in a statement.

Disney has already indicated it plans to grow in the Middle East.

Earlier this year, the company announced it would build a new theme park in Abu Dhabi in partnership with local firm Miral, which would provide the capital, construction resources and operational oversight. Under the terms of the agreement, Disney would oversee the park’s design, license its intellectual property and provide “operational expertise,” as well as collect a royalty.

Disney executives said at the time that the decision to build in the Middle East was a way to reach new audiences who were too far from the company’s current hubs in the U.S., Europe and Asia.
