Technology

What can a 100-pixel video teach us about storytelling around the world?

Since its founding in 2007, the Mumbai-based collaborative studio CAMP has used surveillance, TV networks, and digital archives to examine how we move through and record the world. In addition to their film and video projects, the wildly prolific studio runs a rooftop cinema in Mumbai and maintains several online video archives, including the largest digital archive of Indian film.

CAMP’s first major US museum exhibition is on view now at the Museum of Modern Art in New York through July 20th and includes three video projects spanning two decades of work. The exhibit’s three films repurpose private television sets into interactive neighborhood portraits, collect cellphone footage recorded by sailors navigating the Indian Ocean, and reimagine how a CCTV camera could be used for exploration rather than control. In one film, CAMP collected cellphone videos that sailors shared at ports via Bluetooth; in another, passersby at street level control a surveillance camera 35 stories above.

I chatted with two of CAMP’s founders, Shaina Anand and Ashok Sukumaran, about the importance of maintaining an open digital archive, the slippery definition of piracy, and how footage that never makes it into a finished film is often the most illuminating.

This interview has been edited for length and clarity.

Shaina Anand and Ashok Sukumaran at the opening for the exhibit Video After Video: The Critical Media of CAMP, at The Museum of Modern Art in New York on February 20th, 2025.
Photo by Amelia Holowaty Krales / The Verge

Your film, From Gulf to Gulf to Gulf, offers a portrait of sailors navigating the Indian Ocean, using cellphone videos to document their journeys and daily lives. Can you talk about how that project came to be and how this partnership with the sailors began?

Ashok Sukumaran: Around the global financial crisis, in 2009, we were walking around the city of Sharjah in the UAE. Sharjah is a creek city, like Dubai. Before oil was discovered, the creeks were the center of city life. These boats were weird, out-of-time wooden ships, and many of them were going to Somali ports. So we asked them, “How come there are no issues with pirates?” Because everything we were hearing about Somalia at that time was about piracy. They said, “No, no, there’s a difference between going to a Somali town carrying everything they need and driving past it with a ton of oil.”

Shaina Anand: Almost all of these giant wooden boats were built in twin towns in the Gulf of Kutch, in Gujarat, and they were massive: 800-to-2,000-ton wooden craft.

AS: There’s a kind of language of the port. The Iranians, the UAE folks, the Somalis, and of course Indians and Pakistanis speak a kind of common language, which is close to a Hindustani mix of Farsi and Urdu. So we were able to talk to everyone, to some extent, and we discovered a kind of music video genre that was really inspiring. This was the 2000s, with early Nokia phones, and sailors would shoot video and add music to it. Then their memory cards would run out [and the videos would get deleted]. Some of the videos were 100 by 200 pixels.

SA: It was really important to us to try to trace the genealogy of the cellphone video, and it obviously was changing so fast. [The videos were] 10 frames a second, or 13 frames a second, in odd, square formats. It was rapidly changing.

For us, what was striking was that this image emerged in the middle of nowhere, out at sea, when a brethren boat or a comrade boat was filming on a phone. When our film had its festival run at the National Theatre in London, one of the film programmers came and told me, “It gives us such joy to see those images on the best screen in London.” And it gave us the same joy, too. That there is an equality, then.

Many people misread this “low-res image” and [call it] “a poor image,” and we’re like, that is not what it is at all.

How were the videos originally transferred and shared among sailors?

SA: It was a very physical process because these were not found on the internet. We were physically sitting down with people and saying, “What’s on your phone? Can I have a look at it? What did you film?” These [videos] were exchanged over Bluetooth, so they were not uploaded to YouTube, but they were literally transferred by putting the phones together.

AS: [When the boats] anchor for a bit at these smaller islands along the Gulf of Aden or Gulf of Persia, they’re still always in pairs or threes. They travel together for safety. That’s also the time for leisure and piping in those songs.

From Gulf to Gulf to Gulf presented in the first room of the Video After Video: The Critical Media of CAMP exhibition.
Photo by Amelia Holowaty Krales / The Verge

There’s something sweet about this moment of being bored at sea and using that space to create something.

SA: In a lot of our work, you see this idea that the subject of the film is usually behind the camera. They’re usually running the thing, and they are looking out at whatever interests them. At sea, you have a lot of time, even though it’s busy when it’s loading and unloading. But at sea, a lot of people are basically hanging out and taking pictures of the things that they can see. Then the music adds the emotional tenor. All the music in the film was found with the video; we didn’t add any music ourselves.

AS: And then if your phone has 2GB memory, that’s the ephemera bit. The video gets deleted, but it’s found on another boat on someone else’s phone.

SA: And within these communities, the videos are quite traceable because the boats are known. There are a thousand boats, but people would instantly recognize, “That’s so and so.” Even by looking at the shape of the boat in a 100-pixel video, they would know which boat it was.

You talked a little bit about how these videos were really ephemeral; they got erased very quickly. So much of your work seems to be about a commitment to maintaining an archive.

AS: We set up CAMP in 2007 with our collaborators, who were lawyers and coders and cinephiles, and all of us together, good friends. We set up Pad.ma, our first online archive, and the lawyers were working on copyright law, trying to challenge it legally and pushing fair use. We didn’t want to valorize piracy, but we realized how, for countries in Asia, piracy was vital.

You didn’t even think of [buying software from] Microsoft. You bought the parts of a computer with help from the person selling them, saying, “Okay, so much RAM, this motherboard,” and so on, and then loaded what you wanted.

From left: Shaina Anand, Ashok Sukumaran, Rohan Chavan, and Jan Gerber.
Photo by Amelia Holowaty Krales / The Verge

SA: The whole Indian tech sector was built on piracy, or what’s called piracy. People were not able to pay the fees. With Pad.ma, we basically initiated this idea of a footage archive or a collection of material that was not films, but things that were shot by people during film projects that never made it into the cut. For political reasons, for economic reasons, for the reasons that the films were only 30 or 60 minutes long and they had filmed for years, all those kinds of things. The idea was that Pad.ma was a footage archive that allowed you to deeply access that material.

So it’s an archive of scraps — the things around the edges that maybe weren’t shown elsewhere.

SA: Yeah, but here, the scraps are 20 times the size of the finished thing.

AS: I think that’s the important thing. You had 100 hours of footage for a 60-minute film. That was really the reason for building a non-state archive, and we’re the custodians and collaborators who think the 99 hours may be more important. It’s not those old remnant scraps.

It’s the other way around.

AS: It’s the other way around. I mean, you have a one-hour interview, and two minutes might make it into a film.

SA: You had all these examples of European avant-garde filmmakers coming to India making films and then doing these edits of what they thought they were seeing. But the footage is saying much more than their particular edit at the time. It can be very revealing of what was actually going on and how they filmed.

So the archives contain a huge amount of data.

SA: I mean, we have committed to that. We raised money from various sources for the projects. Indiancine.ma, which is a sister project, is like the whole of Indian cinema as a metadata archive.

AS: There were magical things in 2008 on the platform. One was that the timeline had cut detection. So you can actually go to a cut just by using your left and right arrow keys. And you don’t have that even in [Adobe] Premiere. You could also densely annotate. So you have researchers working, you have activists, you have film scholars, and they may take from the archive. But in that process, they’ve given back their expertise or their views of the archive.
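
The interview doesn’t describe how Pad.ma’s cut detection works under the hood, but the basic idea behind shot-cut detection is simple: compare each frame to the previous one and flag large jumps as cuts. A minimal sketch in Python with NumPy, where the frame representation (grayscale arrays) and the difference threshold are assumptions made for illustration, not the platform’s actual implementation:

```python
import numpy as np

def detect_cuts(frames, threshold=30.0):
    """Return the indices of frames that start a new shot, i.e. where the
    mean absolute pixel difference from the previous frame exceeds the
    threshold. `frames` is a sequence of grayscale uint8 arrays."""
    cuts = []
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float) - frames[i - 1].astype(float)).mean()
        if diff > threshold:
            cuts.append(i)
    return cuts

# Synthetic example: a dark "shot" of three frames followed by a bright one.
shot_a = [np.full((10, 10), 10, dtype=np.uint8)] * 3
shot_b = [np.full((10, 10), 200, dtype=np.uint8)] * 3
print(detect_cuts(shot_a + shot_b))  # the cut falls at frame index 3
```

Once cut indices are stored alongside the video, jumping between cuts with the arrow keys is just a lookup into that list.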

Can you talk more about your work with participatory filmmaking?

AS: On one level, what had been occupying my head space was this critique of how documentary images are taken, or why this relationship between subject, author, and technology is so dumb.

I would keep saying, “Look at the image.” We can say a white guy filmed it, or we can know this really important Indian filmmaker filmed it, or you can say a top feminist filmmaker filmed it, or a queer person filmed it, or a person from that community. But something’s a bit off in that form as well. Not just [in terms of] who’s speaking for whom and all of that.

Another of your projects in the exhibit, Khirkeeyaan, which created video portals between neighbors and community centers using CCTV, seems like a place where the subject has a lot of authority over their image.

AS: Between 2005 and 2006, CCTV cameras started to proliferate all over. And they were cheap. So, the electronic market where we’d go to buy computer stuff now had become a CCTV market.

It was $10 for those static cameras. You could get that quad box, like a four-channel mixer. They were everywhere really fast: the grocery store, the dive bar, the beauty salon, the abortion clinic. Wherever I went, I was seeing these tiny things.

Photo by Amelia Holowaty Krales / The Verge

SA: When you put the camera on top of the TV and you allow the two systems to meet, you can just look into the television, and then that’s part of the cable television network. By default, these systems are kind of oppositional. One is a broadcast system, or one is a sucking and one is a closed thing, and if you join them together, they start to talk to each other or—

Download and upload simultaneously.

AS: Exactly, which was the key property of video. That there was feedback. It was immediate.

SA: It was live, and unlike film, you don’t have to process it. They were ambient. They would go on for 24 hours. You were able to say that your household TV is now a portal.

AS: The key thing was that this wasn’t the internet. The cables were all 100 meters each. For a long time, until it got replaced by dish antennas, coaxial cable just used to snake across our cities. The cable would come to your house from the window sill, where the coax would be wrapped around, and there’d be a little booster. It would go from neighborhood to neighborhood, building to building, terrace to terrace. [With Khirkeeyaan], the network was neighborly, but these neighbors were meeting each other for the first time.

Was there anything that surprised you about the way this network was used?

AS: What always surprises me, and continues to, is that when you set up your own kind of collaboration with the subjects, and then you exit, you’re not asking those leading questions of, “Tell me about your life,” or “Which village do you come from?” And poetry happens. I think, what was very affirmative for me, was just the confidence with which people sat and looked at their TV sets. You sit and look at your TV set all the time, but the TV set now had a hole in it, and it was looking back at you.

Shaina Anand stands in front of the projection of Bombay Tilts Down displayed in the final room of the exhibit, Video After Video: The Critical Media of CAMP.
Photo by Amelia Holowaty Krales / The Verge

Another of your videos in the show, Bombay Tilts Down, uses a CCTV camera. Can you talk more about your work utilizing surveillance?

SA: CCTV, in a way, changes how we behave. It sort of infects, depending on who is watching us and how.

In Bombay Tilts Down, it was the simple idea that this gaze of the camera is already there. In the city, there are 5,000 of exactly the same kind of camera, and probably many more.

They’re all at least 4K, and now they’re 8K, but they are robotic controllable cameras that are designed to do facial recognition at a distance. Instead of being a guard, waiting for something to happen, we used it to film the city. And the range is incredible; it goes way beyond the property line of the thing it’s trying to protect. You can see 15 kilometers away with it, from the 35th floor.

So you installed the camera yourself.

SA: This one, yes. The people you see in Bombay Tilts Down are looking up at the camera because people could see the stream downstairs, and some of them were moving the camera around, calling the shots.

Technology

Here’s your first look at Kratos in Amazon’s God of War show

Amazon has slowly been teasing out casting details for its live-action adaptation of God of War, and now we have our first look at the show. It’s a single image, but a notable one, showing protagonist Kratos and his son Atreus. The characters are played by Ryan Hurst and Callum Vinson, respectively, and they look relatively close to their video game counterparts.

There aren’t a lot of other details about the show just yet, but this is Amazon’s official description:

The God of War series storyline follows father and son Kratos and Atreus as they embark on a journey to spread the ashes of their wife and mother, Faye. Through their adventures, Kratos tries to teach his son to be a better god, while Atreus tries to teach his father how to be a better human.

That sounds a lot like the recent soft reboot of the franchise, which started with 2018’s God of War and continued through Ragnarök in 2022. For the Amazon series, Ronald D. Moore, best known for his work on For All Mankind and Battlestar Galactica, will serve as showrunner. The rest of the cast includes Mandy Patinkin (Odin), Ed Skrein (Baldur), Max Parker (Heimdall), Ólafur Darri Ólafsson (Thor), Teresa Palmer (Sif), Alastair Duncan (Mimir), Jeff Gulka (Sindri), and Danny Woodburn (Brok).

While production is underway on the God of War series, there’s no word on when it might start streaming.


Technology

300,000 Chrome users hit by fake AI extensions

Your web browser may feel like a safe place, especially when you install helpful tools that promise to make your life easier. But security researchers have uncovered a dangerous campaign in which more than 300,000 people installed Chrome extensions pretending to be artificial intelligence (AI) assistants. Instead of helping, these fake tools secretly collect sensitive information like your emails, passwords and browsing activity.

They used familiar names like ChatGPT, Gemini and AI Assistant. If you use Chrome and have installed any AI-related extension, your personal information may already be exposed. Even worse, some of these malicious extensions are still available today, putting more people at risk without their knowledge.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

More than 300,000 Chrome users installed fake AI extensions that secretly harvested sensitive data. (Kurt “CyberGuy” Knutsson)

What you need to know about fake AI extensions

Security researchers at browser security company LayerX discovered a large campaign involving 30 malicious Chrome extensions disguised as AI-powered assistants (via BleepingComputer). Together, these extensions were installed more than 300,000 times by unsuspecting users.

Some of the most popular extensions included names like AI Sidebar with 70,000 users, AI Assistant with 60,000 users, ChatGPT Translate with 30,000 users, and Google Gemini with 10,000 users. Another extension called Gemini AI Sidebar had 80,000 users before it was removed.

These extensions were distributed through the official Chrome Web Store, which made them appear legitimate and trustworthy. Even more concerning, researchers found that many of these extensions were connected to the same malicious server, showing they were part of a coordinated effort.

While some extensions have since been removed, others remain available. This means new users could still unknowingly install them and expose their personal data. Here’s the list of the affected extensions:

  • AI Assistant
  • Llama
  • Gemini AI Sidebar
  • AI Sidebar
  • ChatGPT Sidebar
  • Grok
  • Asking ChatGPT
  • ChatGBT
  • Chat Bot GPT
  • Grok Chatbot
  • Chat With Gemini
  • XAI
  • Google Gemini
  • Ask Gemini
  • AI Letter Generator
  • AI Message Generator
  • AI Translator
  • AI For Translation
  • AI Cover Letter Generator
  • AI Image Generator ChatGPT
  • Ai Wallpaper Generator
  • Ai Picture Generator
  • DeepSeek Download
  • AI Email Writer
  • Email Generator AI
  • DeepSeek Chat
  • ChatGPT Picture Generator
  • ChatGPT Translate
  • AI GPT
  • ChatGPT Translation
  • ChatGPT for Gmail

These malicious tools were listed in the official Chrome Web Store, making them appear legitimate and trustworthy. (LayerX)

How the fake AI Chrome extension attack works

These fake extensions pretend to offer helpful AI features, such as translating text, summarizing emails, or acting as an AI assistant. But behind the scenes, they quietly monitor what you are doing online.

Once installed, the extension gains permission to view and interact with the websites you visit. This allows it to read the contents of web pages, including login screens where you enter your username and password.

In some cases, the extensions specifically targeted Gmail. They could read your email messages directly from your browser, including emails you received and even drafts you were still writing. This means attackers could access private conversations, financial information and sensitive personal details.

The extensions then sent this information to servers controlled by the attackers. Because they loaded content remotely, the attackers could change their behavior at any time without needing to update the extension.

Some versions could also activate voice features through your browser. This could potentially capture spoken conversations near your device and send transcripts back to the attackers.

If you installed one of these extensions, attackers may already have access to extremely sensitive information. This includes your email content, login credentials, browsing habits and possibly even voice recordings.

We reached out to Google for comment, and a spokesperson told CyberGuy that the company “can confirm that the extensions from this report have all been removed from the Google Web Store.”

Once installed, the extensions could read emails, capture passwords, monitor browsing activity and send the data to attacker-controlled servers. (Bildquelle/ullstein bild via Getty Images)

7 ways you can protect yourself from malicious Chrome extensions

If you have ever installed an AI-related Chrome extension, taking a few simple precautions now can help protect your accounts and prevent further damage.

1) Remove any suspicious or unused browser extensions

On a Windows PC or Mac, open Chrome and type chrome://extensions into the address bar. Review every extension listed. If you see anything unfamiliar, especially AI assistants you don’t remember installing, click “Remove” immediately. Malicious extensions depend on going unnoticed. Removing them stops further data collection and cuts off the attacker’s access to your information.
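
This manual review can also be supplemented from outside the browser. A hedged sketch in Python that walks a Chrome profile’s Extensions directory and flags any installed extension whose manifest requests broad host access (the kind of grant that lets an extension read or script every page you visit). The profile paths below are common defaults and may differ on your machine; this is an illustration, not a substitute for removing suspicious extensions via chrome://extensions:

```python
import json
import os
from pathlib import Path

# Host patterns that grant an extension access to every site you visit.
BROAD_PERMISSIONS = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}

def risky_grants(manifest: dict) -> list:
    """Return the broad host grants a manifest requests. Manifest V2 lists
    hosts under 'permissions'; Manifest V3 moves them to 'host_permissions'."""
    requested = manifest.get("permissions", []) + manifest.get("host_permissions", [])
    return sorted(p for p in set(requested) if p in BROAD_PERMISSIONS)

def audit(extensions_dir: Path):
    """Print each installed extension whose manifest requests broad host access."""
    for manifest_path in extensions_dir.glob("*/*/manifest.json"):
        manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
        grants = risky_grants(manifest)
        if grants:
            # The name may be a "__MSG_...__" localization placeholder.
            print(manifest.get("name", "?"), grants, manifest_path.parent.parent.name)

if __name__ == "__main__":
    # Default profile locations; adjust for your OS and profile name.
    candidates = [
        Path.home() / ".config/google-chrome/Default/Extensions",  # Linux
        Path.home() / "Library/Application Support/Google/Chrome/Default/Extensions",  # macOS
        Path(os.environ.get("LOCALAPPDATA", "")) / "Google/Chrome/User Data/Default/Extensions",  # Windows
    ]
    for candidate in candidates:
        if candidate.is_dir():
            audit(candidate)
```

Broad host access is legitimate for some tools (ad blockers, for instance), so treat a flag as a prompt to verify the extension’s publisher, not proof of malice.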

2) Change your passwords

If you installed any suspicious extension, assume your passwords may be compromised. Start by changing your email password first, since email controls access to most other accounts. Then update passwords for banking, shopping and social media accounts. This prevents attackers from using stolen credentials to break into your accounts.

3) Use a password manager to create and protect strong passwords

A password manager generates unique, complex passwords for each account and stores them securely. This prevents attackers from accessing multiple accounts if one password is stolen. Password managers also alert you if your login credentials appear in known data breaches, helping you respond quickly and protect your identity. Check out the best expert-reviewed password managers of 2026 at Cyberguy.com.

4) Install strong antivirus software and keep it active

Good antivirus software can detect malicious browser extensions, spyware, and other hidden threats. It scans your system for suspicious activity and blocks harmful programs before they can steal your information. This adds an important layer of protection that works continuously in the background to keep your device safe. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.

5) Use an identity theft protection service

Identity theft protection services monitor your personal data, including email addresses, financial accounts, and Social Security numbers, for signs of misuse. If criminals try to open accounts or commit fraud using your information, you receive alerts quickly. Early detection allows you to act fast and limit financial and personal damage. See my tips and best picks on how to protect yourself from identity theft at Cyberguy.com.

6) Keep your browser and computer fully updated

Software updates fix security vulnerabilities that attackers exploit. Enable automatic updates for Chrome and your operating system so you always have the latest protections. These updates strengthen your defenses against malicious extensions and prevent attackers from taking advantage of known weaknesses.

7) Use a personal data removal service

Personal data removal services scan data broker websites that collect and sell your personal information. They help remove your data from these sites, reducing what attackers can find and use against you. Less exposed information means fewer opportunities for criminals to target you with scams, identity theft or phishing attacks.

Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.

Kurt’s key takeaway

Even tools designed to make your life easier can become tools for cybercriminals. Malicious extensions often hide behind trusted names and convincing features, making them difficult to spot. You can significantly reduce your risk by reviewing your browser extensions regularly, removing anything suspicious and using protective tools like password managers and strong antivirus software.

Have you checked your browser extensions recently? Let us know your thoughts by writing to us at Cyberguy.com.


Copyright 2026 CyberGuy.com. All rights reserved.

Technology

Anthropic refuses Pentagon’s new terms, standing firm on lethal autonomous weapons and mass surveillance

Less than 24 hours before the deadline in an ultimatum issued by the Pentagon, Anthropic has refused the Department of Defense’s demands for unrestricted access to its AI.

It’s the culmination of a dramatic exchange of public statements, social media posts, and behind-the-scenes negotiations, stemming from Defense Secretary Pete Hegseth’s desire to renegotiate all AI labs’ current contracts with the military. But Anthropic, so far, has refused to back down from its two current red lines: no mass surveillance of Americans, and no lethal autonomous weapons (that is, weapons with license to kill targets with no human oversight whatsoever). OpenAI and xAI had reportedly already agreed to the new terms, while Anthropic’s refusal led to CEO Dario Amodei being summoned to the White House this week for a meeting with Hegseth himself, in which the Secretary reportedly issued an ultimatum: back down by the end of the business day on Friday, or else.

In a statement late Thursday, Amodei wrote, “I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries. Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community.”

He added that the company has “never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner” but that in a “narrow set of cases, we believe AI can undermine, rather than defend, democratic values” — going on to specifically mention mass domestic surveillance and fully autonomous weapons. (Amodei mentioned that “partial autonomous weapons … are vital to the defense of democracy” and that fully autonomous weapons may eventually “prove critical for our national defense,” but that “today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.” He did not rule out Anthropic acquiescing to the military’s use of fully autonomous weapons in the future but mentioned that they were not ready now.)

The Pentagon had already reportedly asked major defense contractors to assess their dependence on Anthropic’s Claude, which could be seen as the first step to designating the company a “supply chain risk” – a public threat that the Pentagon had made recently (and a classification usually reserved for threats to national security). The Pentagon was also reportedly considering invoking the Defense Production Act to make Anthropic comply.

Amodei wrote in his statement that the Pentagon’s “threats do not change our position: we cannot in good conscience accede to their request.” He also wrote that “should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.”
