What can a 100-pixel video teach us about storytelling around the world?

Since its founding in 2007, the Mumbai-based collaborative studio CAMP has used surveillance, TV networks, and digital archives to examine how we move through and record the world. In addition to its film and video projects, the wildly prolific studio runs a rooftop cinema in Mumbai and maintains several online video archives, including the largest digital archive of Indian film.

CAMP’s first major US museum exhibition is on view now at the Museum of Modern Art in New York through July 20th and includes three video projects spanning two decades of work. The exhibit’s three films repurpose private television sets into interactive neighborhood portraits, collect cellphone footage recorded by sailors navigating the Indian Ocean, and reimagine how a CCTV camera can be used for exploration rather than control. In one film, CAMP collected cellphone videos that sailors shared at ports via Bluetooth; in another, passersby at street level control a surveillance camera 35 stories above.

I chatted with two of CAMP’s founders, Shaina Anand and Ashok Sukumaran, about the importance of maintaining an open digital archive, the slippery definition of piracy, and how footage that never makes it into a finished film is often the most illuminating.

This interview has been edited for length and clarity.

Shaina Anand and Ashok Sukumaran at the opening for the exhibit Video After Video: The Critical Media of CAMP, at The Museum of Modern Art in New York on February 20th, 2025.
Photo by Amelia Holowaty Krales / The Verge

Your film, From Gulf to Gulf to Gulf, offers a portrait of sailors navigating the Indian Ocean, using cellphone videos to document their journeys and daily lives. Can you talk about how that project came to be and how this partnership with the sailors began?

Ashok Sukumaran: Around the global financial crisis, in 2009, we were walking around the city of Sharjah in the UAE. Sharjah is a creek city, like Dubai. Before oil was discovered, the creeks were the main focus of the city center. And these boats were these kind of weird, out-of-time wooden ships, and many of them were going to Somali ports. So, we asked the sailors, “How come there were no issues with pirates?” Because everything we were hearing about Somalia at that time was about piracy. They said, “No, no, there’s a difference between going to the Somali town carrying everything they need and driving past it with a ton of oil.”

Shaina Anand: Almost all of these giant wooden boats were built in these twin towns in the Gulf of Kutch, in Gujarat, and they were massive. They were 800–2,000-ton giant wooden crafts.

AS: There’s a kind of language of the port. The Iranians, the UAE folks, the Somalis, and of course, Indians and Pakistanis speak a kind of common language, which is close to a Hindustani mix of Farsi and Urdu. So, we were able to talk to everyone, to some extent, and we discovered a kind of music video genre that was really inspiring. This was the 2000s, with early Nokia phones, and sailors would shoot video and add music to it. Then their memory cards would run out [and they’d get deleted]. Some of the videos were 100 by 200 pixels.

SA: It was really important to us to try to trace the genealogy of the cellphone video, and it obviously was changing so fast. [The videos were] 10 frames a second, or 13 frames a second, in odd, square formats. It was rapidly changing.

For us, what was striking was that this image emerged in the middle of nowhere, out at sea, when a brethren boat or a comrade boat was filming on a phone. When our film had its festival run at the National Theatre in London, one of the film programmers came and told me, “It gives us such joy to see those images on the best screen in London.” And it gave us the same joy, too. That there is an equality, then.

Many people misread this “low-res image” and [call it] “a poor image,” and we’re like, that is not what it is at all.

How were the videos originally transferred and shared among sailors?

SA: It was a very physical process because these were not found on the internet. We were physically sitting down with people and saying, “What’s on your phone? Can I have a look at it? What did you film?” These [videos] were exchanged over Bluetooth, so they were not uploaded to YouTube, but they were literally transferred by putting the phones together.

AS: [When the boats] anchor for a bit at these smaller islands along the Gulf of Aden or the Persian Gulf, they’re still always in pairs or threes. They travel together for safety. That’s also the time for leisure and piping in those songs.

From Gulf to Gulf to Gulf presented in the first room of the Video After Video: The Critical Media of CAMP exhibition.
Photo by Amelia Holowaty Krales / The Verge

There’s something sweet about this moment of being bored at sea and using that space to create something.

SA: In a lot of our work, you see this idea that the subject of the film is usually behind the camera. They’re usually running the thing, and they are looking out at whatever interests them. At sea, you have a lot of time, even though it’s busy when it’s loading and unloading. But at sea, a lot of people are basically hanging out and taking pictures of the things that they can see. Then the music adds the emotional tenor. All the music in the film was found with the video; we didn’t add any music ourselves.

AS: And then if your phone has 2GB memory, that’s the ephemera bit. The video gets deleted, but it’s found on another boat on someone else’s phone.

SA: And within these communities, the videos are quite traceable because the boats are known. There are a thousand boats, but people would instantly recognize, “That’s so and so.” Even by looking at the shape of the boat in a 100-pixel video, they would know which boat it was.

You talked a little bit about how these videos were really ephemeral; they got erased very quickly. So much of your work seems to be about a commitment to maintaining an archive.

AS: We set up CAMP in 2007, with our collaborators who were lawyers and coders and cinephiles, and then, all of us together, good friends. We set up Pad.ma, our first online archive, and the lawyers were working around copyright law and trying to challenge it legally, pushing fair use. We didn’t want to valorize piracy, but we realized how, for countries in Asia, piracy was vital.

You didn’t even think of [buying software from] Microsoft. You bought the parts of a computer with help from the person selling them, saying, “Okay, so much RAM, this motherboard,” and so on, and then loaded what you wanted.

From left: Shaina Anand, Ashok Sukumaran, Rohan Chavan, and Jan Gerber.
Photo by Amelia Holowaty Krales / The Verge

SA: The whole Indian tech sector was built on piracy, or what’s called piracy. People were not able to pay the fees. With Pad.ma, we basically initiated this idea of a footage archive or a collection of material that was not films, but things that were shot by people during film projects that never made it into the cut. For political reasons, for economic reasons, for the reasons that the films were only 30 or 60 minutes long and they had filmed for years, all those kinds of things. The idea was that Pad.ma was a footage archive that allowed you to deeply access that material.

So it’s an archive of scraps — the things around the edges that maybe weren’t shown elsewhere.

SA: Yeah, but here, the scraps are 20 times the size of the finished thing.

AS: I think that’s the important thing. You had 100 hours of footage for a 60-minute film. That was really the reason for building a non-state archive, and we’re the custodians and collaborators who think the 99 hours may be more important. It’s not those old remnant scraps.

It’s the other way around.

AS: It’s the other way around. I mean, you have a one-hour interview, and two minutes might make it into a film.

SA: You had all these examples of European avant-garde filmmakers coming to India making films and then doing these edits of what they thought they were seeing. But the footage is saying much more than their particular edit at the time. It can be very revealing of what was actually going on and how they filmed.

So the archives contain a huge amount of data.

SA: I mean, we have committed to that. We raised money from various sources for the projects. Indiancine.ma, a sister project, is like the whole of Indian cinema as a metadata archive.

AS: There were magical things on the platform in 2008. One was that the timeline had cut detection, so you can actually go to a cut just by using your left and right arrow keys. You don’t have that even in [Adobe] Premiere. You could also densely annotate. So you have researchers working, you have activists, you have film scholars, and they may take from the archive. But in that process, they’ve given back their expertise or their views of the archive.
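Cut detection of the kind described here can be approximated with a simple frame-comparison pass. The Python sketch below is only a hedged illustration of that general idea, not Pad.ma’s actual implementation; it uses standard OpenCV calls, and the histogram size, similarity threshold, and file name are assumptions chosen for the example.

```python
# Illustrative shot-cut detection -- not Pad.ma's implementation.
# It flags a cut wherever the color histograms of two consecutive
# frames stop correlating; the 0.5 threshold is an assumed value.
import cv2

def detect_cuts(path, threshold=0.5):
    cap = cv2.VideoCapture(path)
    cuts, prev_hist, index = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Coarse hue/saturation histogram of the current frame.
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Low correlation between neighboring frames suggests a hard cut.
            similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if similarity < threshold:
                cuts.append(index)
        prev_hist, index = hist, index + 1
    cap.release()
    return cuts  # frame indices a timeline could jump between

print(detect_cuts("footage.mp4"))  # "footage.mp4" is a placeholder file
```

A viewer could then bind the left and right arrow keys to the previous and next index in that list, which is roughly the navigation behavior described above.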

Can you talk more about your work with participatory filmmaking?

AS: On one level, what had been occupying my head space was this critique of how documentary images are taken, or why this relationship between subject, author, and technology is so dumb.

I would keep saying, “Look at the image,” and we can say a white guy filmed it, or we can know this really important Indian filmmaker filmed it, or you can say a top feminist filmmaker filmed it, or a queer person filmed it, or a person from that community. But something’s a bit off in that form as well. Not just [in terms of] who’s speaking for who and all of that.

Another of your projects in the exhibit, Khirkeeyaan, which created video portals between neighbors and community centers using CCTV, seems like a place where the subject has a lot of authority over their image.

AS: Between 2005 and 2006, CCTV cameras started to proliferate all over. And they were cheap. So, the electronics market where we’d go to buy computer stuff had now become a CCTV market.

It was $10 for those static cameras. You could get that quad box, like a four-channel mixer. They were everywhere really fast: the grocery store, the dive bar, the beauty salon, the abortion clinic. Wherever I went, I was seeing these tiny things.

Photo by Amelia Holowaty Krales / The Verge

SA: When you put the camera on top of the TV and you allow the two systems to meet, you can just look into the television, and then that’s part of the cable television network. By default, these systems are kind of oppositional. One is a broadcast system, or one is a sucking and one is a closed thing, and if you join them together, they start to talk to each other or—

Download and upload simultaneously.

AS: Exactly, which was the key property of video. That there was feedback. It was immediate.

SA: It was live, and unlike film, you don’t have to process it. They were ambient. They would go on for 24 hours. You were able to say that your household TV is now a portal.

AS: The key thing was that this wasn’t the internet. The cables were all 100 meters each. For a long time, until it got replaced by dish antennas, coaxial cable just used to snake across our cities. The cable would come to your house from the window sill, where the coax would be wrapped around, and there’d be a little booster. It would go from neighborhood to neighborhood, building to building, terrace to terrace. [With Khirkeeyaan], the network was neighborly, but these neighbors were meeting each other for the first time.

Was there anything that kind of surprised you about the way that this network was used?

AS: What always surprises me, and continues to, is that when you set up your own kind of collaboration with the subjects, and then you exit, you’re not asking those leading questions of, “Tell me about your life,” or “Which village do you come from?” And poetry happens. I think, what was very affirmative for me, was just the confidence with which people sat and looked at their TV sets. You sit and look at your TV set all the time, but the TV set now had a hole in it, and it was looking back at you.

Shaina Anand stands in front of the projection of Bombay Tilts Down displayed in the final room of the exhibit, Video After Video: The Critical Media of CAMP.
Photo by Amelia Holowaty Krales / The Verge

Another of your videos in the show, Bombay Tilts Down, uses a CCTV camera. Can you talk more about your work utilizing surveillance?

SA: CCTV, in a way, changes how we behave. It sort of infects, depending on who is watching us and how.

In Bombay Tilts Down, it was the simple idea that this gaze of the camera is already there. In the city, there are 5,000 of exactly the same kind of camera, and probably many more.

They’re all at least 4K, and now they’re 8K, but they are robotic controllable cameras that are designed to do facial recognition at a distance. Instead of being a guard, waiting for something to happen, we used it to film the city. And the range is incredible; it goes way beyond the property line of the thing it’s trying to protect. You can see 15 kilometers away with it, from the 35th floor.

So you installed the camera yourself.

SA: This one, yes. The people you see in Bombay Tilts Down are looking up at the camera because people could see the stream downstairs, and some of them were moving the camera around, calling the shots.

Two of my favorite color e-book readers are the cheapest they’ve been in months

Color isn’t essential in an e-reader, but let’s be honest, it’s a nice perk that can bring digital books, magazines, comics, cookbooks, and other publications to life. The catch is that color e-book readers tend to be substantially pricier, which makes today’s deals stand out. Right now, the Kindle Colorsoft (16GB) and Kobo Libra Colour are matching their lowest prices to date, with the Amazon e-reader going for $169.99 ($80 off) at Amazon and Best Buy, and the Libra Colour going for $199.99 ($30 off) via Rakuten’s online storefront.

At their core, both are excellent e-readers with 7-inch, 300ppi E Ink displays, which drop to 150ppi when viewing color. The Colorsoft’s display is slightly more vibrant in most instances, but the difference isn’t dramatic. Each also offers IPX8 water resistance, so you don’t need to worry about spills and can comfortably read in the bath or by the pool.

Which one makes more sense for you largely depends on where you buy your books, how much storage you need, and whether you like to take notes. The Colorsoft is great if you’re heavily embedded in Amazon’s ecosystem, as buying and accessing Kindle books is intuitive and doesn’t require any sideloading. As the more affordable option in Amazon’s lineup, the standard Colorsoft delivers a nearly identical reading experience to the Signature Edition, and it supports Amazon’s “Send to Alexa Plus” feature, which lets you send notes or documents to Amazon’s AI-powered assistant for summaries, to-do lists, reminders, and more. The downside is that it lacks wireless charging and an auto-adjusting front light — which are standard on the step-up model — and comes with 16GB of storage instead of 32GB.

That said, if I didn’t already own so many Kindle books, the Libra Colour would be my pick. It offers double the storage at 32GB and includes intuitive physical page-turn buttons. You can also write notes while reading thanks to its stylus support, and it includes built-in notebook templates as well as the ability to convert handwriting to typed text. It also supports EPUB and a wider range of file formats, and lets you save articles for offline reading with Instapaper. Finally, it offers adjustable warm lighting, which makes reading at night a little easier on the eyes.

Robot plays tennis with humans in real time

A humanoid robot is now rallying tennis shots with a human in real time. It runs without a script or remote control, so it can react instantly on a tennis court.

Galbot Robotics released a video showing its robot going shot-for-shot with a human player. The robot stands about 4 feet tall, giving it a compact, human-like frame. The system behind it is called LATENT and runs on the Unitree G1.

And it is not just returning the ball. It is moving, adjusting and competing during live play.

A humanoid robot rallies tennis shots with a human in real time, reacting without scripts or remote control during live play. (Galbot Robotics)

Why this tennis robot is different from others

Most athletic robots you have seen follow scripts. They perform pre-programmed actions or rely on a remote control. This one operates differently. It reacts to a human opponent in real time, tracking fast-moving balls, shifting across the court and returning shots with surprising accuracy. It also adjusts to changing trajectories and unpredictable shots during rallies. Researchers say it can sustain long rallies with millisecond-level reactions and full-body coordination. That marks a major step forward.

How the AI learned to play tennis

Training a robot to play tennis is extremely complex. Tennis involves:

  • Ball speeds of up to 67 miles per hour
  • Split-second racket contact
  • Constant movement across a large court

Capturing complete human gameplay data is difficult. So the researchers used a different method.

Training the robot using motion fragments

Instead of recording full matches, they focused on small segments of movement:

  • Forehands
  • Backhands
  • Side steps

They gathered about five hours of motion data from five players. The sessions took place on a compact 10-by-16-foot court, about 160 square feet, roughly one-seventeenth the area of a standard 78-by-36-foot tennis court.

Humanoid robots designed by Galbot Robotics select items from a shelf at the Shanghai New Expo Center in Shanghai, China, on July 26, 2025. Galbot Robotics also designed the tennis-playing robot that learns movement fragments and applies them in live competition. (Ying Tang/NurPhoto via Getty Images)

How the robot plays tennis during live rallies

The system first learns individual movements. Then it combines them into coordinated sequences. That allows the robot to:

  • Move toward the ball
  • Strike it with control
  • Recover and reposition

To improve performance, the team trained the model in simulation. They varied physical conditions such as mass, friction and aerodynamics. This helps the robot adapt to real-world unpredictability. As a result, the system responds dynamically instead of following a fixed routine. 
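Varying the physics between simulated episodes is a common domain-randomization pattern. The sketch below is a hedged illustration of that general approach, not Galbot’s actual training code: the parameter names, ranges, simulator stub, and policy class are all assumptions made for the example.

```python
# Illustrative domain-randomization loop -- not Galbot's training code.
# Each episode samples slightly different physics so the learned policy
# cannot overfit to one exact simulator configuration.
import random

def sample_physics():
    # Parameter names and ranges are assumptions chosen for illustration.
    return {
        "ball_mass_kg": random.uniform(0.054, 0.060),
        "court_friction": random.uniform(0.5, 0.9),
        "air_drag_coeff": random.uniform(0.45, 0.60),
    }

def simulate_rally(policy, physics):
    # Placeholder for a real physics simulator: returns a dummy trajectory.
    return [{"physics": physics, "reward": random.random()} for _ in range(10)]

class Policy:
    def update(self, trajectory):
        # A real implementation would run a reinforcement-learning update here.
        pass

def train(episodes=1000):
    policy = Policy()
    for _ in range(episodes):
        physics = sample_physics()                    # new physics every episode
        trajectory = simulate_rally(policy, physics)  # roll out one rally
        policy.update(trajectory)                     # improve the policy
    return policy

train()
```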

How well does it actually perform against humans?

In testing, the system achieved up to 96% success on forehand shots in simulation. In real-world trials, the robot can sustain rallies with a human and consistently return the ball across the net.

In the demo, the robot appears competitive. At times, it places shots away from the human player. That suggests more than a simple reaction. It points toward early forms of decision-making.

There are still limits. The robot can look unstable at times. Its motion is not yet as fluid as a trained athlete’s. High or unpredictable shots may still present challenges. Even so, the progress is clear.

Why this matters beyond tennis

This breakthrough goes far beyond tennis. It shows how robots can learn complex human skills without perfect data. The same approach could apply to:

  • Football
  • Badminton
  • Industrial work
  • Search and rescue

Any task that lacks complete motion data could benefit from this method. That is the bigger picture.

A robot dances at the launch ceremony of a Galbot Robotics retail store in Beijing, China, on August 7, 2025. The company has also designed a 4-foot robot that returns tennis shots with millisecond reactions and full-body coordination. (VCG/VCG via Getty Images)

Could robots compete with humans one day?

The path forward is becoming clearer. Today, the robot rallies. Next, it competes. In time, robots could train with or challenge professional athletes. Exhibition matches between humans and machines may become part of the sport. That future no longer feels far away.

Kurt’s key takeaways

This demo shows how quickly things are changing. Robots are no longer stuck following scripts. They can now react, adjust and compete in real situations. What used to feel far off is starting to show up right in front of us.

So here is the question: If a robot could outplay you on the court, would you still want to compete, or would you rather train with it? Let us know by writing to us at Cyberguy.com.

AI influencer awards season is upon us

First came the AI beauty pageant. Then the AI music contests. Now, there is an award for AI Personality of the Year — perhaps the inevitable next step for the AI influencer economy as it transforms from quirky novelty into a serious and lucrative industry.

The contest, a joint venture between generative AI studio OpenArt and AI-powered creator platform Fanvue, with backing from AI voice company ElevenLabs, opens on Monday and runs for a month. The organizers said it is intended to “celebrate the creative talent ‘behind’ AI Influencers” and recognize their growing commercial and cultural clout.

Contestants will compete for a total prize fund of $20,000, which will be split between an overall winner and individual categories of fitness, lifestyle, comedian, music and dance entertainer, and fictional cartoon, anime, or fantasy personality. Victors will be celebrated at an event in May that the organizers are dubbing the “‘Oscars’ for AI personalities.”

To enter, you must develop your AI influencer on OpenArt’s platform and submit it at www.AIpersonality.ai. You’ll be asked for social media handles across TikTok, X, YouTube, and Instagram, as well as the story behind the character, your motivations for creating it, and details of any brand work.

Among those assessing contestants are 13‑time Emmy‑winning comedy writer Gil Rief, the creators of Spanish AI model Aitana Lopez, and Christopher “Topher” Townsend, the MAGA rapper behind AI-generated gospel singer Solomon Ray. According to a copy of the judges’ briefing seen by The Verge, contestants will be scored on four criteria: quality, social clout, brand appeal, and the inspiration behind the avatar. Specific points include reliably engaging with followers, portraying a consistent look across social channels, accurate details like having the “right number of fingers and thumbs,” and having “an authentic narrative” behind the avatar.

The contest is open to established creators and novices alike, though existing AI influencers will still need to submit material produced on OpenArt’s platform, Matt Jones, head of brand at Fanvue, told The Verge.

Although the contest is designed to celebrate the creators of virtual influencers, Jones said that entrants don’t need to publicly identify themselves. “If a person who created this amazing piece of work wants nothing to do with the press or to expose themselves or to have their name out there, that’s obviously fine,” he said. “There would be no need to thrust anybody into the limelight here. We would just celebrate the piece of work.”

That creators can remain anonymous feels odd for a contest judging authenticity, particularly in an AI influencer ecosystem built on fictional people, fake personas, and fabricated backstories. That same anonymity has also helped grifts flourish with little accountability, from the AI white nationalist rapper Danny Bones to MAGA fantasy girl Jessica Foster.

There’s familiar baggage too, including persistent questions about originality, whether AI-generated work, or even a likeness, has been lifted from real creators, and whether these tools simply reproduce the same old biases in synthetic form. Organizer Fanvue has already faced criticism for this in the past: in 2024, a Guardian columnist described its “Miss AI” beauty pageant as something that “take(s) every toxic gendered beauty norm and bundle(s) them up into a completely unrealistic package.”

To Fanvue’s Jones, creators inevitably leave something of themselves in the AI characters they make. “You can’t help but put a little bit of yourself into the stories that you tell and the characters that you make,” he said, urging creators to “lean into that.” The idea feels at home in the influencer economy: not strictly real, but a form of synthetic authenticity the internet already knows how to handle.
