Ghosts in the Kinect

Billy Tolley swings a Microsoft Kinect around an abandoned room in sudden, jittery movements. “Whoa!” he says. “Dude, it was so creepy.” On the display, we see an anomaly of arrows, spheres, and red lines that disappears almost as soon as it arrives. For Tolley and Zak Bagans, two members of the Ghost Adventures YouTube channel, this is enough to suggest they should leave the building. Because for this team and other similar enthusiasts, that seemingly innocuous blotter of white arrows means something more terrifying: a glimpse at specters and phantoms invisible to the human eye.

Fifteen years after its release, just about the only people still buying the Microsoft Kinect are ghost hunters like Tolley and Bagans. Though the body-tracking camera, which was discontinued in 2017, started as a gaming peripheral, it also enjoyed a spirited afterlife outside of video games. But in 2025, its most notable application is helping paranormal investigators, like the Ghost Adventures team, in their attempts at documenting the afterlife.

The Kinect’s ability to convert the data from its body-tracking sensors into an on-screen skeletal dummy delights these investigators, who allege the figures it shows in empty space are, in fact, skeletons of the spooky, scary variety. Looking at it in use — the Kinect is particularly popular with ghost-hunting YouTubers — it’s certainly producing results, showing human-like figures where there are none. The question is: why?

With the help of ghost hunters and those familiar with how the Kinect actually works, The Verge set out to understand why perhaps the most misbegotten gaming peripheral has gained such a strong foothold in the search for the paranormal.

Part of the reason is purely technical. “The Kinect’s popularity as a depth camera for ghost hunting stems from its ability to detect depth and create stick-figure representations of humanoid shapes, making it easier to identify potential human-like forms, even if faint or translucent,” says Sam Ashford, founder of ghost-hunting equipment store SpiritShack.


This is made possible by the first-generation Kinect’s structured light system. By projecting a grid of infrared dots into an environment — even a dark one — and reading the resulting pattern, the Kinect can detect deformations in the projection and, through a machine-learning algorithm, discern human limbs within those deformations. The Kinect then converts that data into a visual representation of a stick figure, which, in its previous life, was pumped back into games like Dance Central and Kinect Sports.
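To make the structured-light step concrete, here is a minimal, self-contained Python sketch of the triangulation idea: depth is inferred from how far each projected infrared dot shifts relative to a stored reference pattern. This illustrates the geometry only, not Microsoft's actual PrimeSense pipeline, and the baseline, focal-length, and reference-depth numbers are assumed values rather than the Kinect's real calibration.

```python
# Sketch of the structured-light idea behind the first-gen Kinect: depth is
# recovered by triangulation from how far a projected IR dot shifts relative
# to a reference pattern. All constants are illustrative assumptions, not the
# device's real calibration.

BASELINE_M = 0.075   # assumed projector-to-camera baseline, in meters
FOCAL_PX = 580.0     # assumed IR camera focal length, in pixels
REF_DEPTH_M = 2.0    # depth at which the reference dot pattern was recorded


def depth_from_dot_shift(shift_px: float) -> float:
    """Convert a dot's horizontal shift (in pixels, versus the reference
    image) into a depth estimate in meters via simple triangulation."""
    disparity_at_ref = BASELINE_M * FOCAL_PX / REF_DEPTH_M
    disparity = disparity_at_ref + shift_px  # positive shift = closer surface
    if disparity <= 0:
        return float("inf")  # dot apparently beyond any measurable range
    return BASELINE_M * FOCAL_PX / disparity


if __name__ == "__main__":
    for shift in (-5.0, 0.0, 5.0, 20.0):
        print(f"dot shift {shift:+5.1f} px -> depth {depth_from_dot_shift(shift):.2f} m")
```

Once a full frame of these per-dot depths exists, the skeleton fitting is a separate, learned step; the sketch above only covers the geometry that produces the depth map.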


When it was released in 2010, the first-gen Kinect was cutting-edge technology: a high-powered, robust, and lightweight depth camera that condensed what would usually retail upward of $6,000 into a $150 peripheral. Today, you can find a Kinect on eBay for around $20. Ghost hunters, however, typically mount it to a carry handle and a tablet and upsell it for around $400-600, rebranded as a “structured light sensor” (SLS) camera. “The user will direct the camera to a certain point of the room where they believe activity to be present,” says Andy Bailey, founder of a gear shop for ghost hunters called Infraready. “The subject area will be absent of human beings. However, the camera will often calculate and display the presence of a skeletal image.”

Though this is often touted as proof we’re all bound for an eternity haunting aging hotels and abandoned prisons, Bailey urges caution, telling would-be ghost hunters that the cameras are best paired with other equipment to “provide an additional layer of supporting evidence.” For this, Ghost Hunters Equipment, the retail arm of haunted tour operator Ghost Augustine, recommends that “EMF readings, temperature, baseline readings, and all of that are essential when considering authentication of paranormal activity.”

That’s because the Kinect isn’t always seeing what it thinks it is. But what is it actually seeing? Did Microsoft, while trying to break into a motion-control market monopolized by the Nintendo Wii, accidentally create a conduit through which we might glimpse the afterlife? Sadly, no.



The Kinect is actually a straightforward piece of hardware. It is trained to recognize the human body, and assumes that it’s always looking at one — because that’s what it’s designed to do. Whatever you show it, whether human or humanoid or something entirely different, it will try to discern human anatomy. If the Kinect is not 100 percent sure of its position, it might even look like the figure it displays is moving. “We may recognise the face of Jesus in a piece of toast or an elephant in a rock formation,” says Jon Wood, a science performer who has a show devoted to examining ghost hunting equipment. “Our brains are trying to make sense of the randomness.” The Kinect does much the same, except it cannot overrule its hunches.

That suits ghost hunters just fine, of course: the Kinect’s habit of finding human shapes where there are none is a crowd-pleaser. The Kinect, deployed in dark rooms bathed in infrared light from cameras and torches, wobbling in the hands of excitable ghost hunters as it tries to read a precise grid of infrared points, is almost guaranteed to show them what they want to see.

Much of ghost hunting depends on ambiguity. If you’re searching for proof of something, be it the afterlife or not, logic suggests you’d want tools that can provide the clearest results, the better to cement the veracity of that proof. Ghost hunters, however, prefer technology that will produce results of any kind: murky recordings on 2000s voice recorders that might be mistaken for voices, low-resolution videos haunted by shadowy artifacts, and any cheap equipment that can call into question the existence of dust (sorry, spirit orbs) — bonus points if battery life is temperamental.

“I’ve watched ghost hunters use two different devices for measuring electromagnetic fields (EMF),” Wood says. “One would be an accurate and expensive TriField TF2 that never moves unless it actually encounters an electrical field. The other would be a £15 [$18], no-brand, ‘KII’ device with five lights that go berserk when someone so much as sneezes. Which one was more popular, do you think?”



Given the notoriously unreliable skeletal tracking of the Kinect — most non-gaming applications bypass the Kinect’s default SDKs, preferring to process its raw data by other, less error-prone, means — it would be stranger if it didn’t see figures every time it’s deployed. But that’s the point. Like so much technology ghost hunters use, the Kinect’s flaws aren’t bugs or glitches. They’re not tolerated — they’re encouraged.
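As a rough illustration of what processing the raw data "by other means" can look like, the sketch below reads a single raw depth frame through the open-source libfreenect project's Python bindings (an assumption about your setup; this is not Microsoft's Kinect SDK) and asks a much blunter question than the skeleton tracker does: is there a large, solid object in front of the sensor at all? The raw-unit cutoff of 800 is an assumed, uncalibrated threshold for "roughly within a meter or so."

```python
# Rough sketch: inspect the Kinect's raw depth data directly instead of
# trusting the skeleton tracker. Assumes the open-source libfreenect Python
# bindings (the "freenect" module) are installed; depth comes back as a
# NumPy array of raw 11-bit values.

import freenect

depth, _timestamp = freenect.sync_get_depth()  # one raw depth frame
valid = depth < 2047                           # 2047 means "no reading" in raw mode
near = valid & (depth < 800)                   # assumed cutoff: roughly within a meter

print(f"valid pixels: {valid.mean():.0%}, nearby-object coverage: {near.mean():.0%}")
if near.mean() < 0.05:
    print("Almost nothing solid in frame; any 'skeleton' here is the model guessing.")
```

Checks like this are one reason researchers preferred the raw stream: the depth data itself is dependable, while the human-model fitting layered on top is what produces figures where there are none.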

“If a person pays good money to enjoy a ghost hunt, what are they after?” Wood asks. “They prime themselves for a ‘spooky encounter’ and open up to the suggestion of anything being ‘evidence of a ghost’ — they want to find a ghost, so they make sure they do.”

If it were just the skeletal tracking that ghost hunters were after, better options are now possible with a simple color image. But improved methodology wouldn’t return the false positives that maintain belief, and so skeletal tracking from 2010 is preferred. None of this is likely to move the needle for believers toward something more skeptical. But we do know why the Kinect (or SLS) returns the results it does, and we know it’s not ghosts.

That said, even if its results are erroneous, maybe the Kinect’s new lease on afterlife isn’t a bad thing. Much as ghosts supposedly patrol the same paths over and over until interrupted by ghost hunters, perhaps it’s fitting that the Kinect will continue forevermore to track human bodies — even if the bodies aren’t really there.

Legendary composer Laurie Spiegel on the difference between algorithmic music and ‘AI’

In 1986, electronic music pioneer Laurie Spiegel created Music Mouse, a way for those with a Mac, Atari, or Amiga computer to dabble in algorithmic music creation. Music Mouse is deceptively simple: Notes are arranged on an XY grid, and you play it by moving a mouse around. Back in 1986, the computer mouse was still a relatively novel device. While it can trace its origins back to the late ’60s, it wasn’t until the Macintosh 128K in 1984 that it started seeing widespread adoption.

By then, Spiegel was already an accomplished composer. Her 1980 album The Expanding Universe is generally considered among the greatest ambient records of all time. And her composition “Harmony of the Worlds” is currently tearing through interstellar space as part of the Voyager Golden Record, launched in 1977. But she is also a technical wizard who joined Bell Labs in 1973, was instrumental in early digital synthesis experiments, and worked on an early computer graphics system called Vampire.

Spiegel was deeply drawn to algorithmic music composition and this new tool, the home computer. So, she created what she calls an “intelligent instrument” that enables the creation of complex melodies and harmonies with minimal music-theory knowledge. Music Mouse restricts you to particular scales, and then you explore them simply by pushing a mouse around.

Spiegel gives the user some control, of course. You can choose if notes move in parallel or contrary to each other, there are options to play notes back as chords or arpeggios, and there is even a simple pattern generator.
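For a sense of how little machinery the core idea needs, here is a toy Python sketch in the spirit of Music Mouse (not Spiegel's actual code or Eventide's port): it quantizes a 0-to-1 mouse coordinate onto a scale and derives a second voice in parallel or contrary motion. The C major scale, the two-octave span, and the simple axis-inversion rule are all illustrative assumptions.

```python
# Toy sketch in the spirit of Music Mouse (not Spiegel's actual algorithm):
# a 0.0-1.0 mouse coordinate is quantized onto a scale, and a second voice
# follows in parallel or contrary motion. MIDI note numbers are printed
# rather than sent to a synth, to keep the example self-contained.

C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # scale degrees, in semitones above the root
ROOT = 60                          # MIDI middle C
SPAN = 14                          # two octaves' worth of scale steps


def quantize(position: float) -> int:
    """Map a 0.0-1.0 coordinate onto a MIDI note constrained to the scale."""
    step = min(int(position * SPAN), SPAN)       # which scale step we landed on
    octave, degree = divmod(step, len(C_MAJOR))
    return ROOT + 12 * octave + C_MAJOR[degree]


def two_voices(x: float, y: float, motion: str = "parallel") -> tuple[int, int]:
    """Melody follows the X axis; the second voice follows Y, either directly
    (parallel) or with the axis inverted (contrary)."""
    melody = quantize(x)
    harmony = quantize(y if motion == "parallel" else 1.0 - y)
    return melody, harmony


if __name__ == "__main__":
    # Simulate dragging the mouse diagonally across the grid.
    for i in range(5):
        t = i / 4
        print(t, two_voices(t, t, motion="contrary"))
```

The real program layers much more on top (voice count, timbre, tempo, chords, arpeggios), but the principle is the same: the instrument handles note-level correctness so the player can think in gestures.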

Though Music Mouse remained available for purchase until 2021, Spiegel never updated it to run on anything more recent than Mac OS 9. Now, 40 years after its debut, it’s being reborn for modern machines with help from Eventide.


Music Mouse is finally running on modern hardware.
Image: Eventide

While it would have been easy for Eventide and Spiegel to overload the 2026 version of Music Mouse with countless modern amenities and new features, they kept things restrained for version 1.0. The core feature set is the same, though the sound engine is more robust and includes patches based on Spiegel’s own Yamaha DX7. There are also some enhanced MIDI features, including the ability to feed data from Music Mouse into your DAW or an external synthesizer.

Laurie Spiegel answered some questions for us about the history of Music Mouse, algorithmic composition, AI, and why she thinks the computer is a “folk instrument.”

What were the origins of Music Mouse? Was there something specific that inspired its creation?

When the first Macs came out, the use of a mouse as an input device, as an XY controller, was altogether new. Previous computers had just alphanumeric keyboard input or maybe custom controllers. The most obvious thing I immediately wanted to do was to be able to push sound around with that mouse. So, as soon as the first C compilers came out, I coded up a way to do that. Pretty soon, though, I wanted the sound quantized into scales, then to add more voices to fill out the harmony. Then I wanted to have controls for timbre, tempo, and everything else I eventually added.


How did you connect with Eventide for this new version?

I first met Tony and Richard of Eventide all the way back in the early 1970s. They are longtime good friends. I’d been involved in various music tech projects at Eventide over the years. Tony knew that I really missed Music Mouse and that I still get a fair number of requests for the 1980s versions from people who keep vintage computers from that era just to be able to run Music Mouse or other obsolete software. He decided it was a musical instrument worth reviving. I had been wanting to revive it, but hadn’t been able to find the time to even just keep up with the way development tech keeps changing. My main thing is really composing music, and I have an active enough career doing that to not have enough time to do coding as well. I am extremely grateful to Eventide for resuscitating Music Mouse. I hope a lot of people will get a lot of music out of this new version.

Did you feel compelled to make any big changes to it after 40 years?

We decided to keep 1.0 of this new version of Music Mouse functionally the same as the 1980s original. The exceptions are adding a higher-quality internal synthesizer and providing ways to sync it with other software, to record or notate its MIDI output. We have a growing list of features to add in 2.0.


Are there any current innovations in music tech that excite you?

That’s a hard question, because I am not all that excited about music tech right now. It’s music itself that holds my interest — composition, form, structure. I love counterpoint and the various contrapuntal forms. I studied them extensively when I was younger. Of course, harmonic progression is something I’m also very interested in, and in algorithmic assistance for composing it.

That various kinds of structures within music can now be more easily dealt with in computer software has both pros and cons. The pros include how much more deeply we have to understand how music works, how it is structured, and how it affects us, in order to represent it as a process description in software. That means learning, research, and self-discovery. The cons include that it’s pretty easy by now to use computers to generate music-like material that is not actually the expression of an individual human being. Music is a fundamental human experience. There is no human society that doesn’t have it. But it is something that comes from within human beings, as personal expression, as communication, as a sort of form of documentation of what we are feeling, and as a means of sharing it.

You’ve been credited as saying that the computer is a new kind of folk instrument. Can you explain what you mean by that? How does something like Music Mouse fit into that model?

Now that everyone with a computer or even just a phone has the ability to record and edit and play back and digitally process and transform sound, and particularly ever since sampling became a common musical technique, people have been doing remixes, collages, sonic montages… doing all kinds of stuff to audio they get from others or find online. This is very like what we used to call “the folk process,” in which music is repurposed, re-orchestrated, given new lyrics or otherwise modified as it goes from person to person and is adapted to fit what is meaningful in successive groups of people.


Music Mouse will help people create musical materials that can be used in a potentially infinite number of ways. It is a personal, often home-based instrument played by an individual, like a guitar.

Laurie Spiegel in 1990 in front of her Mac and a large collection of synths and other audio gear.
Image: Marilyn McLaren

You refer to Music Mouse as an “intelligent instrument”; it automates a certain amount of creation. What is the appeal of letting a computer take the wheel to a degree, as an artist?

Music Mouse is not a generative algorithm or an “AI.” It’s a musical instrument that a person can play. It is, to some degree, what we used to call an “expert system,” as it has some musical expertise built in. But that is meant to be supportive for the real live human being who is playing it, not to replace them. It makes the playing of notes easier in order to let the player’s focus be on the level of phrasing or form. I have coded up generative algorithms for music. Music Mouse is not one of them. It’s an instrument that an individual can play, and it’s under their control. It enables a different perspective that’s from above the level of the individual note.

Do you see a connection between modern generative AI and algorithmic composition tools?

Of course. Algorithms can be used to generate music. I have written and used some. Music Mouse is not generative, though. It does nothing on its own. It’s a musical instrument played by a person.


What is currently called “AI” is different from previous generations of artificial intelligence. I expect there will doubtless be further evolution. In the early years of my use of computer logic in composing, AI was more of a rule-based practice. We would try to figure out how the mind was making a specific kind of decision, code up a simulation to test our hypothesis, and then refine our understanding in light of the result. After that, there was a period of AI taking more of a brute-force approach. Computer chess, for example, would involve generating all possible moves in a given situation, then eliminating those that would be less beneficial. Then neural nets were brought in for a next generation of AI. I look forward to getting beyond the imitative homogenizing LLM approach and seeing whatever comes next.

There are many ways of designing an algorithm that either generates music or else helps a human being to do that, making some of the decisions during the person’s creative process to leave them free to focus on other aspects. By taking over some of the decision-making, they can free a creative mind to focus on different perspectives. People just starting to learn music too often bog down and give up at the level of simply playing the notes, just figuring out where to put their fingers. We can make musical instruments now that let people use a bit of automation on those low levels to let them express themselves on a larger level, for example, to make gestures in texture-space rather than thinking ahead just one note at a time.


What do you think separates algorithmically generated music from something created by generative AI?

Artificial intelligence refers to a specific subset of ways to use algorithms. An algorithm is just a description of a process, a sequence of steps to be taken. A generative algorithm can make decisions involved in the production of information, and, of course, music is a kind of information. You can think of AI as trying to simulate human intelligence. It might have a purpose, such as taking over some of our cognitive workload. In contrast, the purpose of generative algorithms is to create stuff. In music, that purpose is to create an experience.


Music Mouse is not a generative algorithmic program. It’s more of a small expert system in that it has built into it information and methods that can help its player get beyond the level of just finding notes, to the level of finding personal expression.

Suno’s CEO Mikey Shulman has said, “Increasingly taste is the only thing that matters in art and skill is going to matter a lot less.” In an age where music can be easily created using algorithms, plug-ins, and text prompts on cheap laptops and smartphones, do you see the role of composer being one primarily of curation?

I can see where he’s coming from, but, no, I don’t think so. The range and kinds of skills used in the creative arts will continue to evolve and expand. But the history of creative techniques shows them to be largely cumulative versus sequential. The keyboard synthesizer has not replaced the piano, which has not replaced the harpsichord or the organ. We have them all, that whole lineage, all still in use. Each musical instrument or artistic technique implies its own unique artistic realm. Each is defined by its specific limitations, which guide us as we use them. It is true that skills and traditional techniques will be an option rather than a prerequisite to creating music and art, but people will still do them. Just as LPs and chemical film have made comebacks recently, I expect to see traditional musical skills do the same. We have had computers and synthesizers for decades, yet there are still little children captivated by instruments made out of wood or painting or drawing, and I have yet to use any music editing software that gives me the fluidity and freedom of a pencil on staff paper. There will just be more kinds of complementary ways of making music.

More importantly, we humans have imaginations and emotions. There are internal experiences going on inside of us that we feel driven to express, to communicate, to share. It doesn’t matter what machines can generate on their own. We will always have those internal subjective experiences, emotion, and imagination, and people will experience them intensely enough to feel driven to create them external to their own selves in order to communicate and share them. You can’t replace human self-expression or the need for it by simulating their results. Artistic creation comes from a fundamental human drive, the need for self-expression. Artistic creativity is an essential method of processing the intensity of being alive.

Laurie Spiegel in the studio in 1985.
Image: Enrico Ferorelli

You told New Music USA in 2014 that, in regard to electronic music, “There is no single creator… the concept of a finite fixed-form piece with an identifiable creator that is property and a medium of exchange or the embodiment of economic value really disappears.” Does this idea shape your views on ownership of art?


Those assumptions, which we inherited from the European classical model of music, are already much less prominent in our musical landscape. Improvisation, “process pieces,” the ease with which we can do transformations of audio files are all over the place. Folk music, and a lot of what we heard online here and there, might be audio that no longer has any known originator. We don’t know, and people don’t really care, who first created a swatch of sound. We are experiencing whatever has been done with it — different orchestrations, durations, signal processing. The huge proliferation of plug-ins and guitar effects pedals let anyone transform a sound beyond recognition. This is composition on a different level than on the level of the individual note, similarly to Music Mouse.

Another very important aspect of “folk music” is that it is typically played at home, with or for friends or family, or alone. This is very different from formal concert settings and programming we in the US inherited from Europe. For me, the most important musical experience is just about always at home, where we live. To quote what Pete Seeger said in his write-up of Music Mouse in Sing Out: “she [meaning me] foresees a day when computer pieces will be like folksongs, anonymous common property to be altered by each new user. She would like to get music out of the concert hall and back into the living room.”

Music Mouse is available for macOS and Windows 11 for $29.

China unveils the world’s largest flying car


China just sent a clear signal about where it believes air travel is headed next. A Shanghai-based aviation company called AutoFlight has unveiled Matrix, now recognized as the world’s largest flying car. This is not a concept image or a brief hover test. Matrix has already completed successful flight tests near Shanghai, bringing real size and real ambition to an industry still dominated by small prototypes.

The launch also highlights China’s push to dominate what it calls the low-altitude economy. That sector focuses on short-distance flights using electric aircraft to move people and cargo above busy roads.


Matrix during flight testing near Shanghai, where the aircraft demonstrated real world performance at a scale rarely seen in flying car development. (AutoFlight)

Matrix becomes the world’s largest flying car

Matrix stands out immediately once you look at the specs. The aircraft weighs nearly 11,000 pounds. It measures about 56 feet long, stands roughly 11 feet tall and has a wingspan close to 66 feet. That makes it significantly larger than most flying cars currently under development. Most electric vertical takeoff and landing aircraft today focus on compact designs. Many seat four to six passengers and prioritize lightweight frames. Matrix takes a different approach. Its scale allows it to operate more like a true aircraft rather than a personal air vehicle.

 Matrix comes in two versions. One supports passenger travel. The other focuses on heavy cargo transport. The passenger model can carry up to 10 people, which is well above the current industry norm. That added capacity matters. It improves efficiency, lowers cost per passenger and makes commercial operations far more realistic.

Why battery technology drives flying car progress

Size alone does not make Matrix possible; power does. AutoFlight receives backing from CATL, the world’s largest electric vehicle battery manufacturer. CATL holds a significant stake in the company and supports battery research and development.


Battery performance affects nearly every part of electric flight. It shapes range, safety margins and payload capacity. Stronger batteries allow aircraft to fly farther while carrying more weight. In flying cars, that difference often separates experimental designs from aircraft ready for real-world service.


The size of Matrix sets it apart, with a wide wingspan and passenger capacity that pushes electric air travel beyond small prototype designs. (AutoFlight)

China builds rules for the low-altitude economy

Matrix did not appear by accident. China is actively building a regulatory framework for the low-altitude economy. That includes standards for aircraft design, safety systems, air traffic control and supporting infrastructure. Officials plan to introduce baseline rules by 2027, with more than 300 detailed standards expected by 2030. These rules are meant to prepare cities for flying cars, cargo aircraft and air taxi services. While many countries still debate how electric air travel should work, China is already laying the foundation.

Cargo flights paved the way for passenger approval

Before shifting focus to passengers, AutoFlight proved itself with cargo. Its earlier aircraft, CarryAll, received full certification in China for design, production and airworthiness. It also completed a real-world cargo flight between two cities, covering about 100 miles in roughly one hour. That flight demonstrated practical use beyond test environments. It also helped build trust with regulators, which plays a critical role in approving passenger aircraft. Today, passenger travel has become the company’s main focus. About 70 percent of AutoFlight’s total orders involve passenger aircraft. Certification is still underway, but company leaders expect approval within one to two years. Orders are already being accepted for future delivery.


Flying cars like Matrix point to a future where short-distance air travel could ease congestion and reshape how cities move people and cargo. (AutoFlight)

How Matrix compares to smaller flying cars like Pivotal

Matrix represents one side of the flying car future. Smaller aircraft such as the Pivotal flying car, which we have covered previously, focus on personal flight and short-range travel. These designs emphasize simplicity, individual control and compact size. Matrix takes the opposite approach. It focuses on shared passenger travel and heavy cargo transport at scale. Together, these models show how the flying car market is splitting into two paths. One is personal air mobility. The other is commercial electric aviation. Both paths matter, but they solve very different transportation problems.

When passenger flying car flights could begin in China

Industry experts see 2026 as a pivotal year for flying cars in China. Several companies plan to begin deliveries, and China could see its first paid passenger flying car flights. New infrastructure, such as landing pads and charging stations, will support this growth. AutoFlight is also looking beyond China. Demand is strong in regions with limited transportation networks. Island nations, mountainous areas and remote regions stand out. The company sees Northeast Asia, Southeast Asia and the Middle East as key markets.

What this means for you

Flying cars still feel futuristic, but they are moving closer to everyday use. Early flights will likely focus on specific routes, cargo delivery, emergency services and premium passenger travel. Over time, costs could fall to levels similar to high-end ride services on the ground. Even if you never board one soon, this technology will shape logistics, emergency response and how cities plan transportation. It also shows how quickly electric aviation can advance when regulation, manufacturing and demand align.


Kurt’s key takeaways

Matrix is more than a big flying machine. It shows how fast flying car ideas are turning into aircraft that can actually be certified and used. China is moving from concepts to real operations step by step. Widespread use will take time, but the trend is clear. Electric flight is becoming practical, scalable and much harder to ignore.




Apple starts testing end-to-end encrypted RCS messages on iPhone

Apple is starting to test end-to-end encrypted (E2EE) RCS messages with the developer beta of iOS 26.4 released Monday. Apple announced plans last year to support the feature, and once fully available, it will let iPhone and Android users send encrypted RCS messages to each other across platforms.

However, with this initial implementation, Apple is only testing RCS encryption between Apple devices. It’s “not yet testable with other platforms,” Apple says. The company also doesn’t plan to ship E2EE RCS messages with iOS 26.4; the feature will actually ship publicly in a “future update,” Apple says.

RCS messages significantly improve the experience of texting between iPhone and Android devices, but cross-platform encryption has been a major missing piece. The GSM Association, which helps develop RCS, announced in September 2024 that it was working on E2EE messages as part of the “next major milestone” for the RCS Universal Profile, and Apple said in March 2025 that it would support E2EE RCS messages on iOS, iPadOS, macOS, and watchOS in “future software updates.”
