Meta Ray-Ban Smart Glasses: Private Videos Reviewed by Kenyan Workers for AI Training

A shocking investigation by Swedish newspapers has uncovered that Meta's Ray-Ban smart glasses are capturing users' most private moments, and this footage is not staying confidential. Instead, it is being manually reviewed by a hidden workforce of thousands in Kenya to train the company's next generation of artificial intelligence (AI). This revelation raises serious questions about privacy and data ethics in the age of wearable technology.

Investigation Details: Swedish Newspapers Expose Data Labeling Operation

According to reports from the Swedish newspapers Göteborgs-Posten and Svenska Dagbladet, video recorded by Meta's Ray-Ban smart glasses is being watched by employees of Sama, a technology contractor based in Nairobi, Kenya. These workers, known as data annotators, spend ten-hour shifts reviewing real-world footage as part of a massive data-labeling operation. Their job is to teach AI to recognize objects, people, and environments by watching and categorizing clips.

The footage includes highly sensitive content, such as people using the toilet, undressing, and engaging in intimate acts, often recorded without the subjects' knowledge. One worker described seeing "everything — from living rooms to naked bodies," emphasizing that these are real people unaware their private moments are being transmitted and reviewed globally.


Workers' Testimonies: Deeply Troubling and Sensitive Content

Data annotators at Sama have expressed deep concern over the nature of the videos they handle. They report that individuals in the footage appear entirely unaware of being recorded, with scenarios like partners in bathrooms or naked bodies captured inadvertently. Workers described clips that could trigger "huge scandals" if leaked, highlighting the extreme sensitivity of the material.

To maintain secrecy, the workplace has strict security measures, including cameras everywhere and bans on personal phones or recording devices. Despite this, workers feel uneasy, as they receive no explanation for why specific clips are selected or whose footage they are watching. One annotator noted, "You understand that it's someone's private life you're looking at, but you're just expected to do the job without questioning."

Additional Tasks: Transcription and Dark Content Handling

Beyond video review, workers also handle transcription tasks, checking whether the AI assistant in the glasses answers users' questions correctly. This involves reviewing chats that can cover any topic, including crime and protests; one worker described some of the material as "very dark things." This dual role underscores the breadth of data being processed and the potential for misuse.

Meta's Response: Data Storage and Privacy Policies

Meta has responded to the investigation through its London-based spokesperson, Joyce Omope. The company stated that captured media is stored on the glasses until it is imported to a phone via the Meta AI mobile app, where it is temporarily cached. The cache clears automatically after import, and users can also clear it manually in the device settings. When live AI features are used, media is processed according to the Meta AI Terms of Service and Privacy Policy.

Meta also clarified that voice recordings and queries are used to improve user experience and develop products. However, this response does not address the ethical concerns raised by the manual review of private footage by external workers.

Implications for Privacy and AI Development

This investigation highlights a critical issue in AI development: the need for massive datasets often comes at the cost of user privacy. As smart glasses and other wearable devices become more prevalent, ensuring that data collection and processing respect individual rights is paramount. The case of Meta's Ray-Ban glasses serves as a stark reminder of the hidden human labor behind AI training and the potential for invasive surveillance.

Moving forward, companies must implement stricter safeguards and transparency measures to protect users from unauthorized data access. Regulatory bodies may need to step in to enforce guidelines that balance technological advancement with ethical considerations, ensuring that private moments remain private in the digital age.
