Meta's Ray-Ban smart glasses, marketed as "designed for privacy, controlled by you," are the subject of a proposed U.S. class-action lawsuit filed in San Francisco federal court that accuses Meta and Luxottica of misleading consumers about data handling. Plaintiffs from California and New Jersey claim they relied on the privacy assurances and would not have bought the glasses had they known contractors could access their recordings.
The suit seeks compensation and injunctive relief on behalf of buyers of the more than 7 million units sold in 2025. Swedish investigations revealed that low-paid Sama contractors in Kenya reviewed user footage, including nudity, sex acts, bathroom scenes, and visible credit card details, labeling the data for AI training. Meta states that contractors review only content shared with Meta AI in order to improve features, and that filters such as face blurring anonymize the data, though workers report the safeguards are unreliable.
Users cannot opt out of this pipeline once they use AI features; the relevant terms are buried in fine print. The controversy stems from aggressive marketing that emphasized privacy while concealing human review, raising risks of stalking, extortion, and emotional distress. U.K. regulators are probing transparency and consent, amid broader concerns over wearable surveillance, since the recording light can be obscured. Past Meta biometric settlements underscore the ongoing scrutiny.
Do companies prioritize privacy? History suggests it remains secondary to AI gains, with outsourced labor amplifying leaks across fragmented ecosystems. Even a courtroom win may yield only minor policy tweaks rather than systemic change, as fine-print disclosures persist and sales boom.
USER PRIVACY: AN AFTERTHOUGHT IN TECH'S AI RUSH – CUSTOMERS PAY THE PRICE!
Sanjay Sahay
Have a nice evening.
