A ruling from California is drawing global attention. A 20-year-old claimant has prevailed in court against Meta, the company founded by Mark Zuckerberg that operates Instagram and Facebook, and against Google as the operator of YouTube. A jury found that the platforms had deliberately employed addictive design features and, in doing so, caused psychological harm.
The claimant was awarded roughly six million US dollars in damages. Meta itself had previously conducted internal research documenting such harms, but the company withheld its findings from the public.
Historic judgment
Specialists regard the ruling as historic, as it marks a shift in approach. For the first time, social networks have been held liable not for individual pieces of content, but for the design of their products. Until now, platforms in the United States have relied on broad protections under Section 230 of the Communications Decency Act, which shields them from liability for content posted by users. The claimant’s case exploited a gap in that shield by targeting not content, but the way the platforms function.
The jury accepted the argument that features such as endless scrolling, algorithmic recommendations and the automatic playback of videos are designed to keep users engaged for as long as possible, thereby encouraging addictive behaviour. The line of reasoning is familiar. Comparable arguments were advanced in earlier litigation against the tobacco industry, where products were alleged to have been designed to foster dependency despite known risks.
Concrete consequences for Meta
The ruling carries such force because it could trigger thousands of similar claims. The associated financial risk could run into billions of dollars and may pose a serious challenge even for a company of Meta’s scale. Several changes now appear not only likely but necessary. Meta, along with many other technology firms, will find it difficult to avoid redesigning its algorithms. The logic of recommendation systems will have to be reconsidered.
Content that amplifies emotions such as insecurity or fear is particularly problematic. Such material tends to increase time spent on the platform, a counter-intuitive effect that is nevertheless well documented. A review of the literature by Italian paediatricians links digital addiction in children to depression, dietary and other psychological problems, sleep disorders, dependency, anxiety, sexual and behavioural problems, body image concerns, reduced physical activity, online grooming, impaired vision, headaches and dental problems.
Researchers in Germany, Sweden and the Netherlands have also associated intensive social media use among adolescents with statistically significant changes in the development of cerebellar volume. This region of the brain is involved, among other things, in emotional regulation. Heavy use of social media could therefore influence the brain’s physical development, a thesis supported by a growing body of evidence.
The end of autoplay and infinite scroll?
A reduction in the addictive pull that repeatedly draws users back to platforms appears essential. Personalised feeds and push notifications are part of that dynamic. Proposals to curb them already exist. Chronological feeds, as used in the early days of social networks, could replace algorithmic sorting. That would allow a natural sense of saturation to emerge. Once users reach content they have already seen, interest tends to fade. Built-in stop signals could indicate that it is time to take a break. Some also see potential in setting time limits for minors.

A central criticism in the case was that platforms no longer allow for natural pauses. Whether through regulation or voluntary change, autoplay may no longer remain the default. Experts have warned of the risks posed by infinite scroll ever since the feature was introduced. Limiting endless feeds – and perhaps introducing usage reminders – could provide relief. Yet such measures would directly affect the business model of the platforms, as reduced usage typically correlates with lower advertising revenue.
The question of body image
One particularly sensitive issue is the role of social media in shaping ideals of beauty. The case made clear that algorithmic systems amplify content presenting unrealistic body images, thereby creating pressure to compare and exploit insecurities. The claimant described experiences of body dysmorphia and self-doubt that were intensified by social media.
Beauty ideals themselves cannot be measured. The model Twiggy would scarcely have matched the aesthetic preferences of Peter Paul Rubens; that alone shows how such ideals change over time. The problem, as the case suggests, lies less in the ideal itself than in the way it is presented. When algorithms filter out everything else and elevate unrealistic images into supposedly universal standards, the pressure of comparison takes over, and ordinary adolescent insecurity takes on a life of its own.
In the wake of the case, legislative initiatives such as the ‘Kids Online Safety Act’ gain renewed importance. At the same time, liberal critics warn of risks to freedom of expression and of excessive regulation stifling digital innovation. Both concerns are valid. Where the protection of minors is at stake, however, safeguarding takes precedence. The ruling against Meta and Google is unlikely to remain an isolated case. It may fundamentally reshape parts of the internet, above all the social media sector. The era in which platforms could hide behind claims of technological neutrality may be drawing to a close. Where the path ultimately leads remains to be seen.