One reason so many are quitting: We want control over our lives again
The pandemic, and the challenges of balancing life and work during it, have stripped us of agency. Resigning is one way of regaining a sense…
Thought Leader: Amy Cuddy
Social media is entering one of the most consequential periods of transformation since the rise of the first algorithmic feeds. Platforms are no longer operating as traditional social networks. They are becoming AI-driven communication systems that blend user-generated content, automated creation tools, and increasingly complex safety infrastructures. This evolution is reshaping how billions of people communicate, but it also introduces significant governance challenges as regulators, companies, and civil society attempt to keep up with a rapidly changing technological environment.
At the same time, new compliance obligations, the growth of AI-specific safety roles, and the collapse of a unified “public feed” have created an online ecosystem with far more fragmentation and vastly different expectations across regions. The sector is modernising quickly, but the work is uneven, and the stakes continue to rise.
Across platforms, safety risks are scaling faster than the systems designed to manage them. Generative AI now allows individuals to create persuasive deepfakes, synthetic news, and realistic impersonations with minimal expertise. This has increased the volume, velocity, and sophistication of harmful content, from harassment to coordinated influence campaigns.
Yet existing moderation systems still struggle with basic coverage. AI detection tools often underperform on real-world data, frequently missing subtle or context-specific harms. In low-resource languages, accuracy gaps remain especially wide. These disparities have the most significant impact on regions already facing political volatility or weak institutional oversight.
Meanwhile, the structure of social media itself is changing. Fragmented feeds, personalised recommendation systems, and niche community spaces make harmful content less visible to platforms while allowing dangerous narratives to spread rapidly in localised pockets. This fragmentation challenges the long-standing expectation that platforms can govern a single public sphere.
Against this backdrop, trust in platforms continues to erode. Users expect safer environments, yet they also increasingly question the fairness and transparency of automated enforcement systems. This combination fuels a growing legitimacy crisis for social media companies worldwide.
2025 has brought a sharp increase in regulatory scrutiny. The EU’s Digital Services Actcontinues to shape global safety approaches, particularly in terms of transparency, algorithmic accountability, and meaningful human involvement. The UK Online Safety Act introduces additional requirements regarding children’s safety, risk assessments, and compliance reporting. India’s IT Rules and forthcoming AI regulations are creating another layer of obligations in one of the world’s largest digital markets.
In the United States, state-level efforts are driving significant fragmentation. Laws in Utah, Tennessee, California, and Mississippi impose age verification, parental consent, or content access mandates. Several of these laws remain tied up in court, yet they signal a growing interest among policymakers in controlling how young people access digital spaces. Other countries, including Denmark, Australia, and Turkey, have also considered partial or temporary bans on specific platforms, illustrating the growing tension between national sovereignty and globally connected social media systems.
These developments are changing the composition of trust and safety teams. As Alice Hunsberger writes, data from Trust and Safety Jobs indicates that one in five new roles in 2025 will be in policy and compliance, with a significant percentage explicitly focused on AI oversight. Many of these positions are being filled by compliance professionals rather than long-time T&S practitioners, reflecting the profound impact of regulation on the field.
Over the last year, we have seen a wave of innovation aimed at helping platforms adapt to new safety and compliance expectations. Companies are increasingly adopting advanced AI tools to classify content, evaluate risk, and flag potential violations more efficiently. Hybrid humanAI systems are becoming the norm, although the balance between them remains a topic of debate as organisations navigate the need for accuracy, fairness, and culturally informed decision-making.
One promising development is the rise of build-your-own moderation and alignment tools, such as Zentropi, which allow organisations to create and deploy customised classifiers in minutes rather than months. These systems provide teams with the flexibility to adjust safety criteria, test interventions, and scale specialised detection models without relying exclusively on platform-built solutions.
At the same time, expert networks are becoming more central. Platforms are increasingly tapping external specialists through advisory collectives and services, such as Duco, to support high-risk moments, including elections, geopolitical crises, and regional flashpoints. This expert-in-the-loop model is also becoming a formal part of regulatory compliance in jurisdictions that require human review of certain types of decisions.
Despite these advancements, major challenges persist. Automated systems continue to struggle with nuance, particularly in languages with limited training data. Safety failures continue to have a disproportionate impact in the Global South, where political incentives to manipulate online narratives remain high. As generative AI tools become more commonplace, platforms must now contend with AI-created content that appears and behaves like authentic speech yet carries significantly different risk profiles.
The economics of safety are also shifting. Companies are increasingly recognising that safer environments lead to improved retention, engagement, and long-term revenue. Evidence from gaming and social platforms indicates a clear correlation between reduced exposure to harassment and increased user participation. However, the incentive to automate for cost savings creates tension when automation is less reliable in sensitive contexts.
The modernisation of trust and safety now depends on whether the industry can develop systems that keep pace with the rapidly advancing capabilities of AI. Generative models are reshaping how content is produced, how communities form, and how influence spreads. They are also transforming the regulatory environment, workforce expectations, and the technical foundations of safety operations. The next phase of online safety requires not only stronger safeguards but new frameworks that treat AI development and platform governance as inseparable.
Several priorities stand out:
Strengthening AI coverage in low-resource environments.
Platforms cannot rely on generic models trained primarily on English and high-resource languages. Harms in the Global South, where political manipulation and ethnic tensions run high, will increasingly involve AI-generated narratives and synthetic media. Building safety systems that can understand local languages, cultural nuance, and regional political dynamics is essential for preventing AI-amplified harm at scale.
Integrating safety into AI product design.
As social platforms become AI companies, safety must move upstream into model training, evaluation, and deployment. This includes bias testing, adversarial red-team exercises, synthetic content detection, and guardrail design before models are deployed to users. Safety cannot remain a downstream function evaluating outputs after the fact. AI-era product development requires that trust and safety teams operate alongside engineering from the earliest design stages.
Expanding participatory governance for AImediated environments.
AI systems increasingly serve as gatekeepers for what people see, what spreads, and what gets removed. That power demands broader participation in governance. Advisory councils, youth boards, expert networks, and independent oversight bodies can provide human judgment where algorithms fall short. These mechanisms ensure that the values encoded into AI systems reflect diverse perspectives rather than solely platform incentives.
Building AI-ready data infrastructure.
AI safety depends on high-quality datasets for training, benchmarking, and auditing. Yet access to real-world data is shrinking due to privacy concerns, competitive pressures, and legal risk. Without shared, privacy-preserving datasets and clearer exemptions for research, progress in AI safety will slow. Developing secure, governed data-sharing frameworks is now foundational to evidence-based safety innovation.
Aligning AI development with democratic principles and human rights.
AI-driven feeds shape information access, public discourse, and political influence. As generative systems reshape online speech, platforms will need to demonstrate transparency in how models behave, document safety mitigations, and provide pathways for accountability. This alignment is not only a regulatory expectation; it is essential for preserving user trust and preventing AI systems from deepening social fragmentation.
Social media’s transformation into a deeply personalised, AI-infused communication ecosystem demands a new approach to safety. The question is no longer whether platforms will adopt AI, but whether they can do so responsibly in ways that support user trust, democratic resilience, and global equity.
The past year has shown meaningful progress. It has also revealed how quickly risks can scale when technology evolves faster than safeguards. The next era of online safety will depend on integrating expertise, strengthening compliance, expanding global coverage, and engaging communities in the governance of digital spaces.
Platforms face a choice about the kind of environments they want to build. Safer systems will not eliminate all harm, but they can reduce its reach, increase user resilience, and create healthier spaces for expression, connection, and public life. The decisions made now will determine whether social media continues to serve as a force for community and participation or becomes increasingly fragmented and unstable in the years ahead.
Katie Harbath is a leading voice on how technology shapes democracy. Drawing from more than a decade at Facebook, where she built and led the global elections team, Katie offers rare, firsthand insight into the challenges and opportunities at the intersection of tech, policy, and civic engagement. Her engaging presentations help audiences understand how to navigate the evolving digital landscape while leveraging innovation to strengthen communication, trust, and participation. To bring Katie to your next speaking event, contact WWSG.
One reason so many are quitting: We want control over our lives again
The pandemic, and the challenges of balancing life and work during it, have stripped us of agency. Resigning is one way of regaining a sense…
Thought Leader: Amy Cuddy
Scott Gottlieb: How well can AI chatbots mimic doctors in a treatment setting?
This is an Op-ed by WWSG exclusive thought leader, Dr. Scott Gottlieb. Many consumers and medical providers are turning to chatbots, powered by large language…
Thought Leader: Scott Gottlieb
Sara Fischer: The AI-generated disinformation dystopia that wasn’t
This piece is by WWSG exclusive thought leader, Sara Fischer. Amid the craziest news cycle in recent memory, AI-generated deepfakes have yet to become the huge truth…
Thought Leader: Sara Fischer