
Meta to Automate Product Risk Assessments Using AI, Raising Concerns Over Potential Harms
Meta plans to implement an AI-powered system to automate the evaluation of potential harms and privacy risks for up to 90% of updates to its apps, including Instagram and WhatsApp. The shift is detailed in internal documents reportedly reviewed by NPR.
A 2012 agreement between Facebook (now Meta) and the Federal Trade Commission (FTC) mandates that the company conduct privacy reviews of its products, assessing the risks associated with any updates. Historically, these reviews have been performed largely by human evaluators.
The new AI system will reportedly require product teams to complete a questionnaire about their work. The system will then provide an “instant decision,” highlighting AI-identified risks and outlining requirements that the update or feature must meet before launch.
Meta believes this AI-driven approach will accelerate product updates. However, a former Meta executive, speaking to NPR, expressed concerns that it could also lead to “higher risks,” as “negative externalities of product changes are less likely to be prevented before they start causing problems in the world.”
In a statement, a Meta spokesperson emphasized the company’s investment of “over $8 billion in our privacy program” and its commitment to delivering innovative products while adhering to regulatory obligations.
“As risks evolve and our program matures, we enhance our processes to better identify risks, streamline decision-making, and improve people’s experience,” the spokesperson stated. “We leverage technology to add consistency and predictability to low-risk decisions and rely on human expertise for rigorous assessments and oversight of novel or complex issues.”
The move marks a significant shift in how Meta manages product risk and compliance, weighing faster product updates against concerns that reduced human oversight could leave the impact of rapid changes on users and society less thoroughly scrutinized.