
Meta to Automate Product Risk Assessments with AI, Raising Concerns
Meta plans to implement an AI-powered system to automate the evaluation of potential harms and privacy risks associated with updates to its products, including Instagram and WhatsApp. According to internal documents reportedly viewed by NPR, the system could handle up to 90% of such assessments.
This move comes as Meta is bound by a 2012 agreement with the Federal Trade Commission (FTC), which requires the company to conduct privacy reviews for its products, evaluating the risks associated with any potential updates. Previously, these reviews were primarily conducted by human evaluators.
Under the new AI-driven system, product teams will be required to complete a questionnaire about their work. The system will then provide an “instant decision,” highlighting AI-identified risks and outlining requirements that the update or feature must meet before launch.
Meta believes this AI-centric approach will enable faster product updates. However, a former Meta executive told NPR that this automation also introduces “higher risks,” as “negative externalities of product changes are less likely to be prevented before they start causing problems in the world.”
In response to these concerns, a Meta spokesperson stated that the company has “invested over $8 billion in our privacy program” and remains committed to delivering “innovative products for people while meeting regulatory obligations.”
The spokesperson further elaborated, “As risks evolve and our program matures, we enhance our processes to better identify risks, streamline decision-making, and improve people’s experience. We leverage technology to add consistency and predictability to low-risk decisions and rely on human expertise for rigorous assessments and oversight of novel or complex issues.”
Meta emphasizes that while AI will handle routine assessments, human experts will continue to oversee complex and novel issues.
