Meta announced Thursday that it will begin deploying a more advanced AI system to handle content enforcement as it plans to cut back on third-party vendors. Content enforcement tasks include detecting and removing content related to terrorism, child exploitation, drugs, and fraud.
The company says that once these more advanced AI systems can consistently outperform its current content enforcement methods, it will roll them out across its apps while reducing its reliance on third-party vendors for content enforcement.
“There will still be people reviewing content, but these systems will be able to take on tasks better suited to technology, such as repetitive reviews of graphic content and areas where adversaries constantly change their tactics, such as illegal drug sales and fraud,” Meta explained in a blog post.
Meta believes these AI systems can detect violations more accurately, prevent fraud more effectively, respond more quickly to real-world events, and reduce over-policing.
The company says initial testing of its AI system has been promising, detecting twice as much violating adult sexual solicitation content as its review team while also reducing error rates by more than 60%. Meta also says the system can identify and block impersonation accounts targeting celebrities and other public figures, and can detect signals such as logins from new locations, password changes, and profile edits to stop account takeovers.
Additionally, Meta said the system can identify and mitigate about 5,000 scams per day in which fraudsters try to trick people into divulging their login information.
“Experts design, train, oversee, and evaluate our AI systems, measure their performance, and make the most complex and impactful decisions,” Meta said in the blog post. “For example, people will continue to play a key role in how we make the riskiest and most important decisions, such as final account deactivations and reporting to law enforcement.”

The move comes as Meta has loosened its content moderation rules over the past year or so following President Donald Trump’s second inauguration. Last year, the company ended its third-party fact-checking program in favor of an X-like community notes model. It also said restrictions on “topics that are part of mainstream discourse” would be lifted and users would be encouraged to take a “personalized” approach to political content.
The announcement also comes as Meta and other Big Tech companies face several lawsuits seeking to hold social media giants liable for harming children and young users.
Meta also announced Thursday that it is launching the Meta AI Support Assistant, which will give users access to 24/7 support. The assistant is rolling out globally in the Facebook and Instagram apps on iOS and Android, and within the Facebook and Instagram Help Center on desktop.
