Meta has announced that it is rolling out new artificial intelligence systems to improve content enforcement across its platforms. The company also said it plans to reduce its dependence on third-party vendors, relying instead on more advanced internal technology to monitor harmful content.
Content enforcement includes detecting and removing posts related to terrorism, scams, illegal drugs, fraud, and other policy violations. According to Meta, the new AI tools will be introduced gradually across its apps once they prove to be more effective than the current moderation methods.
In a recent update, the company explained that human reviewers will still be involved, but AI systems will handle tasks that require repetitive checking or fast decision-making. These include reviewing graphic material, identifying scam attempts, and tracking illegal activities where offenders frequently change their methods.
Meta stated that the new systems are designed to improve accuracy, respond faster to real-world situations, and reduce the chances of removing content by mistake. According to the company, early testing showed the AI detecting significantly more policy violations than manual review while producing fewer errors.
The company also reported that the technology can better identify fake accounts and impersonation attempts, especially those targeting celebrities or well-known personalities. In addition, the system can detect suspicious activities such as logins from unusual locations, sudden password changes, or unexpected profile updates, helping prevent account takeovers.
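Meta has not published how these signals are weighted or combined. As a purely illustrative sketch, the signals described above could feed a simple rule-based risk score; every name, threshold, and weight below is invented for illustration and does not reflect Meta's actual system:

```python
from dataclasses import dataclass

@dataclass
class LoginEvent:
    country: str            # country the login came from
    changed_password: bool  # password changed right after login
    changed_profile: bool   # unexpected profile update after login

def takeover_risk(event: LoginEvent, usual_countries: set) -> int:
    """Combine the account-takeover signals into a toy risk score.

    Hypothetical heuristic for illustration only; the weights are
    arbitrary and not based on any disclosed Meta logic.
    """
    score = 0
    if event.country not in usual_countries:
        score += 2  # login from an unusual location
    if event.changed_password:
        score += 2  # sudden password change
    if event.changed_profile:
        score += 1  # unexpected profile update
    return score

# A login from a new country followed by a password change scores high:
event = LoginEvent(country="BR", changed_password=True, changed_profile=False)
print(takeover_risk(event, usual_countries={"US"}))  # → 4
```

In practice, production systems typically use learned models over many more signals rather than fixed rules; the sketch only shows how the individual signals mentioned in the announcement could contribute to a single decision.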
According to Meta, the new tools are already stopping thousands of scam attempts every day, including cases where attackers try to steal login details from users.
Despite the increased use of AI, Meta said human experts will continue to supervise the systems, train the models, and make final decisions in complex or high-risk situations. Important cases, such as account appeals or reports to law enforcement, will still require human review.
The announcement comes at a time when the company has been making changes to its moderation policies. Over the past year, Meta has reduced some restrictions on certain types of content and replaced its third-party fact-checking program with a community-based reporting system similar to the model used by other social platforms.
At the same time, major technology companies, including Meta, are facing legal challenges related to user safety, especially concerning the impact of social media on children and teenagers.
Along with the new enforcement systems, Meta also revealed that it is launching an AI-powered support assistant. The assistant will offer users round-the-clock help and is being introduced on Facebook and Instagram for both mobile and desktop users worldwide.
Meta says these changes are part of its long-term plan to make its platforms safer while using technology to handle the growing amount of online content more efficiently.
