Seven months ago, Mark Zuckerberg testified to Congress that Facebook Inc. spends billions of dollars to keep harmful content off its platform. Yet some of his own employees had already concluded that those efforts were inadequate to tamp down objectionable speech.

A trove of internal documents shows that Facebook’s Integrity team, the group tasked with stemming the flow of harmful posts, was fighting a losing battle to demote problematic content. The documents were part of disclosures made to the U.S. Securities and Exchange Commission and provided to Congress in redacted form by legal counsel for Frances Haugen, a former product manager who worked on the Integrity team before she left Facebook earlier this year. The redacted versions were obtained by a consortium of news organizations, including Bloomberg News.

Detailed reports show that the social media giant’s algorithms were geared toward keeping people on the platform, where their valuable attention is monetized by showing them ads. As early as 2019, employees working on integrity measures realized that their tools were no match for a system designed to promote content likely to keep people scrolling, liking, commenting and sharing. This internal battle played out across the News Feed, Facebook’s personalized home page fed by the company’s machine-learning algorithms.