Few-Shot Learner is pretrained on a firehose of billions of Facebook posts and images in more than 100 languages. The system uses that data to build up an internal sense of the statistical patterns of Facebook content. It is then tuned for content moderation with additional training on posts and imagery labeled in previous moderation projects, along with simplified descriptions of the policies those posts breached.
After that preparation, the system can be directed to find new types of content, such as to enforce a new rule or expand into a new language, with much less effort than previous moderation models, says Cornelia Carapcea, a product manager on moderation AI at Facebook.
More conventional moderation systems might need hundreds of thousands or millions of example posts before they can be deployed, she says. Few-Shot Learner can be put to work using just dozens—the “few shots” in its name—combined with simplified descriptions or “prompts” of the new policy they relate to.
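The few-shot idea can be illustrated with a toy sketch. This is not Facebook's system; it stands in a large pretrained encoder with a simple bag-of-words vector, and the policy labels and example posts are invented for illustration. The point it shows is the workflow: a handful of labeled examples per policy, rather than hundreds of thousands, is enough to build a classifier on top of a shared representation.

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy 'embedding': a bag-of-words count vector. A real system
    would use a large multilingual pretrained model instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def few_shot_centroids(examples):
    """Build one centroid per label from just a few labeled posts."""
    centroids = {}
    for text, label in examples:
        centroids.setdefault(label, Counter()).update(embed(text))
    return centroids

def classify(post, centroids):
    """Assign the label whose centroid is closest to the post."""
    return max(centroids, key=lambda lbl: cosine(embed(post), centroids[lbl]))

# The "few shots": a handful of examples per hypothetical policy label.
examples = [
    ("buy cheap watches click this link now", "spam"),
    ("limited offer click here free prize", "spam"),
    ("had a great hike with the family today", "benign"),
    ("congrats on the new job so proud of you", "benign"),
]
centroids = few_shot_centroids(examples)
print(classify("click now for a free prize link", centroids))  # prints: spam
```

In this sketch the heavy lifting a pretrained model would do is reduced to word counting; what carries over is that the per-policy training signal is only a few examples deep.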
“Because it’s seen so much already, learning a new problem or policy can be faster,” Carapcea says. “There’s always a struggle to have enough labeled data across the huge variety of issues like violence, hate speech, and incitement; this allows us to react more quickly.”
Few-Shot Learner can also be directed to find categories of content without showing it any examples at all, just by giving the system a written description of a new policy—an unusually simple way of interacting with an AI system. Carapcea says results are less reliable this way, but the method can quickly suggest what would be swept up by a new policy, or identify posts that can be used to further train the system.
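The zero-shot mode, where a written description alone directs the system, can be sketched the same way. Again this is an illustrative toy, not Facebook's implementation: the policy names and descriptions are hypothetical, and a bag-of-words vector stands in for a pretrained multilingual encoder. The mechanism shown is that a post is scored against each policy's prose description, with no labeled examples at all.

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy stand-in for a pretrained multilingual text encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical policy "prompts": written descriptions, zero labeled examples.
policies = {
    "vaccine discouragement": "posts that discourage people from getting a covid vaccine",
    "violence incitement": "posts that call for or threaten violence against people",
}

def zero_shot(post):
    """Rank policies by similarity of the post to each written description."""
    scores = {name: cosine(embed(post), embed(desc))
              for name, desc in policies.items()}
    return max(scores, key=scores.get)

print(zero_shot("do not let them give your kids the covid vaccine"))
```

As the article notes, matching posts to a bare description is less reliable than training on examples, but it makes a first pass over a new policy cheap, which is why it is useful for surfacing candidate posts to label.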
The impressive capabilities of giant AI creations like Facebook's, and the many unknowns about them, recently prompted Stanford researchers to launch a center to study such systems, which they call "foundation models" because they appear set to become an underpinning of many tech projects. Large machine-learning models are being developed for uses not only in social networks and search engines, but also in industries such as finance and health care.
Percy Liang, the Stanford center’s director, says Facebook’s system appears to show some of the impressive power of these new models, but will also exhibit some of their trade-offs. It’s exciting and useful to be able to direct an AI system to do what you want just with written text, as Facebook says it can with new content policies, Liang says, but this capacity is poorly understood. “It’s more of an art than a science,” he says.
Liang says that Few-Shot Learner’s speed also may have drawbacks. When engineers don’t have to curate as much training data, they sacrifice some control and knowledge of their system’s capabilities. “There’s a bigger leap of faith,” Liang says. “With more automation, you have less potential oversight.”
Carapcea says that as Facebook develops new moderation systems, it also develops ways to check their performance for accuracy or bias.