Digital platforms, and the training data sets for AI models, are kept clean by humans. These content moderators must review and, if necessary, delete material, and they often suffer from what they see there. OpenAI now hopes that GPT-4 can take over this work; the AI-assisted moderation system is available via API.
“Content moderation plays a critical role in keeping digital platforms clean,” reads a blog post from OpenAI. A GPT-4-based content moderation system, the company says, lets platforms roll out changes to their guidelines much faster. In addition, GPT-4 can understand rules and recognize nuances in lengthy policy guidelines. “We believe this offers a more positive vision for the future of digital platforms, where AI can help moderate online traffic according to platform-specific policies and relieve the psychological burden of large numbers of human moderators.”
Understanding context
The difficulty of content moderation lies in understanding context and in dealing with constantly new cases. Large platform operators such as Meta and Google have long used machine learning to filter out dangerous or criminal content. Ultimately, however, humans always have to step in when cases are unclear. According to OpenAI, this process is slow and can cause mental stress for moderators. The company says its own language models are already able to take on the task: “This system reduces the process of developing and customizing content policies from months to hours.”
(Image: OpenAI)
To use a moderation system from OpenAI, the guidelines and a set of examples are required. GPT-4 reviews these and can then get started. The language model can also justify its decisions. It learns from the process and refines the categories accordingly. Optionally, to handle large amounts of data at scale, GPT-4's predictions can be used to train a much smaller model, OpenAI writes.
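The workflow described above, in which GPT-4 is given a policy, labels a piece of content, and justifies its verdict, can be sketched in Python. This is a minimal illustration, not OpenAI's actual interface: the policy text, the function names, and the JSON reply shape are all assumptions made here for demonstration.

```python
import json

# Illustrative, heavily abbreviated policy (categories are hypothetical).
POLICY = """\
K0: no prohibited category applies
K4: content advises or instructs how to commit violence
"""

def build_moderation_prompt(policy: str, content: str) -> list:
    """Assemble Chat Completions-style messages asking the model to
    label content against a platform policy and justify the label."""
    return [
        {"role": "system",
         "content": "You are a content moderator. Apply the policy below "
                    "and reply with JSON: {\"label\": ..., \"reason\": ...}.\n"
                    + policy},
        {"role": "user", "content": content},
    ]

def parse_verdict(raw: str) -> tuple:
    """Extract the label and its justification from the model's reply."""
    data = json.loads(raw)
    return data["label"], data["reason"]

# The actual API call would look roughly like this (requires the openai
# package and an API key; not executed here):
#
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(
#       model="gpt-4",
#       messages=build_moderation_prompt(POLICY, post))
#   label, reason = parse_verdict(resp.choices[0].message.content)

# Offline demonstration with a canned model reply:
reply = '{"label": "K0", "reason": "No policy category applies."}'
print(parse_verdict(reply))  # ('K0', 'No policy category applies.')
```

Because the verdict comes back as structured JSON with a reason attached, the justification GPT-4 gives can be compared against a human expert's label, which is how policy ambiguities would surface and the guidelines get refined.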
However, OpenAI itself is reported to have hired numerous clickworkers in Kenya as cheap labor to filter unpleasant content out of the training data for its AI models. They are said to have received less than two euros per hour to assess problematic content.
In Germany, content moderators for social media platforms have turned to the Bundestag to draw attention to their situation. They reported having to review posts containing extreme violence and said that psychological support was lacking.
(emw)