
TIME reported that OpenAI, the artificial intelligence development company, relied on outsourced Kenyan workers paid around $2 per hour to filter harmful content for ChatGPT.
ChatGPT, released last November, is one of the most remarkable innovations of 2022; within a week of launch it had more than a million users. But according to TIME’s article, this innovation has a dark side.
According to TIME, harmful content was screened out of the chatbot by Kenyan workers paid approximately $2 per hour. To teach the artificial intelligence to intervene, these workers read and labeled texts describing child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest.

To create a safe environment for their users, OpenAI followed in the footsteps of social media companies such as Facebook, which had already demonstrated that it was possible to build AIs that could detect toxic language such as hate speech, and help remove it from their platforms.
The premise, according to TIME, was simple: feed an AI labeled examples of violence, hate speech, and sexual abuse, and the tool could learn to detect those forms of toxicity in the wild. That detector would be built into ChatGPT to catch toxicity echoed from its training data and filter it out before it reached the user. It could also help remove harmful text from the training datasets of future AI models.
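To make that premise concrete, here is a minimal sketch of such a detector in Python. This is purely illustrative and not OpenAI’s actual system: the library choice (scikit-learn), the logistic-regression model, and the tiny labeled dataset are all my assumptions; in reality the labels came from thousands of passages annotated by human workers.

# Illustrative sketch of the premise TIME describes: train a classifier
# on human-labeled examples of toxic vs. benign text, then score new text.
# NOT OpenAI's actual moderation system; dataset and model are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "placeholder passage describing violence",     # labeled toxic by a human
    "placeholder passage containing hate speech",  # labeled toxic by a human
    "what a lovely sunny afternoon",               # labeled benign
    "the recipe calls for two eggs",               # labeled benign
]
labels = [1, 1, 0, 0]  # 1 = toxic, 0 = benign

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

# The trained detector can then score new text before it reaches a user.
print(detector.predict_proba(["some new model output"])[0][1])  # P(toxic)

The essential shape is the same at any scale: human-labeled examples go in, and a toxicity score for unseen text comes out.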
To obtain those labels, OpenAI began sending tens of thousands of text snippets to an outsourcing firm in Kenya in November 2021. That partner, Sama, calls itself an “ethical AI company” and employs workers in Kenya, Uganda, and India to label data for companies like Google, Meta, and Microsoft.

TIME interviewed four Sama employees who worked on the project, all of whom spoke anonymously. Here are some statements from TIME’s article.
“Despite the foundational role played by these data enrichment professionals, a growing body of research reveals the precarious working conditions these workers face,” says the Partnership on AI, a coalition of AI organizations to which OpenAI belongs. “This may be the result of efforts to hide AI’s dependence on this large labor force when celebrating the efficiency gains of technology. Out of sight is also out of mind.”
An OpenAI spokesperson confirmed in a statement to TIME that Sama employees in Kenya worked to remove toxic data from training datasets for tools like ChatGPT. “Our mission is to ensure artificial general intelligence benefits all of humanity, and we work hard to build safe and useful AI systems that limit bias and harmful content,” the spokesperson said. “Classifying and filtering harmful [text and images] is a necessary step in minimizing the amount of violent and sexual content included in training data and creating tools that can detect harmful content.”

Here we see workers from the Global South laboring in the background of a system assumed to be entirely technological. Beyond the mere fact of their involvement, the low wages and the psychological toll of the work deserve attention. To make ChatGPT safe for its users, these workers were exposed, without any filter of their own, to content describing rape, violence, and suicide. Many employees likely experienced, or will experience, traumatic effects in the process. As one worker said of reviewing disturbing content: “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.”
TIME’s in-depth investigation also covered the contracts between OpenAI and Sama and the workers’ conditions and pay. According to documents reviewed by TIME, OpenAI signed three contracts worth approximately $200,000 in total with Sama in late 2021 to label textual descriptions of sexual abuse, hate speech, and violence. A few dozen workers were divided into three teams, one for each subject.
As for the mental strain and trauma mentioned above, a Sama spokesperson said workers had access to psychologists and trauma therapists. An OpenAI spokesperson added: “we take the mental health of our employees and contractors very seriously. Our previous understanding was that [at Sama] wellness programs and 1:1 counseling were offered, workers could opt out of any work without penalization, exposure to explicit content would have a limit, and sensitive information would be handled by workers who were specifically trained to do so.”

One note before we move on to the discussion of wages: Kenya does not have a minimum wage limit.
According to a Sama spokesperson, workers were asked to label 70 text passages per nine-hour shift, not the up to 250 that workers had described, and they could earn between $1.46 and $3.74 per hour after taxes. The spokesperson would not say which job roles paid at the top of that range. Of the hourly rate OpenAI paid Sama, the spokesperson said: “The $12.50 rate for the project covers all costs, such as infrastructure expenses, as well as salary and benefits for the associates and their fully-dedicated quality assurance analysts and team leaders.”
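Taking only the figures quoted above at face value, a quick back-of-the-envelope calculation (my own, not TIME’s or Sama’s) gives a sense of scale:

# Back-of-the-envelope math using only the figures quoted in this article.
# Assumption: the 70-passage quota and the $1.46-$3.74 hourly range are
# taken at face value; real shifts include breaks, training, etc.
QUOTA_PER_SHIFT = 70                  # passages per nine-hour shift (Sama's figure)
SHIFT_HOURS = 9
HOURLY_LOW, HOURLY_HIGH = 1.46, 3.74  # after-tax hourly pay range, per Sama
PROJECT_RATE = 12.50                  # hourly project rate quoted by the spokesperson

passages_per_hour = QUOTA_PER_SHIFT / SHIFT_HOURS        # ~7.8 passages/hour
pay_per_passage_low = HOURLY_LOW / passages_per_hour     # ~$0.19 per passage
pay_per_passage_high = HOURLY_HIGH / passages_per_hour   # ~$0.48 per passage
worker_share_low = HOURLY_LOW / PROJECT_RATE             # ~12% of the rate
worker_share_high = HOURLY_HIGH / PROJECT_RATE           # ~30% of the rate

print(f"{passages_per_hour:.1f} passages per hour")
print(f"${pay_per_passage_low:.2f}-${pay_per_passage_high:.2f} per passage")
print(f"{worker_share_low:.0%}-{worker_share_high:.0%} of the project rate reaches the worker")

In other words, by Sama’s own numbers, a worker took home roughly 19 to 48 cents per disturbing passage read, a fraction of the $12.50 hourly project rate.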
Let’s step back and look at artificial intelligence more broadly: companies still need humans to label and train data for AI systems. AI ethicist Andrew Strait tweeted, “They’re impressive, but ChatGPT and other generative models are not magic – they rely on massive supply chains of human labor and scraped data, much of which is unattributed and used without consent.”
AI expert Dave Monlander also tweeted about why he didn’t want to work for OpenAI: “The job would consist in working 40 hours a week solving python puzzles, explaining my reasoning through extensive commentary, in such a way that the machine can, by imitation, learn how to reason. ChatGPT is way less independent than people think.”
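To picture the kind of work that tweet describes, here is an entirely hypothetical example of what one such training sample might look like: a human solves a small Python puzzle and narrates the reasoning so a model can learn to imitate it. The puzzle and commentary are my invention, not an actual OpenAI training artifact.

# Hypothetical training sample of the kind described in the tweet above.
# Puzzle: return the second-largest distinct value in a list.
def second_largest(values):
    # Reasoning: duplicates would let the maximum count as both first and
    # second largest, so we deduplicate with set() first.
    distinct = set(values)
    # Reasoning: with fewer than two distinct values there is no answer,
    # so we fail explicitly rather than return something misleading.
    if len(distinct) < 2:
        raise ValueError("need at least two distinct values")
    # Reasoning: sorting costs O(n log n) but keeps the code obvious;
    # a single O(n) pass would be faster if speed mattered.
    return sorted(distinct)[-2]

print(second_largest([3, 1, 4, 1, 5]))  # -> 4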
I suppose we can say that artificial intelligence still has a long way to go before it reaches the level we dream of.
Note: All images are AI-generated by the author of this text.