A troubling underbelly of artificial intelligence (AI) advancement is the farming out of data labeling to countries in the Global South.
For very little pay, data labelers sift through raw data, which in many cases consists of graphic and disturbing images, text files and videos, including depictions of suicide, child abuse and sexual assault. They then add labels and contextual information so that a machine learning model can learn from it.
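To make that process concrete, here is a minimal sketch of what a single labeled record might look like, written in Python. Every field name and value below is invented for illustration; none is drawn from any company’s actual labeling schema.

    # Hypothetical labeled record; all fields are invented for illustration.
    record = {
        "item_id": "img_00421",          # identifier for the raw item
        "media_type": "image",           # image, video or text
        "labels": ["graphic_violence"],  # categories applied by the human labeler
        "severity": "high",              # extra context the model can learn from
    }

A model trained on millions of records like this learns to recognize the labeled categories on its own.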
In May of this year, nearly 100 Kenyan tech workers, known as “data labelers,” wrote an open letter to U.S. President Joe Biden, ahead of a state dinner at the White House honoring Kenya’s President William Ruto.
The authors urged the U.S. president to help end “modern-day slavery” in Kenya’s tech sector. The signers of the letter, according to thehill.com, work as data labelers, content moderators and AI workers for American companies like Meta’s Facebook, ScaleAI and OpenAI.
Fast forward to a CBS 60 Minutes segment from November 24, “How Kenya became the ‘Silicon Savannah,’” in which reporter Lesley Stahl interviews Kenyan civil rights activist Nerima Wako-Ojiwa.
During the interview, Wako-Ojiwa explains that tech workers’ desperation in a country with high unemployment has led to a culture of exploitation, with unfair wages and no job security.
“It’s terrible to see just how many American companies are just doing wrong here,” Wako-Ojiwa said. “And it’s something that they wouldn’t do at home (in the U.S.), so why do it here?”
Afterward, the 60 Minutes narrator added, “The familiar narrative is that artificial intelligence will take away human jobs, but right now it’s also creating jobs.
There’s a growing global workforce of millions toiling to make AI run smoothly. It’s grunt-work that needs to be done accurately and fast. To do it cheaply, the work is often farmed out to developing countries like Kenya.”
Echoing Wako-Ojiwa, Stahl, reporting from Kenya, said, “American tech giants like Meta and OpenAI have been contracting middlemen companies to hire Kenyan workers for their operations.” She added, “Those employees tell us that the work was mentally draining and emotionally harmful (and) there’s no job security and the pay was dismal.”
With an unemployment rate of 67 percent among its youth population, the East African country has become one of the main hubs for this kind of grunt-work.
In the open letter, which can be found on foxglove.org.uk, American big tech companies are taken to task for “systemically abusing and exploiting African workers.” The letter charges that U.S. companies “are undermining the local labor laws, the country’s justice system and violating international labor standards.”
In addition, it points out that “working conditions amount to modern day slavery. Any trade-related discussions between the U.S. and Kenya must take into account these abuses and ensure that the rights of all workers are protected.”
In the letter, and again as reported on 60 Minutes, data labelers discussed the horrendous conditions they are required to work under for very little pay.
“We do this work at great cost to our health, our lives and our families. U.S. tech giants export their toughest and most dangerous jobs overseas. The work is mentally and emotionally draining. We scrub Facebook, TikTok and Instagram to ensure these important platforms do not become awash with hate speech and incitement to violence.
We label images and text to train generative AI tools like ChatGPT for OpenAI. Our work involves watching murder and beheadings, child abuse and rape, pornography and bestiality, often for more than 8 hours a day. Many of us do this work for less than $2 per hour,” noted the letter.
The open letter and the 60 Minutes segment spent considerable time discussing the psychological damage of sitting for hours, days and weeks, watching such graphic, horrific images.
“These (American-based) companies do not provide us with the necessary mental health care to keep us safe. As a result, many of us live and work with post-traumatic stress disorder (PTSD). We weren’t warned about the horrors of the work before we started,” explained the letter from the data labelers.
In many cases, the effects of watching graphic images in order to render a system safe result in PTSD, a toll comparable to that of explosive ordnance disposal (EOD), the process by which hazardous explosives are disabled or rendered safe.
That work takes a tremendous psychological toll on the person whose full-time job is facing the inherent danger of disabling a live bomb.
According to Time Magazine, the premise of data labeling is simple: “Feed an AI with labeled examples of violence, hate speech, and sexual abuse, and that tool could learn to detect those forms of toxicity in the wild.
That detector would be built into ChatGPT to check whether it was echoing the toxicity of its training data and filter it out before it ever reached the user. It could also help scrub toxic text from the training datasets of future AI models.”
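To make the mechanism Time describes concrete, here is a minimal, hypothetical sketch of that pipeline in Python: human-labeled examples train a toxicity detector, which then filters text before it reaches the user. The tiny dataset, the choice of scikit-learn’s logistic regression and the 0.5 threshold are all assumptions for illustration, not details of OpenAI’s actual system.

    # Minimal sketch, assuming scikit-learn is installed. Real systems train on
    # millions of human-labeled records like the ones Kenyan workers produce.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Each text was read and labeled by a human: 1 = toxic, 0 = benign.
    texts = [
        "I will hurt you",           # labeled toxic
        "have a wonderful day",      # labeled benign
        "you deserve violence",      # labeled toxic
        "thanks for your help",      # labeled benign
    ]
    labels = [1, 0, 1, 0]

    # Turn text into numeric features, then fit a classifier on the labels.
    vectorizer = TfidfVectorizer()
    features = vectorizer.fit_transform(texts)
    detector = LogisticRegression().fit(features, labels)

    def filter_output(candidate: str) -> str:
        """Block output that the detector scores as likely toxic."""
        score = detector.predict_proba(vectorizer.transform([candidate]))[0][1]
        return "[filtered]" if score > 0.5 else candidate

    print(filter_output("have a wonderful day"))

The detector is only as good as its labels, which is why the human labeling work the letter describes sits underneath every such filter.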
The question remains: at what cost?
Follow @JehronMuhammad on X