‘The power of AI to serve people is undeniable, but so is AI’s ability to feed human rights violations at an enormous scale with virtually no visibility.’
By Brett Wilkins, CommonDreams.org
Noting the ubiquity of artificial intelligence in modern life, the United Nations’ top human rights official on Sept. 15 called for a moratorium on the sale and use of AI systems that imperil human rights until sufficient safeguards against potential abuse are implemented.
“Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times,” UN High Commissioner for Human Rights Michelle Bachelet said. “But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights.”
The former socialist president of Chile added that AI “now reaches into almost every corner of our physical and mental lives and even emotional states. AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online.”
Ms. Bachelet’s remarks came at a Council of Europe hearing on the Pegasus scandal, in which the Israeli firm NSO Group’s spyware was used to target activists, journalists, and politicians worldwide, sparking calls for a global moratorium on the sale and transfer of surveillance technology.
Her comments also came as the Office of the United Nations High Commissioner for Human Rights (OHCHR) published a report analyzing AI’s impacts on privacy and other rights.
According to OHCHR:
The report looks at how states and businesses alike have often rushed to incorporate AI applications, failing to carry out due diligence. There have already been numerous cases of people being treated unjustly because of AI, such as being denied social security benefits because of faulty AI tools or arrested because of flawed facial recognition. The report details how AI systems rely on large data sets, with information about individuals collected, shared, merged, and analyzed in multiple and often opaque ways. The data used to inform and guide AI systems can be faulty, discriminatory, out-of-date, or irrelevant. Long-term storage of data also poses particular risks, as data could in the future be exploited in as yet unknown ways.
“The complexity of the data environment, algorithms, and models underlying the development and operation of AI systems, as well as intentional secrecy of government and private actors, are factors undermining meaningful ways for the public to understand the effects of AI systems on human rights and society,” the report states.
Evan Greer, director of the digital rights group Fight for the Future, said that the new report “echoes the growing consensus among technology and human rights experts around the world” that “artificial intelligence-powered surveillance systems like facial recognition pose an existential threat to the future of human liberty.”
“Like nuclear or biological weapons, technology like this has such an enormous potential for harm that it cannot be effectively regulated, it must be banned,” she continued. “Facial recognition and other discriminatory uses of artificial intelligence can do immense harm whether they’re deployed by governments or private entities like corporations.”
“We agree with the UN report’s conclusion,” added Greer. “There should be an immediate, worldwide moratorium on the sale of facial recognition surveillance technology and other harmful AI systems.”
“Given the rapid and continuous growth of AI, filling the immense accountability gap in how data is collected, stored, shared, and used is one of the most urgent human rights questions we face,” Ms. Bachelet asserted Sept. 15. “We cannot afford to continue playing catch-up regarding AI—allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact.”
“The power of AI to serve people is undeniable, but so is AI’s ability to feed human rights violations at an enormous scale with virtually no visibility,” she added. “Action is needed now to put human rights guardrails on the use of AI, for the good of all of us.”