
Artificial intelligence, or AI, is becoming increasingly integrated into everyday life.

People interact with AI through digital assistants like Siri, chatbots like ChatGPT, search engine and social media algorithms, online shopping sites such as Amazon, navigation apps like Google and Apple Maps, rideshare applications such as Uber, streaming services, phone autocorrect and facial recognition for unlocking phones.

AI is used across various industries, including banking, housing and healthcare. It is the underlying technology behind credit scoring, and it can determine whether a home or apartment application is accepted or denied.

As beneficial as AI is claimed to be, it is not without faults, including a well-documented history of racial bias and discrimination. While the development of AI and its various uses can be exciting and new, some professionals caution that its benefits come with serious drawbacks.

---

The Final Call spoke to experts in the technology field about the good and bad of AI and what people should know.

What is AI?

Yeshimabeit Milner is the founder and CEO of Data for Black Lives. She defined AI as an advanced form of algorithms and data science. “At the heart of AI and machine learning are algorithms,” she explained, calling it a centuries-old concept. “Essentially, an algorithm is a step-by-step process to solve a problem or to answer a question.”

For example, she said, a recipe is an algorithm. “You have the list of ingredients, you have the steps in order to make the dish, but then you also have this question of, what does success look like?” she added.

“Do I want to make something that’s really healthy and I’m not concerned with taste as much, … or do I want to make something that’s really delicious and beautiful and I’m not really thinking about the health content?”

For her, it’s about looking at the inputs, the outputs and answering the question, “What are we optimizing?”
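Her recipe analogy can be sketched in a few lines of code. This toy example is not from the article; the recipes, scores and weights are made up purely to illustrate her point that the same inputs and steps produce different answers depending on what "success" is defined to be:

```python
# Two candidate "dishes," each scored (hypothetically) on taste and health.
recipes = [
    {"name": "kale salad", "taste": 5, "health": 9},
    {"name": "chocolate cake", "taste": 9, "health": 2},
]

def pick_recipe(recipes, weight_taste, weight_health):
    """Return the recipe with the best score under our chosen definition
    of success: a weighted mix of taste and health."""
    return max(
        recipes,
        key=lambda r: weight_taste * r["taste"] + weight_health * r["health"],
    )

# Same inputs, same steps; only the optimization goal changes.
print(pick_recipe(recipes, weight_taste=0, weight_health=1)["name"])  # kale salad
print(pick_recipe(recipes, weight_taste=1, weight_health=0)["name"])  # chocolate cake
```

The weights are where "history, values and principles" enter: whoever sets them decides what the algorithm optimizes.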


“That question is very informed by history, values and principles. So, when we take a step back and we think about AI, which is computational and a bit more advanced, we still have to understand that it’s actually not just inputs and outputs. For a ChatGPT model, it’s not just loads and loads of text, and then the output is whatever you’re asking the model to perform or do for you, but it’s also the histories, the values that are dictated by the society that you live in,” she said.

“And for the United States of America, we very much understand the histories and the values of this country. …We see it reflected historically from chattel slavery.”

As technology develops and evolves with the advent of AI, understanding it and knowing its impact and implications will be key.

Barnar C. Muhammad has been in the information technology field for over 35 years. He described the burst of AI as being “like the advent of the internet itself.”

“When I first saw the internet, I instantly saw this is going to change the world. And I think we’re at the position now with AI,” he said to The Final Call. “With this AI thing, it’s something every single day.”

History of AI

Experts say AI was born in 1950 after British mathematician Alan Turing published a paper titled “Computing Machinery and Intelligence,” in which he proposed a way to test machine intelligence, called the Turing Test.

The term “artificial intelligence” was coined five years later in a proposal for a workshop titled “Dartmouth Summer Research Project on Artificial Intelligence,” organized by John McCarthy, then a professor at Dartmouth College; Marvin Minsky, who was in Harvard University’s Society of Fellows; Nathaniel Rochester, who worked at the International Business Machines Corporation (IBM); and Claude Shannon of Bell Labs.

The Longshot, an armed, unmanned aircraft by General Atomics, is displayed at the Air & Space Forces Association Air, Space & Cyber Conference, Wednesday, Sept. 13, 2023, in Oxon Hill, Md. (AP Photo/Alex Brandon)

Afterward, AI research intensified, with early progress in industrial robots, chatbots and programming languages. In the 1980s, Geoffrey Hinton, a British-Canadian computer scientist known as a “Godfather of AI,” helped introduce the backpropagation algorithm, which advanced the field of neural networks, models patterned after the human brain that help machines recognize patterns and make decisions. He continued work in the AI field through the 2000s and 2010s.

Several Black scientists, mathematicians and engineers helped with the advancement of AI in the 20th century. Dr. Clarence “Skip” Ellis, a computer scientist, helped develop systems and techniques that laid the groundwork for interactive computing tools like Google Docs.

The work of Dr. Gladys West, a 94-year-old mathematician, helped with the development of GPS technology. Marian Croak, a 69-year-old engineer, pioneered Voice over Internet Protocol (VoIP), the technology that allows people to make phone calls through an internet connection.

The dangers of AI

“Does humanity know what it’s doing? … You believe they (AI systems) can understand? … You believe they are intelligent? … You believe these systems have experiences of their own and can make decisions based on those experiences? … Are they conscious? … Will they have self-awareness?”

Those are the questions 60 Minutes correspondent Scott Pelley asked Mr. Hinton in a late 2023 broadcast. In response, Mr. Hinton stated: no, humanity does not know what it is doing; yes, they can understand; yes, they are intelligent; yes, they can make decisions based on experiences the same way people do; no, he does not think they are conscious yet; but yes, he thinks in time they will be self-aware.

Something people seriously need to worry about is systems writing their own computer code to modify themselves, he explained during the interview.

Mr. Hinton acknowledged some of the benefits of AI, namely in health care. He also laid out the risks: unemployment; unintended bias in employment and policing; and autonomous battlefield robots. He told 60 Minutes that now is the moment to run experiments to understand AI, for governments to impose regulations and for a world treaty to ban the use of military robots.

In terms of a path forward to ensure safety, “I can’t see a path of guaranteed safety,” Mr. Hinton said. “They might take over.”

When it comes to the dangers of AI, Barnar Muhammad’s immediate thoughts go to science fiction movies that show exactly that: AI taking over. But for those who have historically been oppressed, there is another danger: the misuse of AI by human beings and the human biases programmed into it.

“For instance, Israel is using AI to target Palestinians,” he said. “The Israeli AI finds a target somewhere and just blows up one person, potentially killing a thousand people.”

Several media outlets have reported on Israel’s use of AI. In a 2023 article headlined “Israel is using an AI system to find targets in Gaza. Experts say it’s just the start,” NPR reported that the Zionist regime’s military says it’s using artificial intelligence to target structures in real time.

“The military claims that the AI system, named ‘the Gospel,’ has helped it to rapidly identify enemy combatants and equipment, while reducing civilian casualties. But critics warn the system is unproven at best—and at worst, providing a technological justification for the killing of thousands of Palestinian civilians,” NPR reported.

“‘It appears to be an attack aimed at maximum devastation of the Gaza Strip,’ says Lucy Suchman, an anthropologist and professor emeritus at Lancaster University in England who studies military technology. If the AI system is really working as claimed by Israel’s military, ‘how do you explain that?’ she asks,” the NPR article noted.


The article also quoted a warning from Heidy Khlaaf, engineering director of AI Assurance at Trail of Bits, a technology security firm, who stated, “AI algorithms are notoriously flawed with high error rates observed across applications that require precision, accuracy, and safety.” 

Ms. Milner uses her organization, Data for Black Lives, as a way to transform data from weapons of oppression to tools of social change. 

“For far too long, data has been weaponized against the Black community, from redlining to credit scores to facial recognition technology, insurance algorithms. So many examples throughout history and the present,” Ms. Milner said.

In a recent example of AI housing discrimination, in late November 2024, Mary Louis, a Black woman, won a $2.2 million settlement.

She sued SafeRent Solutions, a service that provides resident screening and verifies applicant income and employment for property managers, landlords and real estate agents, after receiving an email that her application had been rejected, according to the Associated Press.

Her lawsuit claimed that the service’s algorithm “discriminated against her based on her race and income,” that it “failed to account for the benefits of housing vouchers,” and that it placed “too much weight on credit information,” according to NewsOne, a Black news outlet.

“This is a really, really important case. … This is really precedent-setting,” Ms. Milner said. “It’s a federal violation to discriminate against somebody based on race, but a lot of people have been able to rely on the fact that you can’t necessarily sue an algorithm. But in this case, you can sue a data broker company, and I think that’s why this is really important.”

She is passionate about how AI is used in credit scoring and gave a breakdown of Fair Isaac Corporation (FICO) credit scores and how they have been used to redline and segregate Black people.

“Black people are three times more likely than their White counterparts to be scored at 620, and it’s impossible to get a decent apartment in a metropolitan area with a credit score of under 700,” Ms. Milner said. “That’s one of the areas that we’re really seeing the weaponization of artificial intelligence technologies.”

She explained how some of the data points credit scores are trained on could include zip code, which then becomes a representation, or proxy, for race.

“Because of redlining, because of policies that go back to 1933 that have created the geographic fabric of this country, that created segregation in this country, zip codes have become a proxy for race in ways that you don’t need to know whether or not somebody’s Black or White. But you can just see from their zip code what race they are, what ethnicity they are and often, too, what their income is,” she said. “So even though companies like FICO say that they’re not using zip code, they often are, but they have other proxies for race, even if they’re not using zip code or race.”
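The proxy effect Ms. Milner describes can be shown with a minimal sketch. The applicant data below is entirely hypothetical and simplified (the zip codes, races and outcomes are invented for illustration): because historical segregation makes zip code correlate with race, a system that never sees a race column can still produce racially disparate outputs.

```python
# Hypothetical applicant records: the scoring system below is only
# ever shown the zip code, never the race column.
applicants = [
    {"zip": "60601", "race": "Black", "approved": 0},
    {"zip": "60601", "race": "Black", "approved": 0},
    {"zip": "60601", "race": "Black", "approved": 1},
    {"zip": "60010", "race": "White", "approved": 1},
    {"zip": "60010", "race": "White", "approved": 1},
    {"zip": "60010", "race": "White", "approved": 0},
]

def approval_rate_by_zip(data, zip_code):
    """A naive 'model': score new applicants by the historical
    approval rate of their zip code alone."""
    rows = [r for r in data if r["zip"] == zip_code]
    return sum(r["approved"] for r in rows) / len(rows)

# The model never used race, yet its outputs split along racial
# lines, because zip code stands in for race in the data.
print(approval_rate_by_zip(applicants, "60601"))  # ~0.33
print(approval_rate_by_zip(applicants, "60010"))  # ~0.67
```

Removing the race column does not remove the bias; any feature correlated with race can carry it into the model's decisions.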

Another example of racial bias and discrimination in AI lies in the healthcare industry. In late 2019, researchers found that a medical algorithm sold by Optum, a healthcare company owned by UnitedHealth Group, underestimated the sickness of Black patients and favored White patients who were less sick. The algorithm relied on medical costs as a factor.

A visitor touches a humanoid robot hand on display at an AI exhibition booth during the World Artificial Intelligence Conference & High-Level Meeting on Global AI Governance, in Shanghai, China, July 4, 2024. (AP Photo/Andy Wong)

“If you look at who has access to health insurance in this country and who doesn’t, automatically you’re going to know who’s going to be prioritized for care. So that is why so many people who are really, actually in need of care and treatment were denied overwhelmingly, and they happen to be overwhelmingly Black,” Ms. Milner said.

Embracing AI

Despite the concerns about AI, Ms. Milner believes Black people should still embrace it.

“To be honest, Black people have actually always been on the forefront of cutting-edge technology, even though a lot of that history has been erased,” she said.

“I think when we understand our actual, real history as Black people, we’ve always been the ones that have created technology and used technology for good, for culture; not for warfare, not for harm, not for greed, but to make the world a better place.”

She advocates not being afraid of AI, but being aware and exercising discernment when interacting with its various technologies.

Barnar Muhammad uses AI to enhance his daily life. He uses it as a runner and has also used it for gardening. “The most I use it is from a knowledge perspective. The internet itself kind of narrowed the gap to access of knowledge,” he said. “In terms of learning, you could speed up the process of learning a lot of practical things.”


He used urban gardening as an example. 

“AI could actually walk you through with a step-by-step guide. You could literally describe your yard, your space, what you have available, and AI could shorten the time frame in which you would learn exactly what to do to grow your own urban garden,” he said. 

He is an advocate for those in programming and information technology to master what is available and to then create internal AI systems.

“We’re at the point now that you could actually have localized AI on your own computer, if it meets the requirements, and we’re able to actually get in on some of this development and understand how it works. That’s one thing. If we don’t understand how it works, we’re not going to be able to defend against the negative aspects of it, and we won’t be able to affect the future of it,” he said.

Ms. Milner also believes that more education and awareness of AI is needed.

“I think there’s a lot of effort on the part of corporations to really push and market it, but I think that right now we need to understand that it’s really in development still, and that it is limited,” she said.

“And we need to understand that at the end of the day, it’s human beings who are building and training these, so how do we make sure that the human beings are also representative of the actual population that these AI models are really meant to be used on and to serve.”