HUMAN VS ARTIFICIAL INTELLIGENCE
Beginning of artificial intelligence

A consultant in Silicon Valley once told me that the pocket artificial intelligence (AI) we carry is more intelligent than we realize. It communicates with our other electronic devices, sharing information across Bluetooth or Wi-Fi networks. This combination of learning algorithms and constant data gathering means that AI is continuously processing information according to its programming. Yet this seemingly benign technology can influence our daily decisions, many of which are not conscious. Furthermore, because our society is imperfect, the benefits of AI may flow to some while marginalizing others. AI depends on human programming, and its learning is only as good as the quality of the data it receives. Implicit bias and other subjective criteria can therefore seed prejudice in AI decision-making. Machine learning, a subset of AI, can foster racial and gender profiling, along with other hidden and normalized biases. If a society has a history of discriminatory practices, AI will deliver a rear-view outcome, limiting equity-based progress while lending it a veneer of legitimacy.

In 2015, Jacky Alciné, a 22-year-old software engineer living in Brooklyn, shared images of his friend on Google Photos. The service's artificial intelligence misclassified more than 80 photos of Alciné's black friend as "gorilla." Alciné took to Twitter to express his frustration: "Google Photos, you messed up. My friend is not a gorilla." The incident is not an isolated case. Earlier this year, Uber faced consequences when its use of Microsoft facial recognition technology led to the dismissal of its own employees: the technology failed to recognize and authenticate non-white employees.
In instances where certain occupations have exhibited an imbalanced representation of women, the limited data available on women can result in negative preconceptions or even lead to the exclusion of resumes before the hiring process commences. This gender stereotyping perpetuates and reinforces gender biases, hindering the progress made through years of awareness, grassroots efforts, and activism. In 2018, Amazon acknowledged that its AI model exhibited a preference for male job candidates. The program had been trained on a ten-year database that predominantly featured male candidates, thus supporting the notion that men were deemed more favorable.
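The mechanism behind cases like Amazon's can be sketched in miniature. The toy example below is hypothetical and is not Amazon's system: the 90/10 hiring history and the `keyword_weight` function are invented purely to illustrate how a model that "learns" from skewed past decisions reproduces the skew when scoring new, equally qualified candidates.

```python
# Minimal sketch (hypothetical data): a scorer trained on past hires
# reproduces a historical gender imbalance in its recommendations.
from collections import Counter

# Toy ten-year hiring history: 90% of past hires were men (illustrative only).
history = [{"gender": "M", "hired": True}] * 90 + \
          [{"gender": "F", "hired": True}] * 10

def keyword_weight(history, gender):
    """Naive 'learning': weight a candidate by how often similar
    candidates appear among past hires."""
    hires = Counter(h["gender"] for h in history if h["hired"])
    total = sum(hires.values())
    return hires[gender] / total

# Two equally qualified new candidates receive unequal scores,
# solely because of who was hired before them.
print(keyword_weight(history, "M"))  # 0.9
print(keyword_weight(history, "F"))  # 0.1
```

Nothing in this sketch is malicious; the bias enters entirely through the training data, which is the essay's point.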
In the realm of healthcare, black individuals have historically faced discrimination and exploitation, leading to their underutilization of medical services. A study conducted in 2019 revealed that AI systems perpetuate these historical patterns by recommending fewer follow-ups, diagnostic tests, and evaluations for black patients than for their white counterparts. Additionally, a 2016 study highlighted the racial profiling of pain thresholds, a belief rooted in the legacy of slavery, which resulted in AI-based pain medication recommendations being lower for black patients than for white patients.
Concerns about AI bias extend beyond machine-learned decisions to physical appearance, specifically perceptions of attractiveness and "standard" appearance. Negative outcomes have already been documented: facial recognition systems developed by Google, Microsoft, and Amazon have been found to misidentify individuals of color at rates of 70% or higher.
In a recent interview, Kate Crawford, a professor of communication and science and technology studies at the University of Southern California and a senior principal researcher at Microsoft Research, offered an alternative perspective on AI bias, one focused on power dynamics. She emphasized that while ethics are important, they alone are not sufficient. It is more valuable to ask who benefits and who is harmed by AI systems, and whether these systems further empower already powerful entities. Examples such as facial recognition technology and workplace surveillance show that these systems often reinforce the power of corporations, militaries, and police.
To address the known and emerging limitations of AI, several actions must be taken. Firstly, data used in AI systems should embody the aspirational goal of social equity and equality. Additionally, routine evaluation of AI is necessary to ensure that its use aligns with its intended purpose. Furthermore, perspectives from diverse stakeholders should be included in the development and evaluation of AI. By doing so, AI can serve as a tool to expose and rectify social biases, thereby promoting ethical decision-making. However, assessing all aspects of AI is not straightforward, and some effects may only become apparent over time, by which point societal changes may have already occurred.
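One concrete form the routine evaluation described above can take is a simple fairness audit of a system's decisions. The sketch below is a minimal illustration under assumed data: the `demographic_parity_difference` function and the toy decision log are hypothetical, not a reference to any vendor's tooling.

```python
# Hypothetical audit: measure the gap in positive-outcome rates
# between two groups, a common first check in AI evaluation.
def selection_rate(decisions, group_labels, group):
    """Share of positive decisions (1 = approved) within one group."""
    chosen = [d for d, g in zip(decisions, group_labels) if g == group]
    return sum(chosen) / len(chosen)

def demographic_parity_difference(decisions, group_labels, a, b):
    """Absolute gap in selection rates between groups a and b; 0 means parity."""
    return abs(selection_rate(decisions, group_labels, a)
               - selection_rate(decisions, group_labels, b))

# Illustrative decision log for two groups (invented data):
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(round(demographic_parity_difference(decisions, groups, "A", "B"), 2))  # 0.6
```

A large gap such as this one does not prove discrimination on its own, but it flags exactly the kind of disparity that the diverse stakeholders mentioned above would need to investigate.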
An emerging concern is the asymmetry between humans and machines in terms of trust, empathy, and responsibility. As AI is increasingly deployed as an intermediary between humans and specific actions, it is designed to learn from its experiences.
A recent research study revealed that humans are less likely to maintain politeness and trust in their communications with machines.

With the absence of standardized regulations and the multitude of vendors, the potential for AI to cause significant social harm is a pressing concern. Unchecked, AI can perpetuate historical inequities by reflecting the implicit biases and societal norms ingrained within our culture. These biases and norms often go unnoticed by those who belong to the dominant racial group or who enjoy a certain level of socio-economic privilege. Consequently, regulating AI is a challenge: it requires the intervention of individuals who can recognize the pitfalls of the technology and advocate for those who are more vulnerable.
Timnit Gebru, a former AI developer for Google, aptly expressed her concerns about the AI community. She emphasized that her worry lies not in machines taking over the world, but in the prevalence of groupthink, insularity, and arrogance within the AI community. Gebru highlights that the people responsible for creating the technology play a significant role in shaping its impact. If large portions of the population are excluded from the development process, the resulting technology will benefit a select few while harming the vast majority.
One of the most formidable challenges in promoting ethical AI lies in the leading companies that are spearheading the integration of this technology. Giants such as Google, Microsoft, and Amazon possess substantial market presence and wield considerable lobbying power. However, the collective strength of customers and investors holds even greater potential for effecting change. It is through their combined efforts that ethical considerations can be prioritized and the negative consequences of AI can be mitigated.
