Photo Credit: DALL·E, 2023

Note: This is a short piece I recently wrote as part of a university-wide article celebrating the 75th anniversary of the Universal Declaration of Human Rights.

To understand the promises and risks of technological advancements, we must first recognize that we live in a “technopoly.” In a technopoly—a term popularized by Neil Postman in his 1992 book “Technopoly: The Surrender of Culture to Technology”—society is characterized by an unmindful faith in the inherent goodness of technology. It is, therefore, not surprising that in a technopoly, discussing the risks of technological advancements is often taboo.

As technology advances, citizens and scholars alike observe a mixture of promises and risks concerning human rights. The promises are unbounded: improved quality of life, enhanced education and awareness, greater capacity for social mobilization, and richer expression and communication, among others. But while the promises are unbounded, so are the perils.

One of the perils arises from the fact that decisions affecting our daily lives are increasingly made by highly nontransparent algorithms. These algorithms are now widely accessible to many working in the fields of AI and machine learning. I teach many of them to my own students, showing them their incredible performance in various domains. Nontransparent here does not mean that we cannot comprehend these algorithms or teach our students how to develop and use them; rather, it means that their rules of reasoning are often not clear to various stakeholders, including their users or those affected by them.

Notably, it is a significant human rights concern when one’s life is affected by an algorithm with nontransparent rules of reasoning. As the privacy law expert Marc Rotenberg argued during a Knowledge Café event organized by UNESCO, “at the core of modern privacy law is a single goal: to make transparent, the automated decisions that impact our lives.” Emphasizing a human rights concern, Rotenberg reminded the audience that:

“At the intersection of law and technology–knowledge of the algorithm is a fundamental right, a human right.”

Although the digital revolution has made AI techniques that turn large amounts of data into actionable decisions more important than ever, the overall trend toward transparency has been disappointing. While early bestsellers such as “The Naked Corporation” (2003) argued that transparency would soon transform every aspect of the economy and markets, we instead face paradoxical outcomes such as what the philosopher Shannon Vallor calls the “Technological Transparency Paradox.” Summarizing this paradox, the Stanford Encyclopedia of Philosophy notes that:

“Those in favor of developing technologies to promote radically transparent societies, do so under the premise that this openness will increase accountability and democratic ideals. But the paradox is that this cult of transparency often achieves just the opposite with large unaccountable organizations that are not democratically chosen holding information that can be used to weaken democratic societies. This is due to the asymmetrical relationship between the user and the companies with whom she shares all the data of her life. The user is, indeed, radically open and transparent to the company, but the algorithms used to mine the data and the 3rd parties that this data is shared with is opaque and not subject to accountability. We, the users of these technologies, are forced to be transparent but the companies profiting off our information are not required to be equally transparent.”

The rapid advancements in Generative AI, and especially Large Language Models such as OpenAI’s GPT-4, Anthropic’s Claude, Google’s Bard, and Meta’s Llama 2-Chat, have intensified the concerns related to transparency. This is particularly the case because these so-called “foundation models” are primarily built using highly nontransparent AI methodologies such as “transformers.”

But transparency is not the only human rights concern when it comes to recently developed AI tools. These tools can be used for the rapid spread of disinformation, which can undermine democratic processes, foment social division, and endanger individuals’ basic right of access to accurate information. They can also exacerbate inequalities, especially given that access to them is unevenly distributed, creating a significant digital divide in which marginalized communities are left far behind. They could also displace large swaths of the workforce, leading to colossal economic instability, challenging the right to work, and threatening a minimum standard of living for many.

Finally, these tools can lead to significant discriminatory outcomes, potentially reinforcing existing social inequalities and violating the basic rights to equality and non-discrimination.

In a technopoly, therefore, it is vital to balance enthusiasm for technological progress with critical oversight and well-designed regulations to ensure that advancements do not erode human rights. In closing, however, I would like to end on a hopeful note. While these technologies could be extremely perilous, I believe that, with the right oversight and careful regulation, they can soon prove themselves essential tools for improving many aspects of our lives.