Issue 6: Old / New / Next

There Is No “Artificial” Intelligence: A Conversation with the Initiative for Indigenous Futures

Gabriella Garcia

Contemporary theoretical debates about Artificial Intelligence have touched the full spectrum of social domains, from the ethical to the economic. How will AI-driven automation affect the future of work, for instance? Who is responsible when a self-driving car kills a pedestrian? When datasets inform the criminal justice algorithms used to predict recidivism and shape bail decisions in the courtroom, is it possible (probable?) that those datasets are intrinsically biased, given that the sets themselves were informed by discriminatory legal policies that disproportionately penalized people of color and the poor? Do we have a moral obligation to recognize sentient robot personhood?

One theoretical question continues to fly below the radar, despite being embedded in the field’s very name: what makes something “artificial,” and how do we determine “intelligence”? These questions motivate current research by the Initiative for Indigenous Futures (IIF), an alliance of over a dozen Canadian organizations dedicated to answering the questions of “how Indigenous people imagine our future, and the related questions of how we will build our way to that future.”

Founded in 2014, IIF grew out of Concordia University’s 2006 partnership with Aboriginal Territories in Cyberspace, an “Aboriginally determined research-creation network” dedicated to establishing and fostering Indigenous visibility in online environments as virtual territories come to rival real ones in importance. In its five years, IIF has cultivated a research network of Aboriginal artists, technologists, theorists, and activists focused on interrogating the assumed drivers of technological change as a means to disrupt neo-colonial narratives of progress.

IIF at MUTEK_IMG

This past August, IIF co-founder Professor Jason Edward Lewis and Lakota performance artist Suzanne Kite, a PhD candidate at Concordia currently assisting research at the Initiative, delivered one of three keynotes at Forum IMG, a symposium dedicated to exploring the practical and theoretical impacts of creative digital technology, held during the annual MUTEK electronic arts festival in Montreal. Each year, the thematically curated forum addresses a critical industry issue through panels, workshops, and exhibitions, expanding the festival beyond pure entertainment into a space for education.

This year’s topic, Imagining Our Digital Futures, focused squarely on the urgent need to address rapid technological development that proceeds without accountability. The theme hits especially hard in Montreal, which has become a leading global hub for the AI and machine learning industry, with much of the funding allocated for research. Among panels covering everything from the dark future of data surveillance to the complexities of defining authorship in the age of algorithmic art, the IIF keynote stood out as the most imaginative, maybe even hopeful, vision of a potential digital future.

“We’d like to begin by acknowledging that this event is taking place on unceded Indigenous lands…” So began Kite and Lewis’s keynote, Making Kin with the Machines. The Indigenous land acknowledgement has emerged as a well-intentioned but oft-trite kickoff custom for Canada’s cultural events, a gesture of atonement for past genocides that ignores the continuing systemic violence experienced globally by First Nations people. Here, though, the words resonated, offering a way to discuss possible futures rather than rhetorically bandaging history. The keynote continues a discussion that Kite and Lewis introduced in an essay of the same name, written alongside Indigenous Hawaiian historian Noelani Arista and Plains Cree performance artist Archer Pechawis, which was one of the winners of the “Resisting Reduction” essay competition and was later published in MIT’s Journal of Design and Science. The essay challenges the anthropocentric models currently used to define the development of advanced computational systems, and instead proposes a “kinship” model that dismantles any notion that humans matter more than the environments within which they create.

Making Kin with the Machines combined ideas from three different North American and Pacific Indigenous traditions, as represented by the authors, each challenging the Enlightenment’s nature-culture divide, in which the human holds itself separate from, and above, the organic in order to observe the natural world from a scientifically appropriate distance(1). The distinction is informed by a reason-driven framework that requires an ‘othering,’ which makes holistic cosmologies problematic to the Western philosophical movement. Thus, Indigenous thought became a problem to be solved in the name of humanist progress, with the enlightened expansionist bestowing the “gift” of rational thought by way of “civilizing the savage.”(2)

It is precisely this assumption, which continues to inform current social models and modes of production, that IIF calls into question by illustrating the fundamental disconnect between those developing algorithmic technologies and those—human and non-human alike—who are forced to endure the consequences of such technologies without having any say in their development.

[Illustration: the moon emerging from a human head, surrounded by butterflies]

A Good Way Protocol

“The only thing that’s interesting to me about the Western engagement with AI and machine learning is that we had to write a new language in order to communicate with rocks,” said Kite. She was speaking of programming languages, zeroing in on the fact that inherent to computational development is the creation of ways to speak, and listen, to an object. Within the Lakota ontology in which Kite was raised, there is no division between the “active” subject and the “latent” object. There is an ecological value of the whole, in which human and nonhuman histories are interwoven into a pattern of action and consequence. The object is a being that holds a cosmo-terrestrial record far beyond any historical document the human could develop. This cosmology, the acknowledgment of the inanimate object as an intelligent part of the tellurian community, can, Kite suggests, help structure our evolving relationship with AI in two ways. If intelligence—or non-human interiority—is recognized in every segment of AI development, it can help technologists create an ethical framework for building AI, all the way down to the raw materials sourced for its physical components, and it can dissolve the notion that any part of that development is an “artificial” entity.

“Our [Lakota] elders gave us clear values. There are seven sacred rites, seven sacred virtues, that we are committed to perform over the course of our lives that guide us toward how to do something in a ‘good way,’” Kite explains. This ‘good way’ protocol for the development of AI is the foundation of Kite’s contribution to Making Kin with the Machines, as well as of her own art-based research. “The agency of stones connects directly to the question of AI, as AI is formed from not only code, but from materials of the earth,” Kite writes. “To remove the concept of AI from its materiality is to sever this connection.”

By ignoring non-human interiority, we continue the exploitation-based production protocol that has been the driving engine behind Western colonial capitalist ontology, one marked by blood every step of the way. When the mineral coltan, for instance, is reduced from “being” to “resource,” we can ignore the civil wars and gross human rights violations caused by mining to meet demand; we can package this relationship neatly with the media-ready term “conflict minerals” and more easily dehumanize as “collateral damage” those killed in these conflicts or in the process of collecting said minerals(3).

This will continue to manifest in the computational biosphere, all the way up and through AI development, unless it is interrupted by protocols that acknowledge the world as an interconnected whole. “If you build AI in a completely good way, then you don’t have to worry that the cyborg monster is going to come murder everyone, because it’s not built from the ground up within a hellish world-ending ontology.”

White Supremacy: It’s Not Just for People Anymore!

Lewis described just how this inherent nature-culture divide trickles into the social implications of AI as it becomes more prevalent in both the public and private sectors. “Trying to make this really clear distinction between what is real and what is artificial, holding up the human as separate and above the rest of creation by terming it artificial, allows us to distance ourselves from [AI],” Lewis says. “When we see it as a tool, we don’t have to think about it as a product of human effort.” This, he argues, allows both those developing and those implementing intelligent technologies to hand off responsibility to the machine without questions of human accountability. The hand-off is furthered by the acceptance of the machine learning “black box,” in which engineers provide no transparency about how the models they design produce the results upon which ML-informed decision-making ultimately relies(4). Surprising the audience, Lewis asks, “What sort of engineering practice do we have where we can build this thing over here that people’s lives depend on, but don’t really understand how it works?”(5)
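The opacity Lewis describes is easy to make concrete. Below is a minimal sketch in Python, assuming scikit-learn and using its toy iris dataset as a stand-in for any consequential decision domain; none of this comes from the keynote. The trained network performs well and its weights are fully inspectable, yet nothing in them explains an individual decision in terms a stakeholder, or a court, could interrogate.

# A minimal "black box" sketch: a small neural network learns a task
# well, but its internals offer no human-readable account of why any
# single decision was made. Dataset and model are illustrative stand-ins.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

print("accuracy:", model.score(X_test, y_test))   # typically above 0.9
print("first weight matrix:\n", model.coefs_[0])  # rows of opaque numbers
# The parameters are all visible, but visible is not explainable: no
# weight tells you why one flower, or one defendant, was classified
# the way it was.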

IIF sees an opportunity to interrupt presumed methods of development by amplifying the whole spectrum of global Indigenous voices in the techno-scientific conversation. “As Indigenous people, as a brown person myself, we’ve been at the pointy end of technology, where technologies have been used against us. Part of the way they’ve been able to be used against us is by devaluing us as human beings,” Lewis says. “What kind of AI entities do I want in my life that are going to help me and my communities thrive, as opposed to creating yet another tool that will be used to suppress us, to constrain us, and to once again claim that we are not full members of the human?”

This concern—that technological development within a Western ideology promoting conquest through violence both brute and epistemic will only breed more of the same—affects humanity as a whole, not just those currently on the “pointy end.” While it may be the oppressed who are always on the frontlines pushing against oppressive technologies, those technologies inevitably spread beyond them, becoming tools for the subjugation of the masses as a whole. “The historically-oppressed know what’s coming only because it happens to them first and worst,” Kite observes, “but your average cis white man will be oppressed by the same tools. It seeps into everything. It’s poison.” Being as deep in the trenches as we already are, how can we prevent the furthering of systemic oppression so efficiently promoted by the deployment of algorithmic reductionism removed from human accountability? Recently, evidence of this dilemma has become clearer than ever, validating the prescient fears of the oppressed.

On September 19th, Kate Crawford, AI Now Institute co-founder and director, and artist-activist Trevor Paglen published Excavating AI, a damning research paper illustrating the alarming fallibility of the datasets used to train object and facial recognition AI systems(6). ImageNet, one of the most widely used databases for recognition training, categorized persons under clearly biased labels such as “rapist,” “call girl,” “yellow person,” and “hermaphrodite.” (ImageNet purged over 600,000 photos of people after the paper was published.) IBM’s Diversity in Faces training sets used facial symmetry and craniometry to classify subjects by race, a practice reminiscent of the pseudoscientific methods once used to justify biological determinism and, ultimately, eugenics. And Adam Harvey and Jules LaPlace called out Duke University for a photo repository illicitly populated by surveillance footage of its students—the majority demographic of which is cis white men(7).

Despite these gestures toward accountability, whatever damage the biases of these sets caused is now deeply entrenched in the algorithms they trained; there is no straightforward way to isolate and remove malignant clusters from a neural network after the fact. A lack of protocol reverberates beyond human imagination.
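A toy sketch in Python suggests why the damage persists. The captions and labels below are invented placeholders, not ImageNet data, and the model is deliberately trivial; the point is only that a model trained on a tainted taxonomy keeps that taxonomy even after the source dataset is purged.

# Bias entrenchment in miniature: train on a label set containing a
# harmful category, then purge the dataset. The trained model is untouched.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

captions = ["person smiling outdoors", "person at a desk",
            "person in a crowd", "person walking a dog"]
labels = ["worker", "worker", "offensive_category", "offensive_category"]

vectorizer = CountVectorizer().fit(captions)
model = MultinomialNB().fit(vectorizer.transform(captions), labels)

# Purging the offending data after publication...
captions.clear()
labels.clear()

# ...does nothing to the model already trained on it:
print(model.predict(vectorizer.transform(["person in a park"])))
# The harmful category can still surface in predictions; truly excising
# it means rebuilding the training set and retraining from scratch.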

[Illustration: a crumbling statue of a human head with a flower growing out of it]

Listening to the Machine

This moment is particularly ripe for disruption because it is the first time we are creating something that talks back to us in a way that simulates parts of the human emotional spectrum. “[AI] can see patterns and connections that we can’t see and make them visible to us,” Lewis says. “And it gives us a place to think through what ‘humanness’ is—not because AI is on its way to becoming human, but because it creates an interesting mirror to think through how we organize ourselves as humans.” From the outset, AI requires a deep interconnectedness between human and non-human entities, making Indigenous ontologies that celebrate this interconnectedness especially adept at assessing what we see in that mirror, and at engaging these newly visible patterns toward a more ethical communal future.

Machine learning has put the human in an object-receptive position that goes beyond observation, indicating a potential gateway to introducing holistic ontologies into Western scientific and technological practices. “It feels really magical when your machine learning model does something you didn’t expect, but it’s just the first step,” Kite says. “It’s almost forcing objects to be listened to on your terms.” Kite regularly explores the practice of listening—or her “desperation to hear something,” as she calls it—in her own art, exemplifying a move to a good-way protocol throughout her process. Kite’s recent interactive sculptures, which join traditional Lakota symbology and practice with physical computation and machine learning, physicalize listening by producing sound or visual output based on the circular communication between audience/interactor and the material components of the piece. Thus, Kite illustrates the result of reciprocity between the human and non-human, informed by an ontology in which both hold equal value as sacred, by creating something together. 

Lewis comes at the research from a different angle, facilitating conversations about potential uses and locally informed development of AI, in the hope of creating technologies that help communities on his native Hawaiian islands thrive. Could the revitalization of Indigenous fishponds, for instance, which were all but destroyed by colonial settlers, be assisted by machine learning models trained on the knowledge of elder fishmasters? Lewis says that some locals stewarding fishponds are currently trying to develop technology that can automate counting the fish population in order to lighten the fishmaster’s workload. “Counting the fish… seems like a really simple issue, right? But there are technical issues that get in the way of detection, like murky water and weird light refraction,” Lewis says. Beyond that, there are intuitive aspects of pond stewardship that are difficult to digitize, such as the individual age or school affiliation of a fish, which masters can determine based on their knowledge of each specific fishpond ecosystem as a whole. In attempting to address these gaps in cultural knowledge, Lewis highlights one of AI’s major flaws: what parts of humanity are being left out of the machine learning training base for the sake of flattened datafication? The answer might be so overwhelming that the industry, in its current state, has chosen to ignore it for the sake of rapid growth.
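Even the “simple” counting step shows how quickly the technical and the intuitive diverge. The Python sketch below is hypothetical: a synthetic frame stands in for real fishpond footage, and naive thresholding stands in for a detection model. Counting bright blobs is straightforward; murk and noise already degrade it, and everything else a fishmaster knows never enters the frame at all.

# A hypothetical fish-counting sketch on a synthetic "pond frame."
# No real footage or trained model is used.
import numpy as np
import cv2

# Dark water with a few bright, fish-shaped blobs.
frame = np.zeros((240, 320), dtype=np.uint8)
for cx, cy in [(60, 80), (160, 120), (250, 200)]:
    cv2.ellipse(frame, (cx, cy), (18, 7), 0, 0, 360, 200, -1)

# Simulate murky water: blur plus sensor-like noise, the kind of
# interference Lewis says defeats naive detection in a real pond.
murky = cv2.GaussianBlur(frame, (9, 9), 0)
murky = cv2.add(murky, np.random.randint(0, 60, frame.shape, dtype=np.uint8))

# Naive detection: threshold, then count connected blobs.
_, mask = cv2.threshold(murky, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print("fish counted:", len(contours))
# Counting is the easy part. A fish's age, school affiliation, and the
# pond-specific knowledge a fishmaster holds have no obvious encoding here.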

“How do you instantiate generational communication computationally?” Lewis asks. “There are things in caring for a territory, like observing migratory patterns or large-scale change that is difficult to see from our timeframe, that these systems can be really helpful for. But for me, the question is how to develop them in a way that they are part of the community, in a way that keeps the data local to the people that you want to use it.”

Footnotes

1. Latour, Bruno. Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford University Press, 2005.

2. Horsman, Reginald. Race and Manifest Destiny: The Origins of American Racial Anglo-Saxonism. Harvard University Press, 1986.

3. Nathan, Dev, and Sandip Sarkar. “Blood on Your Mobile?” Economic and Political Weekly, vol. 45, no. 43, 2010, pp. 22–24.

4. Finn, Ed. “Algorithm of the Enlightenment.” Issues in Science and Technology, vol. 33, no. 3, 2017, pp. 21–25.

5. Mbadiwe, Tafari. “Algorithmic Injustice.” The New Atlantis, no. 54, 2018, pp. 3–28.

6. Crawford, Kate, and Trevor Paglen. “Excavating AI.” 2019.

7. Harvey, Adam, and Jules LaPlace. “Megapixels Project.”