How can we feel AI?

Reconnecting artificial intelligence to life and living

Local Disturbances, given the world we live in, often explores the impacts of AI on human cultures and the natural world (and of course these things are inseparable). This post and the previous one focus on general approaches so that we might start looking more deeply into specific projects and techniques that could support us going forward to make better decisions about algorithmic systems and the world that is emerging.

The main thrust is that we need to find more and better ways to talk about, design, and critique AI systems if we hope to avoid repeating the worst excesses of our current society. Our love of efficiency and growth informs most of the decisions made about AI, and those assumptions are buried deep in the solutions we develop.

There is a need to distinguish between general knowledge of these invisible sociotechnical forces and a particular knowledge of how they will locally manifest in life and living. There is a difference between knowing about fire as a phenomenon and the experience of cooking with it or being burned by it. Much of the focus has centered on sharing ideas about the properties of the “fire” or sorting through the terrain after the fire has passed. We hope to focus on the exploration of the lived experience of “cooking” or being “burned” in AI.

We must also uncover ways of talking about these issues that don’t require the adoption of Western frames and ways of seeing the world. When technical or scientific propositions have passed themselves off as the “whole” of the world, we have experienced catastrophic consequences for the world beyond technology. Climate change, alienation from landscapes and bodies, ecological destruction, and social Darwinism all extend from scientific theories dominating new aspects of human activity.

The risks and opportunities are real. Machine learning is becoming ubiquitous, defining health, human migration, economic activity, policing, prediction, and environmental decision making. In response, ethical arguments are made to combat the hegemony of narrow assumptions. The explosive growth in AI ethics is a testament to the perceived need and urgency. Ideally, policies and public attitudes will evolve to address the changes and a new balance will be established. The pace of AI implementation is breakneck. The ability of activists, community members, and ethicists to keep up will be sorely tested.

The speed by which artificial intelligence is gaining ground is not the only concern.

Theories – scientific or ethical – invite abstraction. Abstraction is often necessary, but there is also a need to reconnect concepts to the roots of being: our bodies, our landscapes, and our cultures. The ‘ideas’ we hold about AI need to be tested and refined through direct experience. Approaches centered in ethics, for example, lead to generalizable principles. These principles can become a basis for law or public opinion, but may do so without regard for the embodied event performed by an actual, contextualized subject.

At UKAI Projects, we argue for an approach that assembles multiple perspectives and lived experiences to generate prototypes that embrace a complex dialogic around AI. We’ll be unpacking the idea of ‘dialogic’ in future posts, but broadly it suggests multiple ideologies and viewpoints brought into relation without insisting on resolution. Rather than finding the “right path”, we bring multiple conceptions of algorithmic bodies, lands, and communities to bear on a living and multi-epistemic stance toward algorithmic culture. By ‘seeing’ the issues from these various points and relationships, we can integrate and assemble the different paths available and choose the ones most appropriate to the context within which we find ourselves.

The climate movement has long been criticized for being too “white”. Climate change affects all of us, and the lack of diverse perspectives on the issues limits the range of action available. Research through the David Suzuki Foundation found that only 3.7% of environmental non-profits in Canada engage directly with racialized communities in dealing with the impacts of human-created climate change. Yet these impacts are unfairly and disproportionately experienced by those same groups. Racialized communities are statistically more likely to live in poverty, consume less, and therefore emit less pollution. One reason for this exclusion is that both the issue (climate change) and its responses (climate action) prioritize Western ideological positions that can exclude other ways of knowing the world. In a project that arose in response to this research, community members described the barriers to participation in the climate movement, even though their work in affordable housing, transportation, or food security intersects closely with it.

Similarly, we are seeing theories of AI inscribed into systems designed to directly impact humans, with little effort to diversify or complicate how those impacts are understood or shared.

The currently dominant ideology of AI in North America holds that a small technical elite can and should develop technologies that will replace human agency and judgment. This theory of AI understands individuals as objects requiring optimization and control. John Cheney-Lippold, author of We Are Data, observes that “when our embodied individualities get ignored, we increasingly lose control not just over life but over how life itself is defined.”

Even the idea of “intelligence” is grounded in assumptions about what qualifies as rationality and its effective application. The idea of intelligence came out of the desire to establish a criterion for participation in public life in ancient Greece. The emotional, the irrational, the less erudite were excluded, and continue to be.

Ethicists, philosophers, and artists have begun the work of “objectifying” critical issues around AI and we are beginning to map out the impacts of what happens when technical theories become embedded as moral or aesthetic theories. There is both a gap and an opportunity in the field.

Communities are presented with objects of theoretical cognition, such as ethical frameworks, contemporary works of art, or collaborative manifestos. For too many, these objects make algorithmic events inaccessible. Or communities are provided immersive, aesthetic encounters with AI - dystopian, utopian, or otherwise - but then lose their own position in relation to it.

How might we produce a surplus of seeing, and thereby be perceived from contexts and points of view to which we do not have access?

The current approach to imagining and representing AI prioritizes the scaffolding of previous interpretations. A speaker tells me that an automated vehicle failed to recognize Black faces in its operation. Based on my existing mental models, I immediately understand that this is unacceptable, and I argue for greater vigilance in the creation and regulation of these automated systems. At no point am I provided a direct encounter with a material event. Rather than embodying or transforming my perspective, I am asked to apply my existing perspectives to the issue at hand. In many cases this is sufficient. But the changes underway too often escape the boundaries of our existing experience, and the perspectives we bring are likely to prove inadequate for understanding and responding to them.

Perhaps the current listlessness in engaging with opportunities and threats of AI is less a product of indifference or hypocrisy and more the inaccessibility of the ‘event’ under discussion.

By centering a dialogic approach to materializing our relationships with AI, diverse perspectives can be brought into relationship with one another to build a dynamic and multi-centered structure of representation.

AI is driven by a spoken or unspoken need to understand people and the world as ‘finalized’ objects of an automated process. Exploration must be multi-subject and multi-epistemic if we hope to create structures of meaning, creation, and action. And by placing subjects into these relational structures of interpretation we become answerable for what happens next. As long as these structures remain unfinalized, we continuously undersign ourselves to the decisions, actions, and speech conducted in our name.

AI is a move toward the monological, where scale and efficiency become moral positions driving development. If the idea of progress - the promise of a better future and ever-faster ways to get there - is no longer valid, then AI offers some sense of control, at least to avoid the worst disasters.

We see the cultural conversation around AI being supported through dialogic and genuine encounters among multiple socialized consciousnesses, both in our design processes and in the prototypes constructed. We focus on the possibility of change rather than on the specific changes themselves. We hope to show that this can be done through a focus on the body, on how we are organized by the natural world, by culture, through parody, renewal, and revival. We are seeking an interruption of the monological, organized around collective exploration (and laughter).

We would love to engage many more in these conversations and to collectively explore possible uses and limitations of artificial intelligence. Reach out if you are interested.
