Not Every Story Needs a Villain

These short missives are an effort to make sense of the implications of algorithmic technologies (like AI) for how culture is produced and appreciated. Local Disturbances extends from UKAI Projects, a non-profit arts organization based in Toronto. At UKAI, we hope to create a space for conversations and activities directed at the less obvious consequences of turning more and more decisions over to machines.

The word ‘ukai’ comes from Japan and refers to the practice of using cormorants to catch fish in a river. The birds swim alongside a boat. Metal rings around their necks prevent them from swallowing what they catch. The rings are attached to strings held by an “ushou”, or fisher-person, standing in the boat. A basket of burning pine draws sweetfish to the surface of the river, where they are gobbled up (but not swallowed) by the cormorants. When their throats are fully engorged, the birds are pulled into the boat and tipped over, and the fish flow into a bucket for use by the local community.

The first time I was exposed to this, I didn’t know what to make of it. Rather than lingering in that discomfort, I felt compelled to “sort” it, and told a friend I found the practice a bit cruel. Just like the algorithmic systems I write about, I needed it to go into one box or another, or to become completely invisible, for events to proceed.

In an occasionally overwhelming world, it makes sense to look at a thing and then quickly decide what it means so that we can move on to the next thing. Often we do this without reflection. We borrow the patterned beliefs of a group to which we belong, or of the broader culture around us, and use them as a map to make sense of the world.

That’s good. That’s bad.

But these maps are abstractions. They are not the terrain.

Last month I was invited to speak at an event at the University of Ottawa entitled “Beyond Big Data Surveillance”, the culmination of a seven-year research project. The speakers were doing amazing work and I learned a lot. However, after only a few talks and panels, I was left in no doubt about who the heroes and villains of the stories being told were meant to be. The people in the room were the good ones. The bad ones were (thankfully) not in the room. It was our task to legislate or otherwise mitigate the excesses of the bad ones.

The discourse was defined by these polarized positions, often centred in abstracted Western ethics or concepts of utility or privacy or care.

My talk was part of a panel exploring how the public understands these systems and how to support greater engagement. I argued for introducing ambiguity, uncertainty, and the potential for wonder into our experiences of events that might otherwise be quickly categorized and sorted. I argued that by reducing the conversation to abstracted ethics, we fall into the trap of these automated systems. Algorithms are mono-epistemic: to read the world, the world and the people in it must become objects amenable to measurement, categorization, optimization, and selective action. As I have written previously, we are being encouraged to model our own behaviour on the machines we make.
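
To make that loop concrete, here is a deliberately toy sketch in Python. Every name in it is hypothetical and invented for illustration; no real system is this crude. The point is only the shape of the reduction: whatever cannot be measured is dropped before categorization, optimization, and action ever begin.

```python
# A toy sketch of a mono-epistemic pipeline: measure, categorize, act.
# All names here (Person, measure, categorize, act) are hypothetical.
from dataclasses import dataclass


@dataclass
class Person:
    name: str
    story: str            # everything the system cannot read
    clicks: int           # the only signals it can
    watch_minutes: float


def measure(p: Person) -> dict:
    # Step 1: reduce a person to numbers. The 'story' field simply vanishes.
    return {"clicks": p.clicks, "watch_minutes": p.watch_minutes}


def categorize(features: dict) -> str:
    # Step 2: sort the numbers into a box.
    return "engaged" if features["clicks"] > 10 else "churn_risk"


def act(segment: str) -> str:
    # Step 3: selectively act on the box, never on the person.
    return "promote" if segment == "engaged" else "re-engagement campaign"


viewer = Person(name="A.", story="grieving, between jobs, curious",
                clicks=3, watch_minutes=12.0)
print(act(categorize(measure(viewer))))  # -> "re-engagement campaign"
```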

I provided examples of approaches that invite people to put down the maps so that they might inhabit the world the maps are meant to represent, and then be able to more meaningfully engage in changing the path we’re on.

We might do this by:

  • complicating our over-reliance on experts and expertise in making sense of what is going on
  • encouraging polyphony – producing and sharing as many potential responses as possible, so that others might occupy those responses and draw their own conclusions
  • celebrating personal, vernacular ideologies that contest those put in place by institutions – academic, corporate, or governmental – which often benefit directly from the existing discourse around surveillance and algorithmic culture

I shared several projects that are either in the world or will be shortly. All of these projects are explicitly and intentionally partial and in a continuous process of opening and re-opening.

I don’t think we should be combating systems of abstraction with other forms of abstraction. When we do, only the experts get to take part. I hope we can return these issues and these decisions to individuals and communities. Art becomes one way to generate the raw materials necessary for these processes of sense-making. Moreover, we need to be OK with the fact that much of what emerges may well be unreadable to us.

Local acts of resistance will necessarily be embedded in local forms of knowledge. To insist that responses in Malawi and Beijing be legible to researchers in Ottawa or Berlin is to replicate patterns of enforced abstraction. The drive to make the world legible is what got us into this mess. A point of view requires a point to look out from, and where I’m looking out from differs greatly from those in Beijing, in Berlin, in Malawi, in Toronto. Through an increased emphasis on our bodies, our landscapes, and our stories of movement and migration, we might invite others to inhabit this shared terrain of governance and contribute to better uses for the technologies we make.

Algorithmic systems are already creating culture. A menu on Netflix demonstrates what happens when a computer system analyzes what is already known to be popular and then slightly varies that pattern (sometimes very, very slightly). The product is a predictably marketable version of something familiar.
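
As a deliberately toy illustration of that logic – all names hypothetical, with no resemblance to Netflix’s actual systems or to any real recommender – the pattern reduces to something like this:

```python
# A toy sketch: take the proven hit, then vary it only slightly.
# The catalogue, scores, and variations are all invented for illustration.
import random

popularity = {
    "gritty detective drama": 0.91,   # title -> known popularity
    "baking competition": 0.87,
    "experimental essay film": 0.12,
}

variations = ["nordic", "teen", "celebrity", "true-crime"]


def greenlight(scores: dict) -> str:
    # Find what is already known to be popular...
    proven = max(scores, key=scores.get)
    # ...then apply a slight (sometimes very, very slight) variation.
    twist = random.choice(variations)
    return f"{twist} {proven}"


print(greenlight(popularity))  # e.g. "nordic gritty detective drama"
```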

AI-generated images are visually remarkable, but saying so perhaps puts too much emphasis on the technology and not enough on the art. The images don’t “say” anything, as they are not the product of any particular point of view. In the absence of something aesthetically interesting to say, the fallback becomes didactic attacks on the ethics of the technology being deployed in art’s name.

AI aesthetics are defined by the assumptions that capitalism makes about art. What if we invited thousands to bring their own assumptions to the table? What if we introduced new beliefs at the centre of how these technologies are used?
