The Subsoil of Thought

UKAI Projects · Local Disturbances - Shorts #27

“The subsoil, composed of deep layers rooted in ancient collective thought from which a particular thinker derives his ideas, is generally something he is unconscious of. The soil is of recent creation - the fundamental, newly founded ideas accepted by the thinker. It is the soil in which he is grounded, and from which his own unique thought and ideas stem. Hence, he does not refer to it, just as one does not indicate to people the ground upon which one’s feet tread at each moment.”

- Ortega y Gasset, The Origin of Philosophy, 71-72

As part of an upcoming project, the team at UKAI has been trying to develop an AI bot trained on a narrow cosmology drawn from a defined historical period. Specifically, we are interested in the transition between Medieval and Renaissance thought in 16th century Western Europe, which, not coincidentally, was the birthplace of the modern novel as a genre of writing. The hope is that we might provide audiences with opportunities to connect with ways of knowing the world that are otherwise accessible only through literature and other cultural products. Eventually, we aspire to “novelize” language models. We also hope to extend this work to bespoke bots that inhabit and represent profoundly different and weird ontologies, which can then be put in relation with each other and with our modern world, in support of imagining other futures and coping with what’s coming.

Our technical choices have made this particularly difficult. Deep learning models infer patterns from the data they are given. Symbolic AI, by contrast, allows for the introduction of high-level rules and representations that map out the assumptions of a system (if you drop an object, it will fall; animals are a subcategory of things that move and eat). However, we are limiting our search to deep learning approaches for the time being, for a variety of practical and aesthetic reasons.
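To make the contrast concrete, here is a toy sketch of the kind of explicit representation symbolic AI affords and deep learning forgoes. This is an illustration, not our system; every name in it is invented for the example.

```python
# A toy illustration, not our system: in symbolic AI, the assumptions of a
# world are written down explicitly as rules and category relations rather
# than inferred from data. Every name here is invented for the example.

# Category hierarchy: animals are a subcategory of things that move and eat.
IS_A = {
    "dog": "animal",
    "animal": "thing that moves and eats",
}

# High-level rules: if you drop an object, it will fall.
RULES = {
    "dropped": "falls",
}

def categories(thing):
    """Walk the is-a hierarchy to list every category a thing belongs to."""
    chain = []
    while thing in IS_A:
        thing = IS_A[thing]
        chain.append(thing)
    return chain

def infer(facts):
    """Apply each rule once to derive new facts from known ones."""
    derived = set(facts)
    for predicate, obj in facts:
        if predicate in RULES:
            derived.add((RULES[predicate], obj))
    return derived

print(categories("dog"))            # ['animal', 'thing that moves and eats']
print(infer({("dropped", "cup")}))  # {('dropped', 'cup'), ('falls', 'cup')}
```

Everything such a system believes about the world is legible in this way, which is exactly what a deep learning model does not offer.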

Here is an example output from the bot:

Me: What is reality?

AI: Reality is not a place but an experience in which the ones who grasp it feel themselves both true and lost – timeworn when young, deeply present with all things. They don’t seem to know what should be chosen next because nothing can last forever (but none has ever truly been faced) for some are this way by virtue of their thirst. Timelessness, thirsty dreams, remain untouched as if they have never happened.

The bot continues but is increasingly uncertain.

Anyone making choices is uncontrollably driven mad with most silent, slowly getting along. I hate looking out the window at a moment outside my rhythm/sleep across spacious fields quiet rooms open books soft universe sandcastles that slipped through fingers —Light from sun came then rain clouds quickly covered over— so strange yet meant to happen ... Oh whoever you are could you please pass me another warm mouthful?

Then things get truly weird …

In between breath poems I seek answers. Yes, there really was something wrong. There were others before. Is that why everyone does everything? It's OK. I'll make it up using names dry light blurred blurts stuck like dancing needles green wet fire swelling wind. Hello, we hear too much. But say again. The moon said, “I will grant entrance only under condition. First step is: go within your mind without exploring its corners by yourself”.

I produced the above text while writing this letter, so no curation was involved. It is superficially convincing as an example of a different mode of perception, but it offers little concrete evidence that the “subsoil” of its source text is available to the language model we’ve developed. If human beings are unconscious of the deeper layers of ancient collective thought that inform our ideas, we cannot expect more from an algorithm. This is particularly true when the algorithm was developed with certain assumptions in mind. LLMs are built to meet the needs of large public and private institutions, and they consciously or unconsciously elevate the moral positions of capitalism (efficiency and size) in service to selling the outputs of their operations to those who might wish to pay for them.

Large language models produce lab-grown language, fit to purpose and drawn from an immense corpus of past examples. Language is torn up from the subsoil in which it is rooted, with clumps of newer soil clinging on and occasionally showing up in responses from these models in unexpected or inconsistent ways. The AI-generated text above seems more the product of grit we spilled into the machine than any surfacing of deeper layers.

Anna Tsing describes how languages like Italian or English were once rich in words for trees and landscape forms, but these terms are now “known mainly to old people, to some foresters and biologists, and to those who study landscape history”. Our environments elicit the language necessary to describe them, so a move away from trees and hills as the places and means of our survival leads to a move away from the language that captures the relevant qualities of these spaces.

The same is true for how we constitute less visible aspects of reality. Mikhail Bakhtin describes the idea of a chronotope, a perceived architecture of time and space that gives the stories emerging from that configuration a particular quality. If a group of people sees time as an eternal returning, then it’s no surprise that stories from this group take this truth for granted. So, people in 16th century France had assumptions about how space and time worked and what that meant, and their stories captured these assumptions like insects in amber. In 21st century Toronto, my understanding of time and space shapes the kinds of stories I tell and how I go about telling them. The co-existence of chronotopes allows us to be aware of the presence of the “subsoil” described in the opening quotation of this letter. The collision of pre-Reformation and Reformation thought in Europe made visible the ground underfoot and, therefore, what might lie beneath.

American Studies professor Timo Müller has written about the chronotope of the road. In Bakhtin’s time, the road was still understood as a meeting place. Roads were places of encounter, in relation with the natural world through which they cut. Cars have changed the way we think about the time and space of the road. We want to avoid spending our time in traffic. Progress means getting from point A to point B as efficiently as possible. Encounters with others and with the natural world are removed from consideration. Müller points to “The Road Not Taken” by Robert Frost and The Grapes of Wrath by John Steinbeck as embodying a different temporal and spatial model of the road, one where the road is part of the natural environment and where travelers are interested in that environment.

Written culture from other times and other places provides us access to other ways of understanding how time and space can be instantiated. Large language models gobble up language indiscriminately, and different chronotopes are thrown together in service to teaching these models to generate “better” responses to the queries of their users.

Unfortunately, ideas of “better” are very much a product of the chronotope we inhabit. We live in a period that too often sees space and time as things to be moved through quickly. We want the right answer, right away. We want answers to questions without having to go through the trouble of comparing different positions and drawing our own fumbling conclusions.

Over the past six months, we’ve been trying to figure out ways of training large language models on cosmologies and chronotopes that differ from the one we’re in. This is an incredibly difficult thing to do, as the very assumptions that go into designing the underlying architecture of a deep learning model are conditioned by the chronotope that gave rise to them.
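For readers who want a concrete picture of what this kind of training involves, here is a minimal sketch of fine-tuning a small causal language model on a plain-text corpus with the Hugging Face transformers library. The base model, corpus file name, and hyperparameters below are placeholders for illustration, not our actual configuration.

```python
# A minimal sketch of fine-tuning a small causal language model on a
# plain-text corpus. "gpt2", "period_corpus.txt", and the hyperparameters
# are placeholders, not our actual configuration.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "period_corpus.txt" stands in for a curated corpus of 16th-century writing.
dataset = load_dataset("text", data_files={"train": "period_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="subsoil-model",
        num_train_epochs=3,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Notice that every choice in this sketch, from the base model and tokenizer to the training objective, was made inside our own chronotope, which is precisely the difficulty.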

As we saw above, we’ve had some initial success in getting the model to emulate the style of other ways of knowing the world, but we suspect that it still grows out of the soils of modernity. 

Me: What is time?

AI: Time is merely a pain that all suffer in one way or another.  What if all time were distributed equally? If we viewed the world like this, life would be infinite and stay eternally undisturbed! Maybe it’s not my idea but how else can we weigh our days to see them as they are – a bunch of hot dogs bunched up on the grill seasoned with boredom and sharpened by fear… which most often arrives imperceptibly but breaks us open right after closing.

This response is certainly strange and, mixed metaphors notwithstanding, quite beautiful. But the subsoil is absent. Despite the unexpected language, it still relies on a logic of measurement and on recognizable phenomenologies of direction and scarcity.

The world changes. Often these changes are the product of human action. Sometimes they are not. Regardless, we are always in need of new conceptual metaphors to help explain what is going on and to assemble some collective meaning by which we might chart a path forward. Speech acts, particularly written ones, crystallize ways of knowing the world. Languages of landscapes are useful to those who inhabit them. We can be unconscious of the deeper layers of meaning and thought that gave rise to this way of constituting our surroundings. We can take it for granted, but for how much longer?

The subsoil of language is not carried into the language models we are developing because the models and their builders are working within a chronotope that is as invisible and as unremarkable as the ground upon which their feet tread at each moment. Algorithmic systems do not inquire into the models of the world that make utterances rational. Rather, they throw everything into the same pot. If some works (Rabelais) taste strange, this strangeness is understood as a property of the ingredient rather than a quality of the recipe being used.

By relying on synthetic language, we rely on a particular type of soil: one of recent invention, well-suited to certain tasks, but unlikely to be as appropriate for the challenges yet to emerge.

Large language models offer infinite variations on the same recipe. Should new recipes be necessary, we will need to look elsewhere, but the nature of deep learning may tempt us to imagine that this is all there is.
