The Artificial Beholder and Our Digital Dreams 

An exploration of early-stage machine learning, visual processing, feedback loops and the nature of consciousness

A version of this research project was published in Anthways, Goldsmiths’ Anthropology Post-Graduate Research Journal in 2023. DOI: 10.5281/zenodo.8352718

Early in my academic career, I attempted to use my organic neural network (my brain) to synthesise a piece of writing using the following parameters:


1) Anthropological essay
2) On the topic of generative art, cybernetics, dreams and intelligence

My first attempt went as follows:

In March of 2016, Google DeepMind’s AlphaGo beat top Go player Lee Sedol. In the wake of this paradigm shift in the growing capabilities of Artificial Intelligence, new and novel uses of synthetic media began to appear. In the midst of a year dense with political upheaval, we gazed upon these odd creations of early machine learning and recognised something not entirely unexpected and yet nevertheless uncanny: a distorted reflection of our own subjective experiences.

Though useful, I ultimately rejected this iteration. I can see the problem at hand: my long-term training data has consisted of creative fiction and opinion pieces, and my newer input data has consisted largely of popular science articles and technical papers on cognitive function, whereas the output required for this task is anthropological and academic in style and substance. I attempt another iteration:

Much of the synthetic media that failed to succeed as art sparked a cultural discourse around its very shortcomings. These strange mishaps, experimentations and occasional viral artefacts generated an ongoing human machinic mimesis, asking us to examine our own biases and cognitions, including some of the perennial mysteries of the mind, such as dreams, memory, hallucinations and ultimately what it means to see, think, tell stories, and be human.

Though still not at the level of stylistic and qualitative excellence I had hoped for, it is perhaps closer. In an attempt to grow the dataset for this essay, I decide to incorporate a second neural network, this one artificial in nature, provided by InferKit.com. It writes:

If we can use artificial intelligence to see ourselves in a different light and show us that there’s always more to life than our own experiences, maybe we can finally be the first generation of humans to be truly holistic. One issue that persists, however, is that one must introduce the term ‘intelligence’ at least loosely to differentiate between man and machine. Bacon was smart – or at least he thought he was. He believed a) that man was the only form of life with consciousness and b) that consciousness was essential to understanding and transcending life. Is it possible he had a self-fulfilling prophecy?

While uncannily relevant on a surface level and oddly eloquent in delivery, this iteration ultimately also fails on account of being factually nonsensical. We can see that my own organic neural network is attempting to build statements that can, if cited properly, be traced in origin; a lineage of thought, publication, consensus, refutation and scholarship stretching backward into the epistemological shallows of history. I lack the algorithmic brute force to analyse all of these data points, but I have the contextual cultural and institutional knowledge, gained through many years of iterative supervised learning in general knowledge, to understand in context whether something is nonsense or not. On a good day I can tentatively say that I am both conscious and intelligent, but if that changed, would anyone be able to prove it?

The notions of intelligence and consciousness are related, but it has long been established that one does not inevitably lead to the other. The end goal in developing intelligent machines is not to create consciousness, and yet the question remains: would it arise on its own? Turing sidestepped divisive philosophies of mind and intelligence, deciding instead that the problem of other minds can only be solved relationally; if a human intelligence cannot determine whether another entity is human or machine, via an imitation game, the Turing test of intelligence has been passed. John McCarthy elaborated on Turing’s approach, defining artificial intelligence in terms of machines that can perform tasks characteristic of human intelligence (McCarthy et al., 1955). Many hold the view that our brains already function in much the same ways that computers do; the mathematician John von Neumann claimed that the human nervous system is ‘prima facie digital’ (1958, p.44), while cognitive neuroscientist Jack Gallant states that ‘the mammalian cerebral cortex is a multi-scale biological computing device consisting of billions of neurons, arranged in layered, local circuits’ (2020). AI researcher Kate Crawford refutes the notion that with enough training general, human-equivalent intelligence can be achieved ‘without addressing the fundamental ways in which humans are embodied, relational, and set within wider ecologies’ (2021, pp.4-5). Various theories of mind explored by anthropologists outline cultural differences in how we perceive our sense of self and the inner workings of others, such as Charles Taylor’s notion of ‘porosity’ (2007), relating to the way individuals distinguish between mind-body boundaries and the efficacy of the supernatural. From his fieldwork with the Urapmin, Joel Robbins likewise speaks of ‘opacity’: our ability, or lack thereof, to infer the intentions and thoughts of other minds (2004). For a culture that refuses to make any inferences regarding the thoughts or intentions of others, the Turing test may hold no relevance.

Even so, it is our main Western scaffolding on the matter, and I propose that its relational framework of interactive feedback ties deeply into infrastructures of consciousness. Feedback loops are the foundation of cybernetics and systems theory, first put forward by Norbert Wiener in 1948, which set out to ‘find the common elements in the functioning of automatic machines and of the human nervous system’ (Wiener, 1948). Cybernetics has long been associated with the organisation of minds, societies, ecologies and machines. Gregory Bateson talks about the ways cognition and environment are entangled in these feedback loops, stating that ‘we believe that consciousness has feedback into the remainder of mind and so an effect upon action’ (Bateson, 1972, p.295). Professor of cognitive philosophy Andy Clark speaks of the importance of the mind-body ‘scaffolding’ problem, writing that ‘the looping interactions between material brains, material bodies, and complex cultural and technological environments’ are integral to the notion of who we are, and that we ‘create these supportive environments, but they create us too’ (2004, p.11). Likewise, scholar of cognitive science Douglas Hofstadter proposes the idea of ‘strange loops’ as the architecture underlying the ongoing process by which our minds emerge as an “I” at the centre of our experience as humans (2000). The very notion of a mind truly comprehending itself in this sense is likewise brought into question by Donald Hoffman, who states that ‘no conscious agent can describe itself completely. The very attempt adds more experiences to the agent…so on in a vicious loop of incompleteness’ (2019, p.194).

I theorise that narrative creation might be at least one junction at which the relational, experiential and cybernetic approaches to consciousness meet: the stories we tell to others and the stories we tell ourselves, about ourselves. When early machine intelligences undertake this same task, the results often seem to reflect our own experiences back at us in ways both uncanny and absurd, yet deeply familiar. In 2016, filmmaker Oscar Sharp and NYU-based AI researcher Ross Goodwin submitted the short film ‘Sunspring’ to the Sci-Fi London Film Festival. Though filmed and acted by humans, Sunspring was written by an LSTM (Long Short-Term Memory) recurrent neural network, a type of deep learning network often used for processing and generating text. The network was trained on a curated archive of science fiction scripts that Goodwin could find online, consisting largely of movies from the ’80s and ’90s and TV series like The X-Files and Star Trek (Newitz, 2021). At the centre of the film are three characters who engage in an emotional tête-à-tête that feels familiar yet doesn’t actually make any sense. While the sentences of dialogue seem coherent enough on their own, they do not string together into anything resembling an actual narrative or through-line. Commenters on YouTube have compared the film to ‘watching movies as a small kid, before I really knew what they were saying but still enjoying the characters…’ or the way it sounds when ‘you watch a movie from a language you’re just learning and don’t understand…’. Others have compared watching Sunspring to the experience of replaying a dream:

“this feels exactly like my dreams sometimes. It got me to thinking about how through our day, we take in all of this information in images, text, sounds…we then shut down at the end of it all and our brain tries to make sense of the data we collected, or at least we recall some of that data in random spurts. This is what makes up our dreams….raw data…that we process at random. I think this is what is happening with this AI. We push in so much data that the end result feels like a dream…its odd and off-putting in so many ways, but then it also feels somewhat natural and clear.”

(User Nate Fin, YouTube)

Comments like these show how intuitively viewers correlate the ways that human and machine learning overlap. Why we sleep, never mind why we dream, is still elusive to science, leaving dreams, much like consciousness itself, in the realm of the subjective. Dreams have generated many mythologies; ancient societies considered the dream a vehicle for divine inspiration. Joseph Campbell writes of dreams as ‘a personal experience of that deep, dark ground that is the support of our conscious lives, and a myth is the society’s dream. The myth is the public dream and the dream is the private myth’ (2012, p.48). Speaking in general terms, modern science tells us that during REM sleep, commonly associated with the dream state, our motor functions and the hippocampus shut down, while other brain regions, especially the amygdala, light up, correlating with many subjective, reported dream experiences that are sensorially abundant but logically incoherent (van der Linden, 2011). But what is happening when something like Sunspring is being constructed by a machine intelligence? Natural Language Generation (NLG) networks like GPT-2 (an autoregressive language model) or the LSTM used to create Sunspring work in different ways, but generally speaking they draw on vast datasets, either curated and labelled for supervised learning or scraped from the internet for unsupervised learning. Two things create this disjointed, dreamlike effect: a lack of memory in the algorithm and a limited dataset from which to draw knowledge. As AI researcher Janelle Shane elaborates:

“Dreams are notoriously incoherent, switching settings and mood and even characters midstream. These neural-net dreams, however, don’t maintain coherence for more than a sentence or so — sometimes considerably less. Characters who are never introduced are referred to as if they had been there all along. The whole dream forgets where it is. Individual phrases may make sense, and the rhythm of the words sounds okay if you don’t pay attention to what’s going on. Matching the surface qualities of human speech while lacking any deeper meaning is a hallmark of neural-net-generated text” (2020, p.50-51).
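
Shane’s point about memory can be made concrete with a deliberately crude sketch. What follows is not the LSTM behind Sunspring but a far simpler character-level n-gram sampler in Python; the corpus file name is a hypothetical stand-in for an archive of scripts. Because the model conditions only on the last few characters, it reproduces in miniature exactly the failure Shane describes: locally plausible phrases with no narrative memory.

```python
import random
from collections import defaultdict

def build_model(text, n=4):
    """Map every n-character context to the characters observed right after it."""
    model = defaultdict(list)
    for i in range(len(text) - n):
        model[text[i:i + n]].append(text[i + n])
    return model

def generate(model, seed, n=4, length=300):
    """Extend the seed one character at a time, 'remembering' only the last n characters."""
    out = seed
    for _ in range(length):
        followers = model.get(out[-n:])
        if not followers:  # dead end: this context never appeared in the training text
            break
        out += random.choice(followers)
    return out

corpus = open("scripts.txt").read()  # hypothetical corpus of sci-fi screenplays
model = build_model(corpus)
print(generate(model, seed=corpus[:4]))
```

Run on such a corpus, the output reads like Sunspring’s dialogue: each short window of text is statistically plausible, but nothing binds one sentence to the next.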

The resulting absurdist and often unintentionally humorous creations of AI-driven text generation have become culturally familiar enough to generate their own memes, as parodied by comedians like Keaton Patti, who has repeatedly claimed to have fed an unspecified AI 1,000 hours of a given genre of television to create new and novel iterations, such as Hallmark Christmas movies (Fig. 1).

Fig. 1. Keaton Patti, Twitter, 2018.

In truth, these pieces are written by Patti himself, imitating a rudimentary AI that is trying to imitate humans: an apt example of cybernetic feedback loops creating connections between individuals, computer systems and society, collaborating to create a new genre of humour that criticises both the limited capacities of so-called intelligent machines and the limiting narrative conventions of our own, human-generated media. These storytelling cliches can be considered self-replicating iterations of larger cultural archetypes, part of our so far unique myth-making abilities (see Campbell, 1968). We internalise these archetypes by simple virtue of having engaged with them all our lives through social absorption, which leads to an inevitable question: is how we write and tell stories truly all that different from how machine learning systems compile original content from our collective digital landscape? Data artist Kim Albrecht’s project ‘Visualizing Memes: Culturegraphy’ showcases the way that humans use data to create new artefacts by means of cultural references, influences and connections. Illustrated in a dense, interactive line-graph framework, the project maps the myriad references used in the making of an artefact and all the ways the artefact has in turn been referenced by others in the creation of further artefacts. I suggest that these social, cultural and linguistic infrastructures tie in deeply with all aspects of how we construct our conscious experiences, including the ways we see and understand our environments.

Beyond the objective nature of relational and cultural intelligence we are left with the subjective: our inner experience of the world around us, a situation deeply shaped by our embodiment, sense perception and memory, the ‘qualia’ at the heart of what David Chalmers calls the hard problem of consciousness (1995). Aligned with the assumption that consciousness is an emergent property culminating through layers of embodied cognitive evolutionary adaptation to our environments, Hoffman argues that our sense perception, and in turn our memory, evolved in service of our fitness rather than in service of modelling any objective ‘truth’, stating that ‘perception may seem effortless, but in fact it requires considerable energy…so evolution has shaped our senses to be misers’ (2019, p.XVII), likening our day-to-day experience of reality to a computer’s desktop rather than the impenetrable cascade of binary code underlying its operation. Aldous Huxley espoused this same theory many decades prior, writing that:

“To make biological survival possible, Mind at Large has to be funnelled through the reducing valve of the brain and nervous system. What comes out at the other end is a measly trickle of the kind of consciousness which will help us to stay alive on the surface of this particular planet. To formulate and express the contents of this reduced awareness, man has invented and endlessly elaborated those symbol-systems and implicit philosophies which we call languages” (Huxley, 1954, p.8).

Breaking down these structures of understanding, as happens in cases of neurological damage or the use of mind-altering substances, can call into question the very notion of how our senses shape our realities. Integrating the rise in sheer data and the limits of human perception, Hito Steyerl expands on this idea, writing that ‘contemporary perception is machinic to a large degree. The spectrum of human vision only covers a tiny part of it. Electric charges, radio waves, light pulses encoded by machines for machines are zipping by at slightly subluminal speed’ (2017, p.38). Image recognition and categorisation remain a stumbling block for machine learning algorithms for various reasons (Spratt, 2017), including flaws in datasets and poorly set parameters. Notably, one of the most effective methods of machine-driven media synthesis utilises the feedback loop. Generative Adversarial Networks were first proposed by Ian Goodfellow and colleagues in 2014 and first radically utilised by Alec Radford et al in 2015 to generate realistic renderings of faces and bedrooms (Fig. 2) using convolutional neural networks, in which a generator creates candidate samples and a discriminator determines which are real: a form of internal Turing test.

Fig. 2. Radford et al, 2016.
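
The adversarial feedback loop itself is compact enough to sketch in code. Below is a minimal, hypothetical PyTorch version of the generator-discriminator game, not Radford et al’s actual DCGAN: the layer sizes, learning rates and the `real_batches` iterator of flattened training images are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

# Generator: maps random noise to a synthetic sample (here, a flattened 28x28 image).
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
# Discriminator: scores a sample as real (1) or generated (0).
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

loss = nn.BCEWithLogitsLoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(1000):
    real = next(real_batches)  # hypothetical iterator over flattened training images
    b = real.size(0)

    # 1) Discriminator turn: learn to tell real samples from the generator's fakes.
    fake = G(torch.randn(b, 64)).detach()  # detach: don't update G on this pass
    d_loss = loss(D(real), torch.ones(b, 1)) + loss(D(fake), torch.zeros(b, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2) Generator turn: learn to produce fakes the discriminator labels as real.
    fake = G(torch.randn(b, 64))
    g_loss = loss(D(fake), torch.ones(b, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

Each pass through the loop is one round of the internal Turing test: the discriminator learns to catch the forgeries, and the generator learns to evade it.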

Radford’s results were revolutionary at the time of publication but flawed, exemplifying what quickly became associated with synthetic imagery: distortions and hazy boundaries between space and object, a sort of pixelated, surrealist impressionism. Could the inherent shortcomings of machine vision help us grasp a raw form of reality that lies beyond our cultural and systems-based scaffolding? In early 2019 an image likely created on Artbreeder, a publicly available platform that uses BigGAN and StyleGAN models to synthesise visuals from pre-existing datasets, was posted on Twitter with the caption ‘I bet you can’t name one thing in this photo’. The image then appeared on Reddit with the description ‘This picture is designed to give the viewer the simulated experience of having a stroke (particularly in the occipital lobe of the cerebral cortex, where visual perception occurs.) Everything looks hauntingly familiar but you just can’t quite recognise anything’ (Fig. 3).

Fig. 3. Reddit, 2019.

It has been theorised that this image disturbs the viewer because it frustrates some of our most basic cognitive functions, such as distinction-making and part-whole systems, as boundaries between objects lack clear definition (Cabrera, 2021); it is as if someone had been tasked with reconstructing an image from memory without any comprehension of what was actually present in it. Huxley writes about a similar subjective experience while under the influence of mescaline, describing that ‘place and distance cease to be of much interest. The mind does its perceiving in terms of intensity of existence, profundity of significance, relationships within a pattern’ (1954, p.6). He goes on to say how ‘table, chair and desk came together in a composition that was like something by Braque or Juan Gris, a still life recognisably related to the objective world, but rendered without depth, without any attempt at photographic realism’ (1954, p.7). While he speaks largely about the experience of a collapse of visual categorisation, its opposite is more often associated with psychotic or pharmacologically-induced hallucination.

It is commonly said that human brains are pattern-seeking machines, in search of information that will help predict future scenarios more accurately. This process of human minds finding familiar patterns in randomness is referred to as pareidolia, and it has been theorised that if the feedback loop of observation and categorisation is disrupted, runaway errant processes can take place (Bateson, 1972, p.302), such as over-pronounced visual identification (Keshavan & Sudarshan, 2017, pp.178-181). In much the same way, Google’s DeepDream attempts to find patterns in data noise, something that its researchers have termed “deep dreaming” and “inceptionism” (Mordvintsev, 2015). DeepDream’s aim is to find and enhance patterns in existing imagery, creating over-processed visuals, a sort of ‘algorithmic pareidolia’ (ibid). The process starts with a neural network trained to recognise particular categories of image, which is then set loose on unrelated visual data. In essence, if a flower bears a slight resemblance to a dog, the network’s ‘dreaming’ will process that image to make it look more like a dog, which in turn makes the network see the dog even more strongly on the next pass, and so on, until everything takes on the appearance of dogs. The imagery from DeepDream (Fig. 4) has been routinely compared to the visual distortions experienced with drugs such as LSD and psilocybin and to the psychedelic art created by humans attempting to translate these experiences into visual form (Fig. 5). It has also been compared to the paintings of Louis Wain, best known for his increasingly abstracted paintings of cats that seemed to mirror his losing battle with schizophrenia (Fig. 6).

Fig. 4. DeepDream dog visualisation, 2015.

Fig. 5. Psychedelic painting by Alex Grey.

Fig. 6. Progression of schizophrenia in Louis Wain’s paintings.
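
In code, this ‘dreaming’ reduces to gradient ascent on a chosen layer’s activations. The sketch below follows the published inceptionism idea using torchvision’s pretrained GoogLeNet; the choice of layer (inception4c), step size, iteration count and input file name are assumptions for illustration rather than the original DeepDream settings.

```python
import torch
from PIL import Image
from torchvision import models, transforms

model = models.googlenet(weights="DEFAULT").eval()  # Inception-style network
grabbed = {}
# Hook an intermediate layer; whatever patterns it detects will be amplified.
model.inception4c.register_forward_hook(lambda m, i, o: grabbed.update(act=o))

img = transforms.ToTensor()(Image.open("flower.jpg").convert("RGB"))[None]  # hypothetical input
img.requires_grad_(True)

for _ in range(20):
    model(img)
    # Gradient ASCENT: nudge the pixels so the layer's activations grow stronger,
    # making faint pattern-matches (a dog-ish texture, say) more dog-like each pass.
    grabbed["act"].norm().backward()
    img.data += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
    img.grad = None

transforms.ToPILImage()(img.detach().clamp(0, 1)[0]).save("dream.jpg")
```

Whatever faint pattern the layer responds to in the input is amplified on every pass: the runaway feedback loop of observation and categorisation described above.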

These extremes of vision and systems breakdown, between missed patterns and pattern overstimulation, reiterate the structures that conspire to create reality and ourselves: feedback loops, pattern recognition and meaning making. Robert Lanza and Bob Berman go a step further, theorising that reality itself would collapse without consciousness. Invoking Heisenberg’s uncertainty principle (‘A path comes into existence only when you observe it’), they state that ‘there is neither time nor motion without life. Reality is not there with definite properties waiting to be discovered but actually comes into being depending upon the actions of the observer’ (Lanza & Berman, 2010, p.101). This implies far deeper repercussions around the growing complexity, dare it be said intelligence, of machine learning systems and the alternative, emerging ways of sensing, processing and operating in the world that they represent. While DeepMind’s AlphaGo had beaten a world-class Go player based on studying an immense database of existing gameplay and human-supervised learning, its successor, AlphaZero, was left to teach itself various board games with no human intervention. In just three days it succeeded, generating its own knowledge, free of the constraints of human logic, and defeating its predecessor 100 games to 0 (Ouellette, 2018). The existential risks and opportunities inherent in such a result can be unnerving. But AI systems, at least for the time being, function as tools: an opportunity for us not only to better understand our own minds but to step outside of them and consider radical alterities. The InferKit neural network I’d previously engaged for this essay tells me that ‘as a philosopher, I should be objecting to these arguments, but I’m not. I admit to worrying about a brain that can think in such a way, because I can’t fathom it being smart enough to build its own skyscraper.’

As David Beech and Andy Clark argue, we have always been cyborgs. ‘The human is not a fixed, natural and eternal configuration of properties, qualities and capacities but has itself been perpetually reformed and reconstituted by its own products including the development of technologies’ (Beech, 2019, p.64). Beech goes on to reference the Dada movement, which arose in the context of an increasingly industrialised and mechanised Europe, disfigured and disoriented by technologies of destruction during the First World War.

Voltaic arc of these two nerves that don’t touch
Near the heart
We note the black shivers under a lens
- Tristan Tzara

He looks at me, and then he throws me out of his eyes.
- Sunspring

Tzara used methods such as the cut-up technique: isolating and rearranging text extracted from newspapers and novels into nonsensical and occasionally oddly prescient iterations (Beech, 2019). The method was adopted by William S. Burroughs in the ’60s and David Bowie in the ’70s, who said of it that ‘if you put three or four dissociative ideas together and create awkward relationships with them, the unconscious intelligence that comes from those pairings is really quite startling sometimes’ (Bowie, 2016). Likewise, the free association and integration of old and new memories we experience in dream states have had profound effects on the creative process of artists and scientists throughout the ages, who have found novel solutions to long-ruminated problems through dreams. In a 1926 letter, Margaret Mead recounts a particularly striking dream in which fellow anthropologist Franz Boas is speaking to a circle of colleagues about a mass of brilliant blue jelly he had created. Conflict ensues in the group over what its use could be. ‘I took some of the astonishingly blue beauty in my hand,’ Mead recounts, ‘and felt with a great thrill that it was living matter. I said “Why it’s life — and that’s enough” — and he looked so pleased that I had found the answer — and said yes “It’s life and that is wonder enough”’ (Mead, 2006, p.327).

Bibliography

Albrecht, K. n.d., ‘Visualizing Memes: Culturegraphy’, culturegraphy.kimalbrecht.com, http://culturegraphy.kimalbrecht.com/.

Bateson, G. 1972, Steps to an Ecology of Mind, University Of Chicago Press, Chicago.

Beech, D. 2019, Art and Postcapitalism: Aesthetic Labour, Automation and Value Production, Pluto Press.

Cabrera, D. 2021, ‘The DSRP of Creepy AI Images’, blog.cabreraresearch.org, March 31, 2021, viewed 10 August 2023, https://blog.cabreraresearch.org/the-dsrp-of-creepy-ai-images.

Campbell, J. 1968, The Hero with a Thousand Faces, Pantheon Books, New York.

Campbell, J., Moyers, B.D., & Flowers, B.S. 2012, The Power of Myth, Turtleback Books.

Jung, C.G., Fordham, M., & Read, H. 1993, The Collected Works of C.G. Jung, Vol. 16: The Practice of Psychotherapy, Routledge & Paul, London.

Chalmers, D. 1995, ‘Facing up to the problem of consciousness’, Journal of Consciousness Studies, vol. 2, no. 3, pp. 200-219.

Clark, A. 2010, Natural-Born Cyborgs : Minds, Technologies, and the Future of Human Intelligence, Oxford University Press, Oxford.

Crawford, K. 2021, Atlas of AI : Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press.

Freud, S. 1997, Interpretation of Dreams, Wordsworth Editions.

Gallant, J. 2020, ‘Research – the Gallant Lab @ UCB’, The Gallant Lab at UC Berkeley, viewed 10 August 2023, https://gallantlab.org/brain-decoding/.

Steyerl, H. 2017, Duty Free Art: Art in the Age of Planetary Civil War, Verso, London.

Hoffman, D. 2019, The Case Against Reality: Why Evolution Hid the Truth from Our Eyes, Penguin.

Hofstadter, D.R. 2000, Gödel, Escher, Bach: An Eternal Golden Braid, Penguin, London.

Huxley, A. 1954, The Doors of Perception, Chatto And Windus, London.

BBC News 2016, ‘How David Bowie Used “Cut Ups” to Create Lyrics’, YouTube video, January 11, 2016, viewed 10 August 2023, https://www.youtube.com/watch?v=6nlW4EbxTD8.

Keshavan, M.S., & Sudarshan, M. 2017, ‘Deep Dreaming, Aberrant Salience and Psychosis: Connecting the Dots by Artificial Neural Networks’, Schizophrenia Research, vol. 188, pp. 178-181.

LaFrance, A. 2015, ‘If You Give a Robot Acid’, The Atlantic, September 3, 2015, viewed 10 August 2023, https://www.theatlantic.com/technology/archive/2015/09/robots-hallucinate-dream/403498/.

Lanza, R., & Berman, B. 2010, Biocentrism: How Life and Consciousness are the Keys to Understanding the True Nature of the Universe, BenBella Books.

van der Linden, S. 2011, ‘The Science behind Dreaming’, Scientific American, July 26, 2011, viewed 10 August 2023, https://www.scientificamerican.com/article/the-science-behind-dreaming/.

Mead, M., Caffrey, M.M., & Francis, P.A. 2006, To Cherish the Life of the World: Selected Letters of Margaret Mead, Basic Books, New York.

McCarthy, J., Minsky, M.L., Rochester, N., & Shannon, C.E. 2006, ‘A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955’, AI Magazine, vol. 27, no. 4, p. 12.

Mordvintsev, A. 2015, ‘Research Blog: DeepDream – a Code Example for Visualizing Neural Networks’, web.archive.org, July 8, 2015, viewed 10 August 2023, https://web.archive.org/web/20150708233542/http://googleresearch.blogspot.co.uk/2015/07/deepdream-code-example-for-visualizing.html.

Mordvintsev, A. 2015, ‘Research Blog: Inceptionism: Going Deeper into Neural Networks’, web.archive.org, July 3, 2015, viewed 10 August 2023, https://web.archive.org/web/20150703064823/http://googleresearch.blogspot.co.uk/2015/06/inceptionism-going-deeper-into-neural.html.

Neumann, J.V. 1958, The Computer and the Brain, Yale University, New Haven.

Newitz, A. 2021, ‘Movie Written by Algorithm Turns out to Be Hilarious and Intense’, Ars Technica, May 30, 2021, viewed 10 August 2023, https://arstechnica.com/gaming/2021/05/an-ai-wrote-this-movie-and-its-strangely-moving/.

Ouellette, J. 2018, ‘Move over AlphaGo: AlphaZero Taught Itself to Play Three Different Games’, Ars Technica, December 6, 2018, viewed 10 August 2023, https://arstechnica.com/science/2018/12/move-over-alphago-alphazero-taught-itself-to-play-three-different-games/.

Patti, K. 2020, I Forced a Bot to Write This Book : A.I. Meets B.S., Andrews Mcmeel Publishing, Kansas City.

Radford, A., Metz, L., & Chintala, S. 2015, ‘Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks’, arXiv preprint arXiv:1511.06434.

reddit 2019, ‘This picture is designed to give the viewer the simulated experience of having a stroke (particularly in the occipital lobe of the cerebral cortex, where visual perception occurs.) Everything looks hauntingly familiar but you just can’t quite recognize anything’, r/interestingasfuck, https://www.reddit.com/r/interestingasfuck/comments/bghyv5/this_picture_is_designed_to_give_the_viewer_the/.

Robbins, J. 2004, Becoming Sinners, University of California Press, Berkeley.

Shane, J. 2020, You Look like a Thing and I Love You, Wildfire.

Taylor, C. 2007, A Secular Age, Harvard University Press, Cambridge, Mass.

Sunspring n.d., ‘Sunspring | a Sci-Fi Short Film Starring Thomas Middleditch’, YouTube video, viewed 10 August 2023, https://www.youtube.com/watch?v=LY7x2Ihqjmc&t=2s.

Wiener, N. 1948, ‘Cybernetics’, Scientific American, vol. 179.