by Betti Marenko
Central Saint Martins, University of the Arts London
The 2018 AAAI Spring Symposium Series
Abstract
The issue I explore with this position paper concerns dominant cultural scripts around Artificial Intelligence (AI) and the need to imagine different narratives in light of machine learning’s autonomous performativity. The aim is to offer a philosophical reflection, not only to sidestep narratives of techno-determinism, dystopia and existential risk to mankind, but also to speculate on how to imagine a (more) benevolent AI based on uncertainty and the co-evolution of humans and technology. The paper presents the speculative methodology I call FutureCrafting: a forensic, diagnostic and divinatory method that investigates the possibility of other discourses, equally powerful in building reality, constructing futures and having tangible impact. FutureCrafting is speculation at the juncture of design and philosophy, pivoting around the open-ended figuration of the what if…? It articulates collaboration rather than competition, co-evolution rather than antagonism, and privileges the indeterminate and the imaginative. To conclude, the paper makes reference to the non-human intelligence of the octopus and to how this can inform a more imaginative AI.
Algorithm Narratives
As the cultural object of our present, the algorithm foregrounds a dominant techno-deterministic narrative that portrays computation as an almost mystical notion (Finn 2017) or even as a theocracy (Bogost 2015). In such a narrative, rationalization and logic coexist with deep myth – the ancestral belief in invisible forces. On one hand, we, users/content providers, like to believe that algorithms are efficient, logical, and clean procedures (they are not). On the other, we embrace a faith-based approach, trusting them with the same conviction that ancient seekers had in the murmurings of an oracle.
Algorithms create reality in ways that are both alluring and evident, opaque and controlling. We use them “as pieces of quotidian technical magic” (Finn 2017, 16). We trust them with our many choices: partners, music, books; we are given or denied credit, jobs, insurance; we are fed tailored search results and social media updates. And yet, we hardly understand how they work; indeed, not even the programmers know. The simplistic notion of algorithms as procedural problem-solving entities, i.e. what turns questions into answers (according to Google), no longer suffices. In particular, it cannot account for the uncertainty growing at the core of computation (Parisi 2013, 2017). New narratives are needed that can turn uncertainty into an asset, rather than reducing its ambiguity and providing explanations that rely solely on human-centered models.
AI Speculation
The importance of speculation emerges when we consider that Machine Learning’s (ML) way of working is highly inductive, unlike traditional deductive AI approaches. ML starts from real observable behaviors expressed and captured in the form of data. From here, verifiable models of given behaviors are built; a range of tasks (clustering, classifying, categorizing, matching) is performed; then, similar future behaviors are predicted.
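To make this inductive loop concrete, a minimal sketch in Python follows, assuming scikit-learn and purely synthetic data (all names and values are hypothetical): a model is fitted on observed behaviors, a clustering task is performed, and a new behavior is matched against what has been learned.

# Minimal sketch of the inductive loop described above: observed behaviors
# arrive as data, a model is fitted, and similar future behaviors are
# predicted. All data here are synthetic and the setup is hypothetical.
import numpy as np
from sklearn.cluster import KMeans

# "Real observable behaviors expressed and captured in the form of data":
# 200 synthetic behavior vectors with two features each.
rng = np.random.default_rng(seed=0)
observed_behaviors = rng.normal(size=(200, 2))

# Build a verifiable model of the given behaviors (a clustering task).
model = KMeans(n_clusters=3, n_init=10).fit(observed_behaviors)

# Predict which cluster of known behaviors a new observation resembles.
new_behavior = rng.normal(size=(1, 2))
print(model.predict(new_behavior))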
With ML performing a continuous automatic revision and refinement of models based on a constant supply of fresh data, we enter a meta-digital phase (Parisi 2017), where new levels in the automation of registration, mobilization and communication are taking place. As the operative mode of AI shifts from validation to discovery through inductive data-retrieval and recursive training, at the core of this process we find uncertainty, indeterminacy, and unknowns. When the machine no longer simply searches for information but combines and recombines data to train itself, contingency enters the process and must be accounted for. This has profound implications for current AI narratives, and it must inform how we imagine and conceptualize near-future AI.
Digital theorist Luciana Parisi (2013; 2017) argues that if AI is rooted in uncertainty, then it must be understood as a non-conscious form of cognition, possessing its own non-human way of learning. To clarify: this is not to advocate an overbearing machine rationality antagonistic to humankind. Rather, it is to acknowledge that what machines can do does not necessarily coincide with how they think. What is needed, then, is a speculative critique of ML, inspired by abductive reasoning, that is, the formulation of interrogative hypotheses (such as what if…?), and finely attuned to the contingent, the unpredictable and the uncertain (Marenko 2015). This is speculation in action – FutureCrafting – a method that prioritizes imagination over direct observation, and that aims at capitalizing on the indeterminate. Speculative approaches to design (Dunne and Raby 2014) and the field of ‘design fiction’ (Coles 2016) have shown how to deploy design to suggest alternatives to the existent, ranging from the possible to the implausible, so as to provoke debate, critique and reflection. Though FutureCrafting resonates with (and stems from) similar concerns and is likewise engaged with expanding the remit of what design can do, it places greater emphasis on the theoretical framework that supports its methods. Acknowledging a legacy of philosophical ideas, concepts and discourses is a crucial aspect of FutureCrafting, one that both grounds and propels forward its endeavor. The practice of contesting received notions of technology, inventing new modes of human-machine interaction, and speculating on different futures cannot be disjoined from the risky business of operating at the edge of thinking. Here is where the power of the imagination in seizing alternative possibilities becomes a radical tool for change and acquires political valence. The challenge, then, is: how to exploit the potential of digital uncertainty in ways that feed into new collaborative models of human-machine interaction? (Marenko and Van Allen 2016).
The Robot Does Not Exist
French philosopher of technology Gilbert Simondon’s work is illuminating here (2017). His notion of technogenesis, that is, the evolution of technical objects, is based on the idea of the co-habitation of humans and technology. Technical objects, including algorithms and AI, are always the temporary concrete expression of a spontaneous morphological evolution, one that depends on neither natural processes nor human design alone. Far from evolving in isolation, technical objects are the result of a process where internal parts converge and adapt according to a principle of internal resonance. This process (concretization) describes a coming together of functions by which the object acquires an internal coherence that propels it beyond the intention of its inventor. Even though they are designed and made by human beings, then, technical objects have a life of their own (Schmidgen 2012).
This argument is important for two reasons. First, it provides an epistemological shift that fully integrates technology into culture. The boundary between the natural and the artificial, the animate and the inanimate, the human and the non-human becomes blurred. Put differently, we can say that humans are always already among machines and, more broadly, among everything that is not human. Likewise, technical objects and, more broadly, everything that is not human, are always already among, and co-evolving with, humans. The second implication of Simondon’s technogenesis is that it helps us frame and understand how technical objects, as they evolve, acquire autonomy – a valuable insight for conceptualizing ML and for speculating imaginatively on AI. Indeed, this means something else too: that to talk about ‘artificial’ intelligence is incorrect. There is only one intelligence, constantly morphing and evolving. Perhaps this is the real meaning of what Simondon wrote in 1958: “The robot does not exist”.
Conscious Exotica
But how can we exercise our human imagination to speculate on alternative AI narratives? An interesting viewpoint is presented by computer scientist Murray Shanahan, who poses provocative questions concerning what he calls ‘the space of possible minds’, where humans could encounter radically alien and exotic forms of cognition (2016). By stating that “there’s no reason to suppose that a human’s capacity for consciousness could not be exceeded by some other beings”, he takes the reader on an imaginative journey exploring this possibility.
What matters greatly is the method. In describing his experiment as “fanciful”, Shanahan shines a light on the significance of adopting a speculative frame of inquiry when dealing with AI’s uncharted territories. He positions a number of diverse human and non-human entities on a diagram whose two axes map human-likeness (H-axis) and capacity for consciousness (C-axis). A creature like the octopus, for instance, scores high on the C-axis (it is cognitively sophisticated) but low on the H-axis (it is quite hard to comprehend from our human perspective).
“The most exotic sort of entity would be one that was wholly inscrutable, which is to say it would be beyond the reach of anthropology” (Shanahan 2016). In other words, humans would need to think in radically non-anthropocentric ways, even reappraising what human consciousness is. It may be, continues Shanahan, that a shift is required: from a monolithic notion of consciousness (made of memory, awareness of world and self, capacity for empathy, emotional and cognitive integration) to a disaggregated, more distributed form of consciousness. To successfully speculate on imaginative AI, then, one route is to bypass the need to mimic human biology and to look instead at what non-human intelligences have to offer.
Cephalopod Cognition
Recent research on cephalopods, and the octopus in particular, shows that these creatures may be specialists in distributed control systems (Grasso 2014, Godfrey-Smith 2016). Some types of octopuses, such as the common octopus Octopus vulgaris, possess fewer neurons in the brain than in the peripheral nervous system. With two thirds of its neurons located in the arms, the octopus effectively has two brains. Its neural system is exceptionally decentralized. Its arms are autonomous agents. Thanks to such a decentralized information processing system, the octopus can provide an innovative perspective on neural architecture and efficient distributed cognition (Laschi 2016). The octopus’s brain does not issue top-down commands for every small movement of the arms. While the brain initiates motion, the lower motor centers control the precise neuromuscular activity. Experiments have shown that a severed arm will continue to act, search for food and, once it is found, bring it to the place where the mouth is supposed to be. Even more remarkably, the octopus’s limbs do not need comprehensive direction to produce the desired movement, but respond to environmental stimuli in adaptive ways. Each one of the eight arms can be taken as a complex distributed information processing structure, able to act and problem-solve autonomously. For instance, while the octopus is busy checking a cave, an arm can be engaged in prodding a shellfish.
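As a toy illustration of this architecture (a sketch, not a biological model), the following Python fragment has a central brain broadcast a high-level goal while each of the eight arms resolves its own low-level action from local stimuli; all names are hypothetical.

# Toy sketch of the decentralized control described above: the brain
# initiates motion only, while each arm handles its own local detail.
import random

class Arm:
    def __init__(self, arm_id):
        self.arm_id = arm_id

    def act(self, goal):
        # Local "neuromuscular" detail is decided here, not by the brain.
        stimulus = random.choice(["food", "rock", "nothing"])
        action = "grasp and pass toward mouth" if stimulus == "food" else "keep searching"
        return "arm %d sensed %s: %s (goal: %s)" % (self.arm_id, stimulus, action, goal)

class Brain:
    def __init__(self):
        self.arms = [Arm(i) for i in range(8)]

    def initiate(self, goal):
        # The brain broadcasts the goal; it issues no per-movement commands.
        for arm in self.arms:
            print(arm.act(goal))

Brain().initiate("explore cave")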
As a paradigmatic example of embodied and distributed cognition, it is no wonder that the octopus has become a model for soft robotics and AI research. This has led to the octobot, the first autonomous, entirely soft robot, recently developed by Harvard scientists (Burrows 2016). Also, inspired by the octopus’s behavior, roboticist Alfonso Íñiguez (2017) has designed a system in which the CPU does not spend resources micromanaging coprocessors, just as the octopus’s central brain does not spend resources micromanaging its arms. The potential of mimicking the complex neural system of the octopus is also studied by the U.S. defense contractor and industrial corporation Raytheon (2016), which conducts robotics experiments with a network of machines that work together in a semi-autonomous way, coordinated by a central command unit and a pack of independent agents. Applications are envisioned in the design of self-balancing biped robots, thanks to the central brain’s ability to delegate (Íñiguez 2017). There are parallels here with ‘edge computing’, advanced on-device processing and analytics (Talluri 2017), where AI computation is pushed to the edge of the network (rather than to the cloud), as close to the sensor/actuator as possible.
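A minimal Python sketch of this delegation pattern (my illustration, not Íñiguez’s or Raytheon’s actual implementation): a central unit submits high-level tasks to a pool of workers and collects outcomes, without micromanaging how each one executes.

# Minimal sketch of central delegation using the standard library:
# the central unit dispatches tasks and only sees results.
from concurrent.futures import ThreadPoolExecutor, as_completed

def coprocessor_task(task_id):
    # Each worker handles its own execution details, like an octopus arm.
    return "task %d handled locally" % task_id

with ThreadPoolExecutor(max_workers=4) as central_unit:
    futures = [central_unit.submit(coprocessor_task, i) for i in range(8)]
    for future in as_completed(futures):
        print(future.result())  # Outcomes arrive; the steps stay local.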
As perhaps the closest form of alien intelligence that we can study, the octopus provides a blueprint for the development of an autonomous AI with neural networks that adapt to, and learn from, the environment. It could offer the seed of a new narrative rooted in non-human consciousness.
FutureCrafting
Scholarship at the intersection of design and sociology indicates the need to combine speculative design methods with humanities methodologies to capture social events that are ontologically open, processual and emergent (Michael 2012, Smith 2016). I would argue that AI’s future narrative landscape demands a speculative approach. Expanding on this “inventive problem-making” (Michael 2012), FutureCrafting reconceptualises contingency and rethinks uncertainty by treating them both as a material to work with, rather than as a risk or a threat to avoid.
FutureCrafting gives shape to the future, and does so here and now. The Future part is about speculating while avoiding the trap of escaping into a fantasy of what the future could or should be. Instead, FutureCrafting captures the future, grabs it and brings it back to the here and now, so as to inform the present. This is the Crafting part: crafting pertains exquisitely to the now. FutureCrafting is speculation by design, a performative rather than descriptive strategy, whose interventions are designed to prompt, probe, and problematize, to inject ambiguity and even the non-rational and the non-sensical. To borrow philosopher Isabelle Stengers’ words on “speculative methodologies”, FutureCrafting is a practice that “affirms the possible, that actively resists the plausible and the probable targeted by approaches that claim to be neutral” (Stengers 2010, 57).
Framed in this way, FutureCrafting is a strategy and a stratagem to conjure new figures of thought. It provides a set of tools at once forensic, diagnostic, and divinatory. It is forensic because it concerns things taken as witnesses, so as to articulate the existent. It is diagnostic because it invents explanatory hypotheses in an interrogative fashion: as said, it relies on abduction, and it is unconstrained by a priori theory or a posteriori verification. It is divinatory because it attracts future images around which new thoughts can coalesce.
FutureCrafting gives priority to imagination over direct observation, searches for the least familiar hypotheses, those with no verifiable answer, and leans toward the production of what is not there yet. It is driven by the question what if…? Precisely because it has an affinity with practices bent on divining, predicting and conjuring, it is a fine instrument to probe what ML is doing today and will be doing tomorrow.
Bio
Betti Marenko is a design theorist, academic, and educator. She has a background in philosophy, sociology and cultural studies, and a decade of experience in design education. Her interdisciplinary approach brings together design studies, continental philosophy and the analysis of digital cultures to investigate the relationships between design, society and technologies, and their role in shaping possible futures. Betti’s work features regularly in international conferences, collections and peer-reviewed journals such as Design and Culture, Design Studies and Digital Creativity. She is the co-editor of Deleuze and Design (Edinburgh University Press 2015, with Brassett), the first book to use Deleuze and Guattari to provide a new theoretical framework for the theory and practice of design. She is Contextual Studies Programme Leader for Product Design, Central Saint Martins, University of the Arts London, UK.
Statement
I am currently writing a book titled Digital Uncertainty. Between Prediction and Potential in Algorithmic Culture, which investigates the new contingent logic of planetary computation and its impact on society, publics and subjectivities. The book looks at the effects of the growing autonomy and unpredictability of digital technologies, machine learning algorithms and AI. By connecting philosophy and computational theory to design, my research brings a holistic interdisciplinary approach to the issue of digital uncertainty and launches a debate on its unexplored potential. I am interested in bringing into dialogue AI developers, interaction and speculative designers, programmers and engineers, to provide new insights around digital experience, interrogate current theoretical positions and inform interdisciplinary debates on human-machine interaction. The symposium will offer this opportunity.
References
Bogost, I. 2015. The Cathedral of Computation. The Atlantic. https://www.theatlantic.com/technology/archive/2015/01/the-cathedral-of-computation/384300/
Burrows, L. 2016. The First Autonomous, Entirely Soft Robot. Harvard Gazette, 24 August. http://news.harvard.edu/gazette/story/2016/08/the-first-autonomous-entirely-soft-robot/
Coles, A. ed. 2016. Design Fiction. Berlin: Sternberg Press.
Dunne, A. and Raby, F. 2014. Speculative Everything. Design, Fiction and Social Dreaming. Cambridge, Mass. and London: MIT Press.
Finn, E. 2017. What Algorithms Want. Imagination in the Age of Computing. Cambridge, Mass. and London: MIT Press.
Godfrey-Smith, P. 2016. Other Minds. The Octopus and the Evolution of Intelligent Life. London: William Collins.
Grasso, F. W. 2014. The Octopus with Two Brains: How are Distributed and Central Representations Integrated in the Octopus Central Nervous System? In Darmaillacq, A., Dickel, L., and Mather, J. eds. Cephalopod Cognition. Cambridge: Cambridge University Press. 94-122.
Íñiguez, A. 2017. The Octopus as a Model for Artificial Intelligence – A Multi-Agent Robotic Case Study. In Proceedings of the 9th International Conference on Agents and Artificial Intelligence (ICAART), 2: 439-444. Porto, Portugal. http://www.scitepress.org/DigitalLibrary/PublicationsDetail.aspx?ID=QNu8OYOoE1c=&t=1
Laschi, C. 2016. Robot Octopus Points the Way to Soft Robotics With Eight Wiggly Arms. IEEE Spectrum. https://spectrum.ieee.org/robotics/robotics-hardware/robot-octopus-points-the-way-to-soft-robotics-with-eight-wiggly-arms
Marenko, B. and Van Allen, P. 2016. Animistic Design: How to Reimagine Digital Interaction between the Human and the Nonhuman. Digital Creativity. Special issue: Post-anthropocentric Creativity. Stanislav Roudavski and Jon McCormack eds. London: Routledge. 52-70.
Marenko, B. 2015. When Making becomes Divination: Uncertainty and Contingency in Computational Glitch-Events. Design Studies 41. Special issue: Computational Making. Terry Knight and Theodora Vardouli eds. London: Elsevier. 110-125.
Michael, M. 2012. De-signing the Object of Sociology: Toward an ‘Idiotic’ Methodology. The Sociological Review, 60(S1): 166-183.
Parisi, L. 2017. Reprogramming Decisionism. e-flux 85. www.e-flux.com/journal/85/155472/reprogramming-decisionism/
Parisi, L. 2013. Contagious Architecture. Cambridge, Mass. and London: MIT Press.
Raytheon 2016. Synthetic Smarts. With Learning Robots and Emotional Computers, Artificial Intelligence becomes Real. www.raytheon.com/news/feature/artificial_intelligence.html
Schmidgen, H. 2012. Inside the Black Box: Simondon’s Politics of Technology. SubStance 41(3), issue 129: 16-31. Madison: University of Wisconsin Press.
Shanahan, M. 2016. Conscious Exotica. Aeon. https://aeon.co/essays/beyond-humans-what-other-kinds-of-minds-might-be-out-there
Simondon, G. 2017. On the Mode of Existence of Technical Objects. Minneapolis: Univocal.
Smith, R.C. et al. eds. 2016. Design Anthropological Futures. London: Bloomsbury.
Stengers, I. 2010. Cosmopolitics I. Minneapolis: University of Minnesota Press.
Talluri, R. 2017. Why Edge Computing is Critical for the IoT. NetworkWorld, 24 October. https://www.networkworld.com/article/3234708/internet-of-things/why-edge-computing-is-critical-for-the-iot.html