On 26-27 February we held a workshop centred on the work of Prof. Katherine Hayles (Duke University). The event brought together over 20 speakers from five countries, with Prof. Katherine Hayles, Prof. Donald MacKenzie (Edinburgh University), Dr Luciana Parisi (Goldsmiths, University of London), Prof. Celia Lury (Warwick University) and Dr David Berry (Sussex University) delivering keynote addresses.
The aim of the event was to engage with Katherine Hayles’ work and the many questions it provokes and addresses for our times: Do algorithms compute beyond the threshold of human perceptibility and consciousness? Can ‘thinking’ and ‘learning’ digital devices reflect or engage durational time? Do digital forms of cognition radically transform the workings of the human brain and what humans can perceive or decide? How do algorithms act upon other algorithms, and can they learn recursively from each other? What kind of sociality or associative life emerges from human-machinic cognitive relations?
The papers delivered during the event and the discussions that followed touched upon many themes, including, but not limited to, temporality, anticipation, speed, cognition, automation, feedback and recursive function, interface, decidability and decision, technogenesis, personalisation/depersonalisation, materiality, and resistance to algorithmic thinking.
The theme of temporality, evoked in a variety of ways, featured prominently in many papers and discussions. Thus, in her keynote address on high-frequency trading (HFT) algorithms, ‘Future anterior, derivative writing, and the cognitive technosphere’, Prof. Hayles emphasised the importance of temporal inequalities for HFT, as HFT simultaneously creates and depends on such inequalities. She suggested that derivatives operationalised the future anterior by creating a ‘fold in time’ and bringing the anticipated future into the past, whereby the future was “stapled to the present through an act of writing”. This theme was closely connected to that of anticipation, which also characterised the other two interventions on HFT (i.e. ‘What’s inscribed within algorithms? The case of “futures lag” in high-frequency trading’ by Prof. Donald MacKenzie (Edinburgh University) and ‘Crowding of adaptive strategies: swarm theory and high-frequency trading’ by Dr Ann-Christina Lange (Copenhagen Business School)). Thus, anticipation appeared to be one of the important features that made the swarm behaviour of adaptive HFT algorithms possible, enabling them to relate to one another and produce collective effects (Lange). At the same time, anticipation was shown to also be something enabled by algorithms in a variety of other settings, ranging from syndromic surveillance (Stephen Roberts, Sussex University) to the use of computer-generated camouflage (Silvia Mollicchi, Warwick University), and from fire and rescue services (Dr Nathaniel O’Grady, Southampton University) to commercial personalisation algorithms (Prof. Celia Lury, Warwick University).
The ability to anticipate directly depends on the speed of algorithmic calculations, which is crucial for HFT, where speed has had transformational effects on financial transactions (Hayles), with the stock exchanges having to adapt themselves to trading algorithms (MacKenzie). The drive to achieve the highest speeds possible calls for specific algorithmic features (e.g., “algorithms need to be simple to be fast” (Lange)) and specific materialities (e.g., location of data centres, microwave signals versus fibre optic connections (MacKenzie)). Not surprisingly, according to Dr Kristene Unsworth and Dr Kelly Joyce (Drexel University), speed represents the most valued characteristic when designing algorithms. The issue of speed also points to human limits. Thus, in the face of the constant acceleration of algorithmic rhythm, “bodily desires cannot be accelerated beyond a spasm” (Dr Nanna Bonde Thylstrup, University of Copenhagen), and, in HFT, we have a “mode of social being that eludes human senses” (Lange). The Operating System (‘Samantha’) in the 2013 film ‘Her’ has “algorithms [that] are automatic and ultra-rapid, surpassing the ability to remain within systems of human temporality or effect determination” (Lee Mackinnon, Goldsmiths, University of London). This raises important questions about the nature of, and differences between, human and algorithmic cognition.
Prof. Hayles stressed the importance of viewing cognition as a spectrum and of the flexibility, adaptability and evolvability of cognitive agents, while distinguishing between cognisers (actors) and cognitive support (agents). She also argued in favour of acknowledging the existence of technical nonconscious cognition possessed by technical agents and systems at the heart of contemporary financial markets. In her keynote address, ‘Critical computation: digital philosophy and General Artificial Intelligence’, Dr Luciana Parisi (Goldsmiths, University of London) also emphasised that “machines have cognitive capacity”, and that today we can witness a general shift to the model of unconscious cognition, raising the question of whether algorithmic automation can be considered a mode of reasoning. In turn, Silvia Mollicchi provided an excellent analysis of complex algorithmic cognitive capacity by using the example of the Macropattern and Micropattern algorithms for military camouflage, which interpret “all data available on the environment/weather/lighting, as well as on conscious and subconscious perceptive capacities for target identification” in order “to produce gears that minimise the visibility of human presence”.
In terms of the cognitive capacities of algorithms, many marketing narratives go much further, as the talk by Lukasz Mirocha (University of Warsaw) on IBM Watson demonstrated. Indeed, Watson, according to its creators, was to be seen as a ‘brain box’ and was said to possess complexity, expertise and objectivity, but also imagination and senses. Such narratives, however, are in need of critical examination. Thus, according to Robert Jackson (Lancaster University), we need to challenge those who “associate computation and big data with some degree of utopian magic that can significantly improve any human endeavour” by drawing attention to some fundamental limitations that characterise computation, limitations both physical and symbolic, “including unsolvable limitations of interpretation in-between cognition and code”.
According to Dr Tobias Matzner (University of Tübingen), in order to better understand how algorithms are used today, we need to combine two prominent, but often separate, narratives: the narrative that positions computers on a continuum with humans and understands the former to be capable of transcending the latter (i.e. the computer as the perfect human), and the narrative that sees “computation as the diametrical other of the human” (i.e. computers as cold and inhumane). Indeed, as examples as diverse as parametric architecture, smart CCTV and targeted advertising demonstrate, algorithms do perform human cognitive functions to achieve specific human aims, but “they are used particularly because they are different than humans, working in a focused, rational, objectivised manner, without tiring, emotions, prejudices”. Furthermore, in his keynote address ‘Thinking without algorithms’, Dr David Berry reminded us that “computation does not deploy itself”, but “requires human labour”, and invited us to consider “whose cognition is encoded in software”. Indeed, as Pip Thornton (Royal Holloway, University of London) reminded us, “all algorithms are necessarily tainted with the residue of their creators”.
At the same time, more and more often we are faced with the consequences of algorithmic automation, which, as a minimum, informs and supports decision-making in a variety of domains, from HFT (Hayles, MacKenzie, Lange) to targeted advertising (Matzner, Lury), and from emergency response (O’Grady) to the generation of search engine results (Thornton), but which also pushes at the boundaries of decidability itself. Thus, for Hayles, with HFT, we need to consider new forms of decision; for O’Grady, “autonomous algorithmic analysis augments the capacity of the analyst, who is … enabled by a specific kind of output”; for Matzner, we need to consider how the practice of algorithmic decision actually reinstates a liberal human subject, who is to check and verify the decision procedure; while for Jackson, with “computation as a new foundation of society”, “decisions become ungrounded”, as validation replaces verification.
Algorithmic ‘cognitive support’, to draw on Prof. Hayles’ terminology, is based on feedback loops and the utilisation of recursive functions; it calls for new human/machine interfaces and points to novel directions for technogenesis. Thus, according to Dr Nathaniel O’Grady, today “the relations between human and computer are facilitated through, and exist on, different levels and registers”, and it is the complex spatio-temporal intersections between elements of human/machinic assemblages that create the conditions of possibility for contemporary modes of governing emergencies. Similarly, in the case of military camouflage (Mollicchi), algorithms have to address time and space simultaneously, and technogenesis plays an important role at the intersections between machine recognition and the writing of the algorithm, and between the algorithm and a pattern. According to Dr Michael Dieter (Warwick University), “design pattern methodologies are central to more contemporary expressions of human-computer-interaction (HCI), including technical practices like user-experience (UX) and user-interface (UI) design, and are very often used to support the production of social media platforms, corporate dashboards and mobile apps”. When used as a general method, design patterns, themselves an outcome of complex technogenesis, have profound consequences, for example in terms of influencing user behaviour. In this respect, Dr Thylstrup discussed the role of event frequency and recursive loops in generating Internet addiction. Influencing behaviour was also among the many issues brilliantly explored in Prof. Celia Lury’s (Warwick University) keynote address ‘This time it’s personal: individuation, numbers and the default social’, in which she examined personalisation as a mode of individuation, but a generalised individuation, based on exclusive inclusion and inclusive exclusion, with bringing into adjacency at the heart of the process.
Prof. Hayles’ theorising of technogenesis implies a renewed focus on materiality, a topic that many of our speakers addressed. Thus, Dr Parisi emphasised that “algorithms are inscribed into their data environment”, while Dr Berry talked about the “algorithm [as] an instrumental rationality, materialised in some sense”, and used the example of an Amazon warehouse to demonstrate the transformation of a warehouse into a database, with materiality emerging as a “materialisation of the code”. As a way of understanding the materiality of algorithms, Dr Unsworth and Dr Joyce used Hayles’ concept of the post-human “as a vehicle to theorise the teams that work with and design the algorithms” and to identify key values that inform their work. Prof. MacKenzie stressed the importance of material infrastructures, such as data centres, and of such parameters as the speed of the signal, for making HFT possible, while Dr Lange talked about the materiality of the order book in HFT. Silvia Mollicchi explained how different camouflage algorithms together construct the complex materiality of military gear, while Dr O’Grady stressed the role of materiality in algorithm/human assemblages and talked about “spatial configuration acquir[ing] new symbolic significance”.
While all interventions challenged algorithmic rationalities, e.g., by pointing to their limitations, Dr Berry called for “thinking without algorithms” as a way of making “possible alternative modes of thinking”.
The event has resulted in contributions to a special journal issue of ‘Security Dialogue’ and a planned special section of ‘Theory, Culture & Society’. We will provide publication updates when available.