‘Inhuman Power: Artificial Intelligence and the Future of Capitalism’ by Nick Dyer-Witheford, Atle Mikkola Kjøsen and James Steinhoff reviewed by Bruce Robinson

Reviewed by Bruce Robinson

About the reviewer

Bruce Robinson is a retired lecturer in Information Technology who has worked in AI and has had a …


Driven by a massive growth in computing power and in data on which to work, artificial intelligence (AI) applications are becoming pervasive in the economy and in everyday life. What is the likely outcome of the current AI boom? Will AI have deep effects on capitalism and the prospects for replacing it? Inhuman Power examines these questions and seeks to understand AI more widely. Its coverage is wide, with a bibliography of over thirty pages, and it fuses Marxist analysis of machines and labour with a detailed examination of current AI technology. (The introduction contains crisp accounts of both.) As such, it is essential reading for anyone interested in a critical analysis of current technological developments.

Inhuman Power is not just critical of what Dyer-Witheford, Kjøsen and Steinhoff call ‘AI-capital’. They also believe that AI poses a challenge to Marxism in that, if artificially intelligent machines do present a real threat to the uniqueness of human powers, we must question ‘assumptions about the labour theory of value, the continued centrality of struggles at the point of production or even the confidence that capitalism cannot survive the abolition of its human waged workforce’ (8).

The book begins by announcing a polemical intent: against those who think that AI’s development can be separated from the drives and limits of capital; against those on the left who believe that nothing radically different is going on from the previous boom and bust AI cycles; and against the left accelerationists who argue that AI should be embraced even in its current form as it will enable an eventual utopia. Instead, despite emphasising that there is no certainty about the path the development of ‘AI-capital’ will take, the authors see it taking us to the edge of an abyss that threatens not the end of capitalism, as the accelerationists imply, but a negative end to human waged labour, driven by what Dyer-Witheford (2015) calls the ‘cybernetic drive’ of capital to automate. Capital is pursuing its aims more and more impersonally and relentlessly through machines. Yet the adoption of AI will be uneven and subject to the vagaries of capital investment, creating a ‘slow tsunami’ of ‘market-driven technological change gradually flooding out the labour market, driving remunerated work to diminishing […] islands of human-centric production’ (143).

This perspective becomes the central theme of the book; however, the many doubts and qualifications that are mentioned are set aside to provide a more black-and-white analysis. The alternatives are presented too narrowly as a choice between the authors’ framework of a gradual move to the abyss and positions characterised as ‘Apocalypse Now’ or ‘Business-as-usual’ (87-91). More likely is that AI, while having an important impact on labour processes and employment (though not as radical as often assumed), will eventually bump up against distinct limits rooted in the technology itself and the nature of computation, in the nature of human labour, and also in the political economy of capitalism: automation beyond a certain point poses problems both of finding markets and of ensuring smooth, responsive labour processes.

The first of the three central chapters of the book deals with the history and current state of AI-capital. AI is becoming one of what Marx called the general conditions of production, a foundational element of the infrastructure, such as electricity, transport and communications, taken for granted as providing a basis for production – the ‘means of cognition’ (31). The further development and control of the technology will be in the hands of the existing tech oligopolies. This rests on the assumption that ‘capital’s current love affair with AI is not broken up by performance failures and commitment nerves’ (46) – an open question. The authors acknowledge that ‘many [AI technologies] will fail […] an AI bubble will probably burst’ (146; see also 44-6), though this seems of little consequence for their overall assessment beyond a passing remark that ‘the AI revolution might subside with a digitally voiced whimper’ (46).

The term AI-capitalism is also used to describe ‘actually existing AI’ as a new stage of capitalist development succeeding post-Fordism and characterised by ‘narrow AI’ restricted to specific domains, most commonly in the form of machine learning (ML) systems using platforms and the Cloud as delivery mechanisms. A future stage of ‘fully developed AI capitalism’ is also proposed, based on AI technologies already under development but yet to be delivered (50-51), involving ‘hyper-subsumption in which capital’s autonomizing force manifests as AI’ (21).

The second chapter uses the autonomist conception of class composition to look at changes in work and labour markets. The theory’s assumption that, as a result of labour’s irreplaceable role in production, class ‘recomposition’ takes place as workers ‘perceive the cracks and weaknesses in capital’s latest methods of control’ (70), has ceased to be valid as the drive of capital to replace living labour enters a new stage powered by AI. The alternative of employing cheap global labour rather than machines is fading (74). The result is ‘surplus populations’ which now – with AI and automation – face the prospect of being permanently superfluous to the needs of capital.

It is not that there is no resistance – there are seven areas of struggle ‘which challenge the current trajectory of AI-capital’ (102-7). These struggles, though all related to aspects of AI, lack a unifying perspective and point of attack, something due not merely to organisational weakness or differing emphases. The organised left and unions have failed to develop strategies for dealing with AI.

The authors rightly reject getting too involved in the game of predicting job loss numbers and note that AI creates certain jobs – precarious, on-call, global – in the processes of its own implementation, taking on tasks which AI cannot perform by itself, such as the labelling of images for ML systems and the recognition of undesirable content. Such work takes place behind the scenes to make AI work smoothly and in the manner intended.

This raises two questions. Firstly, is it always in the interests of capital to replace labour with machines based simply on their relative costs? Even AI-capital has to worry about having a labour process that ensures reliable, seamless production and can adapt flexibly to the market. Given the limitations of AI, this requires human labour. Is such ‘ghost work […] in automation’s last mile’ (Gray and Suri 2019, ix) transitional in a period where AI is still developing: ‘Infrastructural AI [saves] the human cognitive apparatus for whatever machines cannot yet handle’ (61; emphasis added)? Or does it reflect human capacities that machines cannot replace?

The third chapter addresses this question with an examination of the implications of Artificial General Intelligence (AGI), the goal of AI ‘with capacities for reasoning with general knowledge and doing a variety of tasks in diverse and unfamiliar domains’ (110). As AGI is ‘a technology that has yet to, and might never, see the light of day’, the chapter is best thought of as ‘more science fiction than science fact’ (111), intended to question Marxist assumptions about labour and the uniqueness of humans, asserting that there is ‘an isomorphism between the theoretical notion of AGI and Marx’s concept of labour and labour power’ (110), thus raising the ‘possibility of a capitalism without human beings’ (111).

This argument takes two paths: the first, a transhistorical comparison of the capacities of AGIs and humans; the second, an argument that AGIs’ role in capitalist production can be equated to variable rather than fixed capital, so that AGIs would constitute labour-power, produce value and become ‘doubly free’ proletarians.

The book argues that Marx underestimated the ability of animals to undertake ‘things previously held to be uniquely human [and] the same holds for machines’ (120). The distinct nature of human activity is then reduced to adaptability or a capacity to generalise based on limited data. This is taken to be Marx’s position and used to ‘posit an isomorphism between general intelligence [as in AGI] and Marx’s concept of labour power and labour’. If this is true, it follows ‘that AGI, almost per definition, is capable of performing labour’ (126).

However, Marx’s concept of labouring capacity points to the subjective elements of labour which form its use-value and require human embodiment. These are counterposed to formal, logical, objective knowledge and action, and include experiential skills, individual modes of action and non-objectifiable, genuine living knowledge, often highly contextualised to the environment in which the worker acts, which are crucial to the viability of labour processes (Pfeiffer, 2014).

Further, human general intelligence differs from domain specific skill in more ways than adaptability or an ability to generalise. Braga and Logan (2017) list ‘curiosity, imagination, intuition, emotions, passion, desires, pleasure, aesthetics, joy, purpose, objectives, goals, telos, values, morality, experience, wisdom, judgment, and even humor’ as human qualities AI systems do not possess. While some of these may not be necessary for them to function in capitalist labour processes, human powers of conceptualisation, will and conscious goal-directed activity, emphasised by Marx, are and remain outside the scope of machines for reasons rooted in both the nature of computation and in the capacities of human beings.

Despite its qualifications, Inhuman Power too often takes AI as its proponents present it. For example, cognition and perception are ascribed to actually existing AI (60-62), whereas domain-specific machine learning systems, the currently dominant form, are better seen as machines for pattern recognition based on inductive reasoning (with its well-known fallacies and biases) and lacking semantics (Pasquinelli, 2017). Stating that AI simply accomplishes what humans do, albeit in different ways (62), neglects an important distinction between performance and underlying understanding which seriously affects human-AI interaction. Better algorithms or more computing power do not overcome these limits to AI.
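To make the distinction between performance and understanding concrete, the following sketch is mine rather than the book’s: a deliberately crude stand-in for a domain-specific ML classifier that learns nothing but statistical associations between tokens and labels. All of the data and names below are invented for illustration.

```python
# A minimal sketch (assumption: not from the book or any real system) of
# pattern recognition by induction alone, with no semantics behind it.
from collections import Counter, defaultdict

# Toy labelled data of the kind produced by human annotation work.
TRAINING_DATA = [
    ("the strike won higher wages", "labour"),
    ("union members voted to strike", "labour"),
    ("the model was trained on image data", "ai"),
    ("neural networks need labelled data", "ai"),
]

def train(examples):
    """Count how often each word co-occurs with each label: pure induction."""
    counts = defaultdict(Counter)
    for text, label in examples:
        for word in text.split():
            counts[word][label] += 1
    return counts

def predict(counts, text):
    """Pick the label most associated with the words seen; no meaning involved."""
    votes = Counter()
    for word in text.split():
        votes.update(counts.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else "unknown"

if __name__ == "__main__":
    model = train(TRAINING_DATA)
    # High "performance" on familiar patterns...
    print(predict(model, "workers voted to strike"))   # -> labour
    # ...but no understanding: unfamiliar wording defeats it entirely.
    print(predict(model, "a walkout over pay"))        # -> unknown
```

Real ML systems are statistically far more sophisticated, but the structural point the review is making holds: success comes from correlations in past data, not from any grasp of what the inputs mean.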

The conclusion to Inhuman Power raises the question of whether there can be a ‘communist AI’. Starting from the position of ‘neither halting AI (Luddism) nor intensifying it’ but instead removing the drive to replace human labour and expropriating AI-capital (153-4), the authors promisingly talk of ‘working class steering of AI development’ (154). This points to the centrality of a politics that remains rooted in production and of alternative forms and paths of technological development. AI would necessarily have to change from being centred on producing machines outstripping human beings to becoming focused on the creation of artefacts and techniques that complement, enable or, when rationally justified and democratically decided, reduce human labour in areas that require intelligence.

Such a human-centred focus to technology sits uneasily with the book’s ecological post-humanism in which humans form an equal part of an undifferentiated ontology alongside nature and machines (160). In the context of AI, this conception concedes too much to the capabilities and ontological status of machines when a refocus on humans as central to labour processes is a crucial part of a critique of AI-capital.

While this review raises disagreements with its central perspective, Inhuman Power is valuable and marks a step forward in Marxist accounts of AI. The range and depth of material used make it a good reference point for anyone seeking an up-to-date account linking AI and Marxism. It also raises a number of important issues for debate, particularly in its challenges to both Marxism and to the dominant assumptions on the left about AI. It is to be hoped that they will be taken up and that Inhuman Power will spark more informed discussions about AI that will benefit Marxists, radical technologists and those directly facing AI-capital.

6 October 2019

References

  • Braga, Adriana and Robert Logan 2017 The Emperor of Strong AI Has No Clothes: Limits to Artificial Intelligence Information 8(4): 156 https://doi.org/10.3390/info8040156
  • Dyer-Witheford, Nick 2015 Cyber-Proletariat London: Pluto Press
  • Gray, Mary L. and Siddharth Suri 2019 Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass Boston: Houghton Mifflin Harcourt
  • Pasquinelli, Matteo 2017 Machines That Morph Logic: Neural Networks and the Distorted Automation of Intelligence as Statistical Inference Glass Bead 1(1) https://www.glass-bead.org/article/machines-that-morph-logic/?lang=enview
  • Pfeiffer, Sabine 2014 Digital Labour and the Use-Value of Human Work. On the Importance of Labouring Capacity for Understanding Digital Capitalism tripleC 12(2): 599–619

One comment

  1. Thanks for an excellent review.

    I want to take issue, not with the reviewer, but with the authors.
    The reviewer writes:
    “… if artificially intelligent machines do present a real threat to the uniqueness of human powers, we must question…” This is a sound summary of the essence of this book, it seems to me. But there is no questioning of the fundamental assumptions ‘implicit’ in the quoted phrase.
    I refer specifically to the phrase “artificially intelligent machines”. Artificially is well understood. It means, roughly, the ability to construct something by someone(s) who is (are) proficient and experienced in the relevant human arts. Machines too is a word well known. To me it connotes a device made by humans to achieve a well-designed objective. But, and this is a fundamental but, what precisely is meant by “intelligent”? I have never heard or read a coherent explanation of what practitioners mean when they measure “human intelligence”. What exactly are they measuring? I can understand when someone measures the concentration of glucose in the blood. But intelligence? The problem lies in the current inability to define intelligence in humans adequately. And fundamentally, IMHO, this arises because all attempts to define human intelligence assume that it is solely a biological trait. While, on the other hand, to me, it is self-evident that whatever human intelligence is, it has both biological and human social components. Until this is properly recognized there will be no progress in this field.

    Apart from this fundamental criticism of the book, I will repeat my expression of gratitude to the reviewer. IMHO, the review is spot on. I agree with everything the reviewer has to say about this book.
