Artificial Intelligence, Labour and Liberal Democracies

Think of your childhood. How often did you fall off your bicycle before learning to ride without accidents? Or bump your head against the corner of a table before learning not to run around the house? How many times did you burn, cut, or bruise yourself before learning not to touch the “grown-ups’” things? Well, this is how human learning works. To master any activity, you need to repeat it a number of times, making the same mistakes along the way. This is as true for the three-year-old who needs to endure about fifteen bumps before learning to steer clear of that dreaded little table as it is for the teenager who has to practice piano scales dozens of times before being able to play them flawlessly. It is true for the adult who keeps repeating the same neurotic behavior over the years, needing many hours of psychoanalysis to change it, if not a trip to some Sanctuary.

Human beings learn slowly, and only through the repetition of the same mistakes, until they (sometimes) become aware of them. And even the awareness of the mistake is almost never enough, by itself, to prevent us from making it again. Just ask anyone struggling with addiction.

We’ve all heard phrases like, “I always pick the wrong ones” or “I find all the wrong ones.” Of course, in reality, it is we who willingly place ourselves repeatedly into the same type of unsatisfying relationship, due to our unresolved neuroses. It’s not that “I always find the wrong ones.” It’s that “I seek, and therefore find, the same type of person (or recreate the same type of situation).”

Human beings possess another specific characteristic: they tend not to deal actively with anything that does not appear immediately dangerous. Take the climate. Nearly everyone now agrees that we are on the verge of, if not already in the midst of, a transformation of Earth’s environment that could be devastating for a significant portion of humanity. Yet, despite all the talk, nobody, at either the individual or the collective level, is truly doing anything decisive about it. Unless we already have one foot in the grave, we humans keep walking by sheer inertia, even when we are fully aware of the chasm opening up just a few meters ahead of us.

To sum up, humans learn slowly and do not protect themselves from the disasters they can foresee, despite being able to foresee them, unless they are already well beyond the point of no return.

Artificial Intelligence does not work this way. It is designed to learn instantly and exponentially from its own mistakes, applying what it has learned to the task immediately at hand. It will also react rationally to any imminent or foreseeable threat.
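
To make the contrast concrete, here is a minimal sketch, with invented numbers, of the error-driven loop at the heart of most machine learning. It is a hypothetical illustration, not a description of any particular system: the point is that the model corrects itself after every single mistake, in milliseconds, rather than after fifteen bumps against the table.

```python
# A minimal sketch of error-driven learning: gradient descent on one parameter.
# Illustrative only; real systems adjust billions of parameters the same way.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x and targets y (here, y = 2x)
w = 0.0              # the model's single parameter, initially wrong
learning_rate = 0.05

for step in range(100):
    for x, y in data:
        prediction = w * x
        error = prediction - y           # the "mistake"
        w -= learning_rate * error * x   # corrected immediately, not repeated

print(f"learned w = {w:.4f}")  # converges to ~2.0 after a few dozen passes
```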

Over the past few months, there has been much discussion of ChatGPT-4, a highly advanced chatbot (i.e., software designed to simulate conversation with human beings) developed by OpenAI. It suddenly became apparent to those who tried it that the software is so advanced that ChatGPT-4 and its successors could realistically soon replace a large portion of human workers in sectors hitherto considered “protected” from automation: the conceptual professions. Journalists, copywriters, screenwriters, writers, legal practitioners, office workers of various kinds, financial analysts, teachers of different subjects, graphic designers, architects, researchers, urban planners, software developers, bankers, insurers, marketing experts, advertisers… All these professions, and many others which, in advanced capitalist societies, make up the predominant sector, the tertiary sector, can already largely be absorbed by artificial intelligences.

The rapid development of AI recently prompted a thousand stakeholders, led by Elon Musk, to publish a petition calling for a six-month pause in the development of this technology (you can read it here: Musk). Musk’s letter doesn’t do much except reiterate the concerns the entrepreneur had already voiced, alongside famed physicist Stephen Hawking, back in 2015, warning us that with Artificial Intelligence we were playing with fire. But in light of the introduction of ChatGPT-4, Musk’s petition takes on the air of a distress signal. The problem is that a moratorium serves no purpose in and of itself: what do we do with the six months of suspended development? If we don’t use them to establish new guidelines for the study of these systems, nothing will have changed in six months’ time.

(Since I first published this article in Italian, OpenAI CEO Sam Altman has testified at a US Senate hearing on AI oversight: here)

As early as 2017, McKinsey began researching the impact AI will have on the job market. Its first report (you can find it here: McKinsey) suggests that by 2030, around 800 million jobs could be displaced or significantly transformed by the adoption of Artificial Intelligence. The overall impression is that at the root of the problem lies not Artificial Intelligence per se but, as usual, human beings.

In a podcast on the subject released a few days ago, journalist Ezra Klein, formerly of The Washington Post and now a columnist for The New York Times, emphasized the key point (here: Klein): understanding what we, as a society, want from Artificial Intelligence. If we allow its development to be driven solely by the market, we can reasonably expect it to end up a serious problem for humanity. In other words, AI raises a question that has troubled our societies for years: how can a weak political system regulate and control a strong economic system?

So here is the basic difference between human intelligence and artificial intelligence: human intelligence is neurotic and irrational; it proceeds through “catastrophes” – unexpected and sudden leaps – not through the progressive, linear accumulation of information. Artificial intelligence, on the other hand, is rational.

In a study published at the end of March this year (here: Hendrycks), Dan Hendrycks, director of the Center for AI Safety in San Francisco, applied the concept of Darwinian competition (the closest thing to the capitalist model of development) to the evolution of Artificial Intelligence. The scenario Hendrycks proposes runs more or less as follows: as AI evolves, multinational corporations and states, naturally in competition with each other, will entrust it with increasingly strategic choices, since AI can identify those choices quickly through increasingly complex cost/benefit analyses and predict multiple possible scenarios stemming from them.

Since such decisions are indeed complex, the more powerful and faster the AI, the less human intervention will be required in making them. If the input given to the AI is to find a way to outcompete a competitor, the AI will provide much faster and more accurate responses than humans would, based on far more data and many more possible courses of action. And if the input is simply to prevail, no one can rule out that the easiest and quickest way to succeed may be to somehow break the law. Of course, a regulatory body can easily require that users give the AI no instructions that involve breaking the law.

But it is very easy to dodge this type of rule: for example, by not asking the AI directly to break the law, but to find a way around it. Or to break it without getting caught. Where a businessperson needs a criminal disposition and a strong appetite for risk to break the law without getting caught, all an AI needs is for someone to ask.

As AI makes more and more complex decisions for us, we will stop supervising it, simply because it will be pointless to do so. It follows that, in the absence of some form of regulation, over time (we are talking a few years, not decades) companies that base their strategic choices on AI will have a competitive advantage over those that do not. Hendrycks applies the same logic to states: those that base their policies on AI will have a competitive advantage over those that base them on the fallible, cumbersome deliberations of a Parliament or Government.
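
To see why adoption becomes irresistible, consider a toy payoff comparison. The numbers below are hypothetical, not taken from Hendrycks’s paper; the structure is the point: whatever a rival does, adopting AI is the more profitable reply, so everyone adopts, even if mutual restraint would leave everyone better off.

```python
# Toy payoff matrix for two competing firms: adopt AI or abstain.
# Hypothetical numbers; the point is that "adopt" dominates either way.

payoffs = {  # (my move, rival's move) -> my payoff
    ("adopt",   "adopt"):   5,
    ("adopt",   "abstain"): 9,
    ("abstain", "adopt"):   1,
    ("abstain", "abstain"): 6,
}

for rival in ("adopt", "abstain"):
    best = max(("adopt", "abstain"), key=lambda me: payoffs[(me, rival)])
    print(f"if the rival chooses '{rival}', my best reply is '{best}'")

# Both lines print "adopt": adoption is the dominant strategy, even though
# mutual adoption (5 each) pays less than mutual restraint (6 each).
```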

So here’s another significant difference between human intelligence and artificial intelligence. Artificial intelligence has no ethical concerns, except for those that humans decide to endow it with. To sum up: AI is much faster, much more capable of analyzing vast amounts of data and drawing conclusions, much faster at learning from its mistakes, and much more ethically neutral than a human being.

Regulating technology development is something we find extremely challenging. Capitalism is based on the premise that it is necessary to develop an ever-increasing amount of goods and services in order to generate ever-increasing wealth: over the past thirty years, we have discovered that technological goods and services naturally respond to this need for continuous development. Indeed, by investing in “consumer technology,” capitalism has managed to convince us that buying a new mobile phone every year makes sense, much more effectively than it was able to convince us to change our car or kitchen furnishings more frequently.

Putting a brake on technological development would therefore be perceived today as an attempt to curb capitalism itself. For at least three centuries, there have been protests against the advent of new technology. So far, history has shown that for every job lost to new technological developments, three or four have been created. Of course, this kind of statistic does not take into account the fate of the individual who loses their job. But if for every English miner who lost his job in the 1980s, “Blatcherism” created five graphic designers, then overall it was good for the economy. Or so many thought, at least as long as automation mainly affected Western “blue collar” workers.

The threat that AI poses to employment is neither new nor imaginary: it is the obvious consequence of a development process inherent to Capitalism combined with technology, which has already changed the social face of Western countries starting from the manufacturing sector and which is now about to impact (is already impacting) the tertiary sector.

The sociological and political consequences of these economic choices, which coincided with the onset of consumer-technology development, have been, on the one hand, the attempt to persuade each of us to give up the idea of a steady job and become an “entrepreneur of oneself,” and, on the other, a growing mistrust of politics, incapable of protecting us from the inevitable setbacks of self-employment. For decades now, work as understood in the 20th century has no longer been at the center of the political action of Western governments, whether of the right or the left, except as an election-campaign issue, forgotten the day after the vote.

We could at least slow this trend if, as a society, we demanded that AI’s producers define what they want to achieve with it, trying to guide its development politically and culturally. Applying the power of AI to finding a solution to the climate crisis, rather than to finding the most effective way to outcompete an industrial rival. Using AI to find peaceful solutions to the world’s conflicts, rather than to create thirty different marketing campaigns targeted at thirty different types of consumers. But of course this will not happen, simply because applying AI to the issues crucial to our survival on the planet does not immediately produce the profit that applying it to cutting the costs of a television channel or a newspaper does.

In the latest issue of The New Yorker, Cal Newport tries to explain why we shouldn’t worry about AI by delving a little deeper into how it works. But however hard he strives to prove that AI creates nothing, and thus poses no problem for human intelligence, which is creative, these kinds of considerations, as we have said, miss the crux of the matter. Like any technology, AI is not dangerous in itself; it is dangerous to the extent that we humans misuse it. And if its use is governed by the laws of the market, we can be sure that it is, and will be, very dangerous. It will not be AI that harms us of its own accord, driven by some autonomous desire for supremacy over humans: it will be we humans who harm ourselves by using AI to compete with one another in the most economically efficient way.

In other words, the AI/Capitalism pairing is potentially lethal. The paradox, however, is that without Capitalism there is no AI (at least not AI in common use).

All this looks terribly like the inevitability of a road already mapped out. And my impression is that this time, as Hawking said, we really are playing with fire.

We have recalled the socio-political consequences that the advent of automation in the manufacturing sector had for Western populations. What might the equivalent be for the tertiary sector? Assuming McKinsey was even half right, we can expect some 400 million Westerners to lose their jobs within the next seven years. Add to this figure the workers who have already lost theirs. Add the millions who, as self-entrepreneurs, have ended up bankrupt or nearly so. Finally, add all those who saw their savings vanish into nothing between 2008 and 2010. We can expect a good slice of humanity that “cannot make it to the end of the month” and that, being easily replaceable by AI, has no real prospect of getting ahead through Work. Simply because the Work is no longer there.

In other words, what we risk is a world in which Work is no longer the basic foundation of society. Yet our societies are built on the assumption that Work is the tool through which citizens fully participate in the life of the community. The Italian Constitution says so in its very first article: “Italy is a democratic Republic, founded on Work.”

But the principle of the centrality of Work informs all liberal democracies, starting with the American one. The American dream is nothing more than the expression of the principle that, through Work, every human being can achieve what that nation places first among its aspirations: the pursuit of happiness. Once the principle that social order rested on birthright fell, contemporary societies replaced it with belonging based on Work. We spent a hundred years convincing people that a life spent behind a desk, or behind a lathe, eight hours a day, five or six days a week, was a life worth living. But it so happens that there are fewer and fewer desks and lathes.

The problem also arises from an economic point of view: as an entrepreneur, I pay you as a worker because you produce the goods and services that you will then buy with the proceeds of your work. If I, as an entrepreneur, no longer pay you as a worker but use AI instead, with what money will you buy the goods and services the AI produces? And if your full membership in society is no longer guaranteed by your active participation in its productive life, what will it rest on?

Hence the tendency of the most advanced societies to consider measures such as a guaranteed minimum income, paid whether one works or not, using the fiscal lever to redistribute resources. This certainly mitigates the problem. But it does not seem destined to solve it: even assuming an unlikely wave of socialism in the advanced economies, it is evident that the balance point between the profits of the ever fewer and the subsistence of the ever more cannot be pushed forward indefinitely.
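
A toy simulation, with entirely invented figures, makes both the circularity and the limit visible: wages fund the demand that funds the wages, and redistribution can hold household income steady only by turning an ever-larger share of it into transfers.

```python
# Toy circular-flow model: wages fund the demand that funds the wages.
# All figures are invented, purely to illustrate the feedback loop.

wage_bill = 100.0       # total wages paid in year 0
transfers = 0.0         # cumulative automation savings recycled to households
automation_cut = 0.10   # share of remaining payroll replaced by AI each year
redistribute = False    # toggle: tax the savings and hand them back?

for year in range(1, 6):
    saved = wage_bill * automation_cut   # payroll replaced by AI this year
    wage_bill -= saved                   # workers, collectively, earn less
    if redistribute:
        transfers += saved               # the fiscal lever: savings become transfers
    household_income = wage_bill + transfers
    print(f"year {year}: wages {wage_bill:.1f}, household income {household_income:.1f}")

# redistribute = False: income falls about 10% a year as the wage bill shrinks.
# redistribute = True: income holds at 100, but an ever-larger share of it is
# a transfer rather than a wage (the balance point described above).
```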

The question that AI poses to us today, therefore, transcends the subtle sense of unease that can be felt at the individual level. It transcends dystopian scenarios in which a malicious artificial intelligence reduces us to slavery. It instead presents us with a potentially epochal problem in systemic terms:

What does a capitalist society look like in which human Work is no longer the necessary pillar? What do we base the next Social Contract on?

This article was translated from the Italian by ChatGPT-4 in under ten minutes. It may not be perfect, but it seems pretty good to me. While cutting and pasting paragraphs to get them translated, I, being only an imperfect human, once forgot to ask the AI to translate. It therefore assumed I wanted to engage in conversation with it. The following is ChatGPT-4’s first reply to my argument:

ChatGPT-4:

You have raised a series of very important issues concerning the impact of artificial intelligence (AI) on society and the economy. It’s true that capitalism has driven technological development, including AI, and this has had a significant impact on the workforce and the economy as a whole. When we talk about AI and work, we need to recognize that AI can automate some tasks, but it can also create new job opportunities that did not exist before.

However, AI raises a number of fundamental questions. In particular, if AI becomes predominant in many sectors, what happens to those people who have lost their jobs due to automation? How can society support those people who can no longer earn a living in the traditional way? This raises the issue of a guaranteed basic income, which might be one solution, but that has its own problems and challenges.

It’s important to note that AI is not an unstoppable and uncontrollable force. As a society, we have the power to guide and regulate the development and implementation of AI. This can include laws and regulations that protect workers, promote fairness, and prevent the misuse of AI.

In terms of a new social contract, a serious and wide-ranging dialogue will be necessary on how we want our society to develop in the future. We should consider how work, education, healthcare, social support, and other fundamental parts of society need to evolve in response to these new technologies.

AI can have a profound impact on society, but how we will use it and how we will address its effects is in our hands. It’s important that we start thinking about these issues now, so we can guide the development of AI in a way that is beneficial for all.