artificial intelligence (and artificial consciousness?)

There is no commonly accepted definition of intelligence among philosophers or scientists, although it is generally accepted that intelligence is not an exclusively human characteristic. Most agree, however, that a basic feature of intelligence is some degree of flexibility, or generality.
This issue, the flexibility and/or generality of intelligence, divides the project of “artificial intelligence” into two subsets: narrow artificial intelligence and general artificial intelligence. The first (which can also be called machine learning) already has many applications, and because it attracts the most technical interest and the most funding, it will acquire ever more. General artificial intelligence, on the other hand, remains, in its technical manifestations, a challenging puzzle.1

The term “artificial general intelligence” (AGI) was first proposed in 1997 by the American physicist Mark Avrum Gubrud in an article titled Nanotechnology and International Security. There, Gubrud wrote, among other things:

By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be “conscious” or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle…

Mark Avrum Gubrud had and has close ties with the military research arms of the United States, so the idea of artificial intelligence, if not of artificial generals then certainly of “artificial colonels,” was appealing. Although some research programs on this kind of general artificial intelligence were created, by the mid-2000s this was not the area attracting attention (and funding). Narrow artificial intelligence was preferred, because its algorithms and the structuring of its databases were determined very specifically by a particular purpose.

The issue of data makes the big difference between narrow and general artificial intelligence. In all its applications and versions, the former is “data-intensive”: it requires data that must often arrive as a continuous, continuously updated flow. From a technical point of view, the continuous processing of ever-increasing data is a solved problem, and with spectacular results at that. However, the possibility of a data shortage (temporary or permanent) remains a problem, and not only a theoretical one: even the best-trained machine cannot function (or will make mistakes) if it lacks critical data. And chaos theory has shown that “critical” can be precisely the data that would otherwise be dismissed as fifth-rate.

For a “smart” machine with narrow artificial intelligence to successfully recognize a category of common objects, such as bottles, it needs images of every type of bottle, from every visual angle. The human mind does not behave this way: it needs to see only one, two, or three bottles, and it will be able to recognize any other, even a broken one. This ability of human (and more generally: animal) thinking is called generalization: from minimal sensory data it can satisfactorily form general mental categories. It is the opposite of “data intensity”.2 And this is the challenge of building general artificial intelligence: creating machine cognition able to construct, on its own, a “general perception of whatever reality” from little data. This is what Gubrud wanted in 1997.
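The contrast can be made concrete with a toy sketch. What follows is not how any real vision system is built; it is a minimal illustration, with invented two-dimensional “features,” of the prototype idea that few-shot learning research has borrowed from precisely this human ability: form a category from a handful of examples and match anything new against it.

```python
# A toy contrast between "data intensity" and generalization: a
# nearest-prototype classifier that forms a class concept from a
# handful of examples. The two "features" per object are invented
# (say: elongation and symmetry); nothing here is a real pipeline.
import numpy as np

def build_prototypes(examples):
    """Form one 'mental category' per class: the mean of its few examples."""
    return {label: pts.mean(axis=0) for label, pts in examples.items()}

def classify(x, prototypes):
    """Assign x to the nearest prototype (Euclidean distance)."""
    return min(prototypes, key=lambda label: np.linalg.norm(x - prototypes[label]))

# Three "bottles" and three "chairs" are all the training data there is.
examples = {
    "bottle": np.array([[0.90, 0.80], [0.85, 0.90], [0.95, 0.75]]),
    "chair":  np.array([[0.20, 0.30], [0.25, 0.35], [0.15, 0.40]]),
}
protos = build_prototypes(examples)

# A "broken bottle": distorted features, never seen during training.
print(classify(np.array([0.70, 0.60]), protos))  # -> bottle
```

Real systems need far richer representations, of course; but the asymmetry is the point: the prototype is formed from three examples, not three million.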
The most important AGI research programs bear the signatures you would expect: Google’s DeepMind, Elon Musk’s OpenAI, the European Human Brain Project with over 150 “partners” of all kinds, Microsoft’s Maluuba, an Uber program, and so on. So far, developments remain to some extent theoretical, and AGI tests are oriented toward specific “exercises.”

Meanwhile, on the road from the most complex narrow artificial intelligence toward general artificial intelligence, there have been some particularly significant surprises. The most characteristic case is that of AlphaGo, by DeepMind. This software first defeated Fan Hui, the European champion of the board game Go, in October 2015 and then, in March 2016, defeated the world champion Lee Sedol with a score of 4-1.

The ancient Chinese game of Go has been considered the holy grail of artificial intelligence research because it is one of the most complex games ever created by human civilization. Algorithms that play chess (once considered the pinnacle of artificial intelligence) can use brute computational force to quickly “search” the possible moves at each stage of the game, choosing the best one. This is impossible in Go: the possible configurations of the board are not literally infinite, but they are astronomically many (the legal positions alone number around 10^170, more than the atoms in the observable universe).
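A back-of-the-envelope calculation shows the scale of the difference. The figures below are the standard textbook estimates (average branching factor and typical game length), not exact counts; a minimal sketch in Python:

```python
# Rough size of the game trees, using the usual textbook estimates:
# chess: ~35 legal moves per position, games of ~80 half-moves;
# Go:   ~250 legal moves per position, games of ~150 moves.
chess_tree = 35 ** 80
go_tree = 250 ** 150

# Print the order of magnitude of each (number of decimal digits - 1).
print(f"chess game tree ≈ 10^{len(str(chess_tree)) - 1}")  # ≈ 10^123
print(f"go game tree    ≈ 10^{len(str(go_tree)) - 1}")     # ≈ 10^359
```

No brute force exhaustively “searches” the second number; hence the need for neural networks that prune and evaluate instead of enumerating, as the next paragraph describes.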

To be “trained” to play Go, AlphaGo was equipped with a combination of deep neural networks and the search technique known as Monte Carlo tree search. Its architecture consisted of three neural networks specialized in image analysis, one dedicated to “evaluation” and the other two to “decisions.” These three networks were “trained” on data covering the individual moves of 160,000 games of Go by top players. The evaluation network thus “learned” to estimate the probability of winning after each specific move on the board, while the decision networks learned which move to choose given the board configuration. Subsequently, the two decision networks were “reinforced” by playing 30 million games against each other, storing the best moves from those games. With this endowment, the machine was expected to demonstrate some form of “creativity.”
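To make the division of labor between the networks and the tree search concrete, here is a heavily simplified sketch of the same recipe applied to a trivial game (Nim: 10 stones, each player takes 1 to 3, whoever takes the last stone wins). This is not DeepMind’s code: the deep networks are replaced by hand-written stand-ins (a uniform “decision” prior and an exact “evaluation” rule), and only the skeleton (policy priors guiding Monte Carlo tree search, with leaves scored by the value function) mirrors the published AlphaGo design:

```python
# A toy version of the AlphaGo recipe on Nim (10 stones, take 1-3,
# whoever takes the last stone wins). The deep networks are replaced
# by hand-written stand-ins; only the search skeleton is the point.
import math

def moves(stones):
    """Legal moves: take 1, 2 or 3 stones (never more than remain)."""
    return [m for m in (1, 2, 3) if m <= stones]

def policy(stones):
    """Stand-in 'decision' network: a uniform prior over legal moves."""
    ms = moves(stones)
    return {m: 1 / len(ms) for m in ms}

def value(stones):
    """Stand-in 'evaluation' network: win probability for the player
    to move (for Nim this simple heuristic happens to be exact)."""
    return 1.0 if stones % 4 != 0 else 0.0

class Node:
    def __init__(self, stones):
        self.stones = stones
        self.children = {}                         # move -> child Node
        self.P = policy(stones) if stones else {}  # prior per move
        self.N = {}                                # visit count per move
        self.W = {}                                # total value per move

def simulate(node, c_puct=1.5):
    """One MCTS simulation; returns the position's value for the
    player to move at `node`."""
    if node.stones == 0:
        return 0.0              # opponent took the last stone: loss
    if not node.N:              # first visit: expand, ask the value net
        for m in node.P:
            node.N[m], node.W[m] = 0, 0.0
        return value(node.stones)
    total = sum(node.N.values()) + 1
    def puct(m):                # PUCT: exploitation + prior-guided exploration
        q = node.W[m] / node.N[m] if node.N[m] else 0.0
        return q + c_puct * node.P[m] * math.sqrt(total) / (1 + node.N[m])
    m = max(node.P, key=puct)
    child = node.children.setdefault(m, Node(node.stones - m))
    v = 1.0 - simulate(child)   # the opponent's gain is our loss
    node.N[m] += 1
    node.W[m] += v
    return v

root = Node(10)
for _ in range(2000):
    simulate(root)
print(max(root.N, key=root.N.get))  # -> 2: leave a multiple of 4
```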

And it did demonstrate it, to everyone’s general surprise. On the 37th move of the second game against Sedol, the machine did something unexpected. The match commentators noted how strange the move was, and some thought the neural networks had made a mistake, because no human would think of doing such a thing. The DeepMind engineers had neither predicted nor “loaded” such a move, and Sedol thought for an unusually long time about his response. But the move proved correct: with the benefit of its data (and the configuration of its neural networks), AlphaGo had “thought” its way beyond its data.

Nevertheless, it was not considered an acceptably successful demonstration of general artificial intelligence: the endowment of data was far too heavy and enormous to represent the way people learn to play Go. AlphaGo had been “trained” with data from more than 30 million games, while Lee Sedol had played at most 50,000 in his lifetime. The difference is not merely quantitative. It concerns the way human thinking generalizes and abstracts.
If it is already difficult to provide a strict definition of intelligence, defining imagination is ten or a hundred times harder. However, some cognitive psychologists involved in artificial intelligence research argue that the human mind generalizes because it imagines. Here, then, is a field of glory: artificial imagination.

The startup Vicarious PFC has built a system aimed at telling humans apart from bots on the internet. According to the company, its system possesses “artificial imagination,” meaning “the ability of software to imagine what information would look like in a context different from the one it belongs to.” In other words, the algorithms must create different “scenarios” by incorporating certain data and then choose which one is preferable; for now, in very simple matters, such as answering the question “what is this?” (a shoe) using limited data about shoes, socks, and chairs.
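What such a pipeline might look like can only be guessed at from the company’s description; the following sketch is entirely hypothetical (the templates, the “contexts” rendered as shifts, and the scoring are all invented for illustration) and shows only the generate-scenarios-then-choose logic, sometimes called analysis-by-synthesis:

```python
# A hedged sketch of "imagine contexts, then choose". Nothing here is
# Vicarious's actual system: each class has one toy 1-D "image" as a
# template; the system "imagines" it in other contexts (here: shifted
# positions) and asks which imagined scenario best explains the input.
import numpy as np

templates = {
    "shoe":  np.array([0, 1, 1, 1, 0, 0, 0, 0]),
    "sock":  np.array([0, 1, 1, 0, 0, 0, 0, 0]),
    "chair": np.array([0, 1, 0, 1, 0, 0, 0, 0]),
}

def imagine(template, max_shift=4):
    """Generate the template in different 'contexts' (shifted positions)."""
    return [np.roll(template, s) for s in range(max_shift + 1)]

def explain(observation):
    """Pick the (class, scenario) whose imagined image matches best."""
    best = max(
        ((label, img) for label, t in templates.items() for img in imagine(t)),
        key=lambda pair: -np.abs(observation - pair[1]).sum(),
    )
    return best[0]

# A shoe seen in an unfamiliar position, absent from the templates.
print(explain(np.array([0, 0, 0, 1, 1, 1, 0, 0])))  # -> shoe
```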

There is also a completely different approach. For general artificial intelligence to exist (say its supporters), functional in any environment, machines/algorithms must have some “sense/awareness” of themselves and of their difference from everything else around them. Some kind of consciousness, then… Artificial consciousness?

Robotic “barista” at the Winter Olympic Games in Beijing.
the attack of the terms

Whoever wants, and is able, to follow the relevant technological developments can be sure of this: where there is talk of “artificial intelligence,” “artificial creativity,” and “artificial imagination,” it is only a matter of time before “artificial consciousness” emerges. A certain definition of it, to be sure.

This imposes the need to step back. Not in order to provide definitions of intelligence, imagination, creativity, or consciousness that are immune to any form of mechanization, but to see who defines and redefines such concepts, and why. And, additionally but just as importantly, what meaning these definitions acquire as ideologies destined for broad social use.
There is at least one precedent, and a particularly enlightening one: the concept of force. Although it is treated empirically as a commonplace notion, it is nothing of the sort; not even in physics, where it holds a central place. What is common, for example, between muscular force and mental force, such that they are versions of one and the same narrowly defined concept? Nothing. And in no case can mental force (or the “force of love”) be measured in newtons.

Despite the fundamental differences in meaning, we can communicate using the word “force” in relation to living beings. Then comes a historical (and political) cut: mechanical force. After the invention of the steam engine, the first steam-powered machines (machine tools) were deified by 19th-century ideologues as the birth of the “iron man.” The machines themselves were not anthropomorphic; their use, however, was destined for jobs previously done by human beings. Workers and laborers.
Attributing a human characteristic (“force”), therefore, to machines destined (as the industrial philosopher Andrew Ure put it in 1835) to crush the disobedient hand of the workers (that is, of the “labor force”) was not an innocent borrowing of concepts, whether from the vocabulary of science or from everyday language. It was an ideological/political choice to undervalue human characteristics (in relation, at the time, almost exclusively to the organization of labor) to the degree that machines could either replace or control human labor. In practice, both at once.

Something similar happened with the electronic brains. If human (and more generally: animal) mental activity consists of nothing but computation, then the mechanical brain is rightfully entitled to its name. But is that so? No. To the extent that (and at every step where) it became possible to mechanize certain intellectual tasks previously performed by humans, attributing the qualities of thought (even of computational thought) to these machines was not inevitable. It was an ideological/political choice, one that undervalued the corresponding activities of human thinking.
In both of the above historical examples—the steam engine and the brain—the processes of such undervaluation have not been the only ones unfolding. Also taking place were processes of “revaluation” (both symbolic and substantive) of fixed capital—and, consequently, of capital in general. Machines that are “powerful,” machines that are “intelligent,” are not entities born privileged and multiplying through some mysterious endogamy. They are capital: fundamental components, that is, of a process of exploitation and subordination called capitalism. The “revaluation of technical achievements” that serve capitalist exploitation/accumulation is undoubtedly something far more than a play on words.

Why should anything different apply to artificial “intelligence,” “creativity,” “imagination,” or even “consciousness”? Why should anything different apply to machines that shape new “ecosystems” characterized as smart? Not only does the same apply, but the double movement of undervaluation/revaluation is launched to new heights.
The idea of machines that possess a certain “artfulness” (even a designed one), so as to be able to “direct” not themselves but human activities, is one of the favorites of capitalism’s bosses. Attributing to them abilities considered intellectual, and of ever higher complexity at that, gives the most perfect shape yet to what Ure dreamed of early on: the threat that the bosses now possess machines so capable that they can do without us (living labor, with its capacity for refusal).

One of the issues that occupies today’s “industry philosophers” is the following: along with intelligence, creativity, imagination, even consciousness… can responsibility be attributed to this electromechanical “ecosystem”? The more philanthropic of these philosophers, with robotic (+ artificial intelligence) weapons primarily in mind, answer “no.” In practice this means that developments in programming, in “smart” algorithms, and in data processing should restrain themselves just short of the threshold beyond which “smart” machines would make decisions uncontrollably.

But the capitalist use of these machines (and it could only be capitalist!) moves in the opposite direction. Not for machines to make decisions unchecked; not for machines to bear “responsibility”; but for the methods of the bosses and their specialists, behind and within the operations of these machines, to go unchecked; for their own responsibilities to be hidden behind circuits and algorithms!
There is no lack of examples. If, for instance, the human species as it has reached the 21st century is considered “inadequate” and in need of biotechnological upgrading, this is not due (says the dominant rhetoric) to the designs of the bosses but to the fact that machines have evolved so much that the human species keeps losing ground in relation to them! It is machines-as-machines that demand, require, impose what is called “transhumanism”; not the capitalism of the 4th industrial revolution!!
And if the senses and rhythms of human bodies need intensive improvement, if whatever naturalness remains is now considered desperately slow and incompatible, this is not due to the acceleration of capitalist accumulation/circulation but to the fact that machines-as-machines “work” in nanoseconds! As drivers (mere “drivers” of the capitalist 20th century), humans are bad: they make mistakes, they are careless, they cause accidents… “Smart” cars will relieve them of this irresponsibility. As senses they are inadequate: they definitely need sensors everywhere. As memory they are useless (or even dangerous): they need terabytes of digital memory. And, as is well known, even the human immune system is useless; it needs to become artificial.

The more complex, “creative,” “intelligent,” and all the other human traits a machine appears to exhibit, the more magical it is considered. From the hydraulic automata of Hero of Alexandria in the 1st century A.D. to the impressively intricate and finely tuned automatons of the Western world in the 15th, 16th, and 17th centuries, “magic” was the most common result. The difference being that, over time, this technical “magic” did not remain confined to the realm of aesthetics and entertainment.
This is where technological, capitalist metaphysics can produce (and does produce) tangible social outcomes, far beyond the standard applications of “smart” machines. It suffices for people to believe that a machine is intelligent or sentient for every question—political or philosophical—regarding intelligence to be dissolved and, even worse, for the priority of the human species over machines to be undermined.

A complex algorithm that cannot distinguish dogs from cats is not considered “stupid”! It is simply not built for that job. Should another complex algorithm, one that does distinguish dogs from cats, be considered “intelligent” just for that? It is simply built to make that distinction. However, the fact that the user perceives the machine only in terms of its function and not of its structure (of which they are otherwise ignorant) creates the impression of a machine “cleverness,” an “inherent” capacity.
Just as Marx commented on commodity fetishism, so it is with machine fetishism—the attribution of being to the machine: it is, on the one hand, the goal of the machine’s masters and, on the other, a self-devaluation of the human. If machines are “beings,” then of course they can be smart, affectionate, and friendly—but then human beings will be measured against this mechanical “life” and constantly found wanting.

“Reception” robot at the Winter Olympic Games in Beijing.

It is not a matter of misunderstanding! It is a demonstration of the machines’ unparalleled superiority, so much so that humans should worship them. Even if they “copy” them.
A few decades ago, so many that no one wants to remember them and so few that they form merely the historical backdrop of “artificial intelligence,” a mathematician and one of the pioneers of cybernetics (the inventor of the term, after all), Norbert Wiener, author of the reference work The Human Use of Human Beings: Cybernetics and Society, had the audacity, or the courage, to write among other things:

… Here I want to interject the semantic point that such words as Life, Purpose, and Soul are grossly inadequate to precise scientific thinking. These terms have gained their significance through our recognition of the unity of a certain group of phenomena, and do not in fact furnish us with any adequate basis to characterize this unity. Whenever we find a new phenomenon which partakes to some degree of the nature of those which we have already termed “phenomena of life,” but does not conform to all the associated aspects which define the term “life,” we are faced with the problem of whether to enlarge the word “life” so as to include them all, or to define it in a more restrictive way so as to exclude them.
We have encountered this problem in the past in considering viruses, which show some of the tendencies of life—to persist, to multiply, and to organize—but do not express these tendencies in a fully developed form. Now that certain analogies of behavior are being observed between the machine and the living organism, the question of whether the machine is alive or not is, for our purposes, a semantic one, and we are at liberty to answer it one way or the other as best suits our convenience. As Humpty Dumpty says about some of his more remarkable words: “I pay them extra, and make them do what I want.”

If we wish to use the word “life” to cover all phenomena which locally swim upstream against the current of increasing entropy, we are at liberty to do so. However, we would then include many astronomical phenomena which have only the shadiest resemblance to life as we ordinarily know it. It is therefore, in my opinion, best to avoid all question-begging epithets such as “life,” “soul,” “vitalism,” and the like, and to say merely of machines that there is no reason why they may not resemble human beings in representing pockets of decreasing entropy in a framework in which the large entropy tends to increase…

We are not free! Words and meanings have acquired owners; and it is the owners who pay them extra, to make them do and say whatever they want!
We must protect, save, and liberate life—and its words.
Otherwise, machines will have consciousness… the consciousness of their masters / owners.

Ziggy Stardust

  1. The basic elements regarding research on artificial intelligence come from the book Inhuman Power by Nick Dyer-Witheford, Atle Mikkola Kjøsen and James Steinhoff, Pluto Press, 2019.
  2. The ability to generalize, undoubtedly a capability of the living (and not only of the human species), is certainly interesting. It is probably related to the fact that life as such must “cope” with its environment in some way, with a certain “economy of stimuli.”
    Beyond that, however, and especially for our species, we may wonder whether the ability to generalize is always and everywhere the same, or whether it is influenced by the cultural movement (of the species). In conditions of generalized virtuality and fragmentation, for example, such as the present and coming ones, will this ability remain what it was, say, half a century or a century ago? Or is it already declining into a usually superficial empirical assembly of impressions, useless or even dangerous for everyone except those in power? Could it be that we live in conditions where the human, living ability to generalize is becoming increasingly impoverished, allowing its mechanical version to emerge at some point as the more capable one?