Do machines live?

The same year that Norbert Wiener published his treatise on the "human use of human beings," in 1950, another exceptional figure in the history of computer science, Alan Turing, posed the question of whether machines can think. Turing, a clever man (at least in the scientific fields he worked in), wanted to avoid philosophical discussions about what "thought" and "intelligence" are, and so transformed his original question into whether machines can mimic thinking; and, ultimately, into whether machines, by "answering questions" and "conversing" with humans, can convince them that they too are human.

From the mid-1960s to this day, the Turing test has been one of the international procedures (in the form of competitions) used to monitor the development of artificial intelligence. In the latest instance of this technological tournament, the competition held by the Royal Society in London in 2014, a Russian-built program (Eugene Goostman), which played the part of a young Ukrainian speaking English as a second language, convinced one-third of its conversation partners that it was human. Since Turing had set the persuasion threshold at 30%, the fact that 33% of the judges were deceived was taken as proof that the Turing test had been passed for the first time, and hence that machines can think.

The successive shifts that Turing cleverly chose in his 1950 question, from thinking to the imitation of thinking, and from the imitation of thinking to the successful deception of a percentage of interlocutors, have the advantage of locating the answer always in the present moment and always within a specific environment, one in which it is expected (probable) that some machine / computer program can deceive some people into believing that it thinks. There is, however, a second pole: do the human interlocutors (who may or may not be deceived) think? What do they think? How do they think?

Although it is reasonable that the final judges in the Turing test, and in every similar question about the "intelligence" of new machines, will be humans, it is far from clear that the way thinking or intelligence is judged is an ahistorical, supra-social constant. Different subjects, under different conditions and in different cultural environments, assess what "thinking" or "cleverness" is by different (even completely different) criteria. A mathematician, for example, who struggles for months to solve a complex problem (and solves it) while being unable, or too indifferent, even to eat (and thus depends on his wife's care) might be considered extremely clever by his colleagues and utterly foolish by his wife. Or a Turing, who plays a crucial role in breaking the encryption of Nazi military messages in World War II but later admits his homosexuality to police officers (at a time when homosexuality is a crime, so that his admission gets him into serious trouble), could be regarded as extremely smart or as a fool, depending on the circumstances.

The historical, social/class, and cultural relativity of "thought" and "intelligence" is therefore foundational; and it is the invisible basis upon which the assumption that there are "intelligent" machines rests. If human thought changes because it directly or indirectly adopts criteria of mechanical self-confirmation (even if it does not realize that this is precisely what it is doing), then it is certain that machines are intelligent and will become ever smarter; and no Turing test is needed for that. Put differently: if, in a comparison between two terms, one of them steadily degenerates, the other will appear to be steadily improving.

The same applies to the question of whether machines live. In 1950 Wiener offered a basis for recognizing and accepting this (by the experience of the time) oxymoron of mechanical life: its anti-entropic action. This was a strictly physico-mathematical basis, of no particular value in everyday life in the modern world. There is, however, another process that points toward an answer: fetishism. The belief that certain human remains can perform "healing miracles" may mobilize self-suggestion to such an extent that it becomes self-confirming. The fetishistic belief, of the same origin, that modern cybernetic machines perform "miracles" has an even greater chance of self-confirmation: these machines really are effective at whatever they are asked to do.

Far from entropic anxieties and philosophical heights, what counts as life can be so emptied, stripped bare, impoverished, and degraded that the new machines can indeed live. In excellent health and with excellent efficiency (what else does life tend to become under the measurement of youth and performance?); and to live long, provided they are regularly serviced.

Ziggy Stardust
cyborg #03 – 06/2015