putting consciousness on the anatomical table

If one takes stock of the declarations of scientists (and their funders), the last terra incognita that the techno-scientific spirit is called upon to conquer is nothing less than consciousness itself. There is, of course, a long series of discussions and debates within the history of Western philosophy around the nature and structure of what could generally be rendered in terms such as consciousness, thought, mind, cognition, etc. What “now” – that is, in recent decades – seems to be generating waves of optimism that the time has finally come to solve the thorny “problem” of consciousness once and for all is that consciousness has at last fallen within the jurisdiction of science, which has managed to develop the appropriate methodological and technological tools to study cognition “objectively.”
The substantive branch of study that will thus enlighten us on issues that have occupied humanity for millennia has found a home under the name “cognitive science,” naturally accompanied by the corresponding conferences, journals, university departments, and study programs that ensure the required institutional entrenchment (and the privileges that come with it). Strictly speaking, it is not a field of research that can demonstrate (at least not yet) the degree of theoretical coherence found in more traditional sciences such as physics. Nevertheless, a distinct core of theoretical commitments can be identified, which affords the scientists involved a level of mutual understanding and could be encoded as follows: the subsumption of the “problem of cognition”, along with all the contributing disciplines (linguistics, anthropology, etc.), under the computational paradigm. It is therefore hardly surprising that the problem is often translated into an (alleged) equivalent: is it possible to build machines that think? This is the fundamental question of so-called “strong” artificial intelligence, and an affirmative answer (especially one showing how it could be achieved) would automatically pave the way for understanding human intelligence as well.
Intelligence, consciousness, cognition, thought: are all these terms we have used so far equivalent and interchangeable? Do they refer to the same “thing”? It depends on one’s perspective. For artificial intelligence and the cognitive sciences,1 any differences seem rather insignificant, with “intelligence” having come to take the lead. From a historical-genealogical perspective, however, “intelligence” is merely the final and very recent product of a long process of intersections, selections, and exclusions of earlier concepts; a process that had little to do with any “innocent” scientific curiosity. What historical memories, then, can a concept like “intelligence” carry within it? Is it a mute “thing,” a self-evident object for scientific study, or can it “speak”?
the Turing test
In his famous article “Computing Machinery and Intelligence,” Turing carefully avoids delving into (and getting lost in) philosophical-historical labyrinths around the concept of intelligence, starting as follows:
“I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think.’
…
Instead of attempting to give such a definition, I shall replace this question with another which is closely related to it and is expressed in relatively unambiguous words.
In its new form the problem can be described in terms of a game which we will call the ‘imitation game.’ It is played with three players: a man (A), a woman (B), and a third person (C), the interrogator, who may be of either sex. The interrogator is alone in a room, isolated from the other two players. The object of the game for the interrogator is to determine which of the other two people is the man and which is the woman. He knows them only by the letters X and Y, and at the end of the game he will say either ‘X is A and Y is B’ or ‘X is B and Y is A.’ The interrogator is allowed to ask questions of A and B of the following kind:
‘C: Will X please tell me the length of his or her hair?’
If we now suppose that X is A, then A must answer. The object of the game for A is to deceive C and lead him to make an incorrect identification. His answer might therefore be:
‘My hair is cut short in the French style, and the longest strands reach about twenty centimeters.’
To avoid the risk that the tone of voice might help the interrogator, the answers should be written, or even better, typewritten. The ideal solution would be for the two rooms to communicate via teleprinter. … The object of the game for the third player (B) is to help the interrogator. The best strategy for B, that is, the woman, is probably to give honest answers. She could add phrases to her answers such as ‘Don’t listen to him, I’m the woman!’ But this would be of no help, since the man could also make similar remarks.
Now let us ask: ‘What would happen if in this game the role of A was taken by a machine?’ In this case, would the interrogator make incorrect identifications as often as when the game is played between a man and a woman? These questions replace our original question, ‘Can machines think?’2
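Purely for orientation, the bare mechanics of the game Turing describes can be rendered as a small sketch (ours, not Turing’s; the “players” are canned-answer stand-ins and the interrogator guesses blindly, since the point here is only the protocol of typed questions and answers passing between an isolated interrogator and two hidden respondents):

```python
import random

# A purely illustrative sketch (ours, not Turing's) of the imitation game's protocol.

def player_a(question):
    # A tries to mislead the interrogator (in Turing's variation, a machine
    # would take this role).
    return "My hair is cut short in the French style."

def player_b(question):
    # B tries to help the interrogator.
    return "Don't listen to him, I'm the woman!"

class NaiveInterrogator:
    # A stand-in interrogator: asks a fixed question, then guesses at random.
    def ask(self, label):
        return f"{label}, will you please tell me the length of your hair?"

    def identify(self, transcript):
        return random.choice(["X", "Y"])  # the label believed to hide player A

def imitation_game(interrogator, rounds=3):
    # Hide the respondents behind the neutral labels X and Y; only text passes between rooms.
    labels = dict(zip(random.sample(["X", "Y"], 2), [player_a, player_b]))
    transcript = []
    for _ in range(rounds):
        for label, player in labels.items():
            question = interrogator.ask(label)
            transcript.append((label, question, player(question)))
    guess = interrogator.identify(transcript)
    return labels[guess] is player_a  # True if the interrogator found A

if __name__ == "__main__":
    wins = sum(imitation_game(NaiveInterrogator()) for _ in range(1000))
    print(f"correct identifications: {wins}/1000")  # around 500 for this blind guesser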
Within certain circles associated with artificial intelligence, there appears to be a tradition of “simulating” distinctly human activities and states through games, and more specifically through a particular category of games that involve some form of deception (the “prisoner’s dilemma” is another such case, frequently used in game theory). Such a choice could by itself raise suspicions regarding the (epistemological, not necessarily moral) legitimacy of the “simulation.” However, we will not dwell on this issue here. Let us assume that it makes sense to use this game as a criterion for intelligence. Turing himself may (believe that he) avoids giving clear definitions of the terms “machine” and “think,” but it is not difficult to derive these definitions, even indirectly, by tampering with the game at critical points and observing when it collapses (i.e., ceases to make sense according to its original purposes) and when it can continue to function unimpeded.
Within the framework of the game, the isolation of the players (or of the player and the machine) from the interrogator appears, in the first instance, to serve methodological purposes. Isolation is not an end in itself, but functions as a mechanism of fairness and protection for the player (or the machine) who has the disadvantage of not being what they pretend to be (the man pretending to be a woman, the machine pretending to be a conscious being). From this perspective, therefore, it should not constitute any fundamental feature of the game. And yet this initially methodological rule might conceal within it certain consequences of an ontological type. Suppose a further step is added to the game: the unveiling. After the end of a match, the real identities of the players are revealed. How would the one playing the interrogator react upon learning that the player they identified as human was in fact a machine? In the best case, an open-minded interrogator could continue the discussion with the machine (if they found it interesting); let’s say about the beginnings of the art of cinema. However, they would not be able to watch an Eisenstein film together with the machine. The objection here would be that this too is possible; for example, the machine could “read” the movie file while it is playing. And yet the interrogator would still not be able to go out to the cinema together with the machine. And if the machine were a robot? Wouldn’t that also be possible? Correct. But could it pay for a ticket? Or realize that it doesn’t need to pay because the cashier is a friend of the interrogator and lets them in without a ticket? Or…, or…, or generally become involved in the whole spectrum of human activities (let’s say, have erotic desire and reproduce)?
So as not to belabor the point, one could answer affirmatively to all these “or”s, even the last one. Only, in that case, the initial game has lost its meaning and the criterion of intelligence has been stretched so far that it coincides in every respect with the criterion of being human: a machine possesses intelligence when it is human. Once the restriction of isolation is lifted, it seems almost impossible to stop the chain of additional criteria. To the extent, however, that one accepts that isolation is ultimately a crucial element for the game to retain its meaning, it automatically follows that a basic criterion for intelligence is dialogue, and indeed dialogue in its “written,” disembodied form.3
Continuing his article, Turing proceeds to examine the kind of machines that would be suitable to take part in the imitation game; and these, of course, are none other than digital computers. And since these machines were a relatively recent technological achievement (the article was written in 1950) and it could not be taken for granted that readers knew what they were about, Turing attempts a description of their basic operating principles:
“We can explain the basic idea behind digital computers by saying that the purpose of these machines is to perform all the operations that a human computer could perform.4
The human computer is supposed to follow specific rules; he has no right to deviate from them in any detail. We can assume that these rules are provided to him in the form of a book, which changes each time he is assigned a new job. He also has an unlimited amount of paper to use while performing his calculations.
…
The book of rules with which we have equipped our human computer is, of course, an invention that facilitates our work. In reality, human computers simply remember what they need to do. If someone wants to build a machine that mimics the behavior of a human computer while performing a complex operation, then he must ask him exactly what he does and then translate his answers into the form of a table of instructions.”5
What would happen, then, if the machine were not a digital computer? If by this term we refer to its construction materials, that is, to an electronic computer, then replacing the machine with another made of different materials would have no consequence for the game. It would be equally fair to use an electromechanical computer (as indeed the first computers were) or even one constructed from hydraulic components. Under one condition: that all these machines encode a number of internal states in their material elements (in electronic computers this is done via the voltage of their electronic components; in hydraulic ones it could equally be done via pressure), as well as the rules governing the transitions between states, in the form of a table of specific, unambiguous instructions. If this condition is satisfied, then the construction material of the machine is irrelevant.
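As a purely illustrative aside (a minimal sketch of our own, not anything Turing wrote), the condition just stated can be made concrete in a few lines of code: all that matters is the set of internal states and the table of unambiguous instructions, and whether those states live in voltages, water pressures, or the entries of a Python dictionary is beside the point.

```python
# A minimal sketch (ours, purely for illustration) of the condition described above:
# the machine is nothing but a set of internal states plus a table of unambiguous
# instructions.

# Instruction table: (state, symbol read) -> (symbol to write, head move, next state).
# This toy table adds 1 to a binary number written on the tape.
RULES = {
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", -1, "done"),
    ("carry", " "): ("1", -1, "done"),
}

def run(tape, state="carry", head=None):
    tape = list(tape)
    head = len(tape) - 1 if head is None else head
    # The "control unit": it blindly looks up each (state, symbol) pair in the
    # table and complies, indifferent to whatever the symbols are supposed to mean.
    while state != "done":
        symbol = tape[head] if 0 <= head < len(tape) else " "
        write, move, state = RULES[(state, symbol)]
        if head < 0:
            tape.insert(0, write)
            head = 0
        else:
            tape[head] = write
        head += move
    return "".join(tape).strip()

print(run("1011"))  # -> "1100" (11 + 1 = 12 in binary)
print(run("111"))   # -> "1000" (7 + 1 = 8)
```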
However, there is a crucial difference between a human computer and a digital computer (regardless of hardware), even if it is granted that the book of rules contains unambiguously defined instructions: the capacity for refusal. Of the human computer, Turing says that “it is assumed that he follows specific rules; he has no right to deviate from them,” without, however, saying how this “assumption” is ensured. Taking the human/digital computer analogy one step further, it is clear that the existence of a supervisor would ensure the compliance of the executor of the operations (Turing himself repeatedly uses the term “compliance”).6 Only that, in the case of the machine, the supervisor is not an element external to it, but part of its internal architecture (in technical terms, it is called the control unit). Which means that, for the analogy to be accurate, the executor must essentially have internalized compliance; and there is one safe way to ensure the internalization of compliance: indifference to the content of the symbols and instructions that the executor is called upon to handle. Or, in other words, a kind of intellectual numbness.

cognitivism and critiques
The above may strike some readers as moral-evaluative judgments. Words such as obedience, supervisor, and indifference are naturally prone to trigger defensive reflexes. However, it is more than likely that Turing had no such intentions, nor did he see anything problematic in their use. A contemporary introductory handbook to the research field of “mind” and “intelligence” could repeat exactly the same things at a more abstract philosophical level, cleansed of references to “anthropomorphic metaphors”:
• Thinking consists in the algorithmic manipulation of mental symbols (a toy sketch of what this could look like in code follows the list).
• Symbols, as internal representations, have arbitrary (more precisely, conventional) meaning (as in language).
• The mind possesses internal states that are causally connected to one another through algorithmic rules…
• … and it is independent of the physical medium in which it is implemented; or, put differently, there exists a formal equivalence among all systems that could be characterized as sentient.
• Independence from the medium does not mean “spirituality.” Digital computers themselves are an example of how symbol manipulation is possible without presupposing “spiritual” substances.
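Purely for illustration (a toy sketch of our own, not taken from any handbook), the first of these propositions, read together with rules of the “If MAMMAL then ANIMAL” type mentioned in note 8, can be made concrete in a few lines of code; the procedure grinds through symbols whose meaning it never touches:

```python
# A toy illustration (ours) of "thinking as the algorithmic manipulation of symbols":
# a tiny forward-chaining rule system in the style of the classic symbolic models
# (see note 8). The tokens MAMMAL, ANIMAL, MORTAL are arbitrary; the procedure would
# run identically over any other symbols, which is precisely the cognitivist point
# about conventional meaning and medium independence.

RULES = [
    ({"MAMMAL"}, "ANIMAL"),   # if MAMMAL then ANIMAL
    ({"ANIMAL"}, "MORTAL"),   # if ANIMAL then MORTAL
]

def infer(facts):
    # Apply the rules repeatedly until no new symbol can be derived.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer({"MAMMAL"}))  # -> {'MAMMAL', 'ANIMAL', 'MORTAL'} (in some order)
```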
The above series of “official” propositions (or aphorisms, perhaps) lies at the heart of the modern paradigm of the cognitive sciences, the one sometimes referred to as “cognitivism.”7, 8 If we have chosen the more difficult path that starts from Turing’s article, written almost 70 years ago, instead of presenting the axioms in their “pure” epistemological form, one reason (beyond narrowly defined historical interest) is to show the following point, important in our opinion: what today appear as “simple” scientific positions and hypotheses have their origins, and are rooted, in specific historical and social conditions, which bequeath to scientific theory “metaphors” and “patterns of reasoning” that subsequently crystallize into “pure” and abstract positions. The conventionality of symbols is by now common ground in linguistics (and, by definition, in computer science). The indifference towards the contents handled by Turing’s executor says essentially the same thing. It also looks suspiciously similar, however, to what some would call “alienation of labor.” But we must start further back.
Before we move on to purely genealogical matters, and for the sake of completeness, it is worth noting that cognitivism does not constitute an unassailable paradigm. Despite its dominance, criticisms have been raised against it even from within the cognitive sciences. Many of these focus precisely on the problem of meaning, attempting to show that, in order to solve this problem on the assumptions of cognitivism, one would have to posit another mind within the mind that ultimately integrates the internal representations into a meaningful whole (something like Descartes’ homunculus). The problem is thus merely displaced, not resolved, since the internal homunculus would in turn need to be explained. Another line of criticism focuses on the role of the body and its absence from the cognitivist program. Theories of the extended mind, for example, view even the objects of external space as part of the mind. Without delving into these theories, they appear at first glance to be desperate attempts that easily lead to an unbridled idealism (“everything is in the mind”)9. More interesting are the theories of “embodied cognition,” which also stem from the criticisms of the philosopher Hubert Dreyfus and attempt to demonstrate the role of the body in cognitive processes. There is some ambiguity in the relevant discussions regarding how broadly the role of the body should be taken. In some cases, it seems to mean simply that the body’s biological functions influence cognition. In others, “the role of the body” extends even to what we might call the “cultural context.” Of course, in that case, by opening up to broader environmental factors, the entire cognitivist approach is invalidated (somewhat like the evolution of genetics after the introduction of environmental factors). Hmm, in another fifty years or so, and with a bit more funding, all these “philosophers of mind” might even discover that consciousness can also be class-based (we don’t even discuss this with the scientists; theirs is the sleep of the just)! Incidentally, when Dreyfus published his first criticisms of artificial intelligence, almost 40 years ago, he received furious (and crude) attacks, and he still provokes discomfort, even though he is not a radical thinker in the socio-political sense. His references are Merleau-Ponty and Heidegger.

from the soul to consciousness
From one perspective, searching through the history of ideas for correspondences with contemporary concepts is a futile endeavor. As one might expect, it is not at all uncommon for terms belonging to today’s vocabulary to be entirely absent before a certain historical point. Even in cases where they can be found, understanding them requires their integration into the conceptual universe of their time (or, speaking more philosophically, their historical concretization) rather than their linear projection into the past; although this is precisely what many conventional historical narratives do, in an effort to show that all past discussions were simply a preparation that culminates, as if by necessity, in the present.
A typical example of a concept that appeared at a specific point in time and from that point on underwent multiple transformations to arrive at its current meaning is that of consciousness. As a word, it is completely absent from ancient Greek literature. If one “must” identify a concept closely related to that of consciousness, it would rather be the “soul,” which, however, is still separated by a vast gap from what we would today denote either by the concept of “consciousness” or by that of “soul.” A crucial point of differentiation from later developments is that the ancient Greek soul did not necessarily carry spiritual connotations referring to immaterial substances or, where it did, this immaterial aspect had an essentially different character from the immaterial substances of Christianity. A dominant perception of the soul considered it a kind of worldly force, inherent in material things themselves, even in those which, in modern terms, we would call inanimate (the classic example here is magnets, which were considered animate due to their ability to move other objects without direct contact). For Aristotelian theories, on the other hand, and as regards the human being himself, the branch of psychology was simply a part of physics, the part that applied specifically to living beings. As the force that shapes the body, the soul maintained an inseparable bond with it, and for this reason the notion of an individual soul surviving after death was naturally rejected. Just as the body gradually decays and declines after death, so too must the soul follow the same path. As a force of entelechy, on the other hand, it was also what provided purposes to the body, which in turn meant that no special theory of psychological motives was necessary. What did have, in a sense, a transcendent and timeless character for Aristotelian theories was Reason [Logos]. It is important to understand here, however, that Reason was not an individual possession of isolated subjects. On the contrary, it referred to a kind of divine Intellect, supra-personal and universal, which provides the first principles for the order of the world.
The exception to the rule that rejected notions of an individual soul was, of course, Plato, who “first” accepted that the soul has the ability to survive after death (although he most likely borrowed these ideas from Orphic and Pythagorean currents). Even in Plato’s case, however, the soul did not constitute the center around which spiritual activity revolved, but only an intermediary principle between the sensible world and the Logos. The existence, here too, of the Logos as the ultimate and universal reference point is of crucial importance for the following reason: the purpose of the soul was never, in the ancient world, some kind of individual self-realization or individual salvation. Consequently, neither could the ethics that emerged be confined within individual frameworks; on the contrary, it always looked toward the public sphere. Within the ancient city, where political action was part of daily activity (for free citizens, of course), every notion of virtue had to be connected with politics and therefore with the public sphere. Nor could it be detached from action itself, as something radically distinct from “internal” motives and thoughts. “You are what you do publicly” (and not “you are what you think privately”): this is how the then-dominant perception of the role of the soul could be concisely (and slightly paradoxically) rendered.
Not coincidentally, the introduction of the word “consciousness” comes from the philosophical movement of Stoicism (via the Latin term conscientia). Why was it no coincidence that consciousness was born at that moment? The transition from the political model of the city-state (with its active political participation) to that of the empire (with the resulting concentration of political power) brought with it significant shifts in broader perceptions of morality (among other things). Stoicism, by proposing as a moral ideal the state of ataraxia in the face of external worldly events, which in any case lie beyond an individual’s control, automatically takes a first important step toward the individualization of morality. According to the Stoic view, although the individual cannot control the external world, they do have the capacity for internal control: the ability to exercise self-restraint and to curb the passions so as not to be carried away into situations beyond their control. If it still isn’t clear how consciousness enters the picture, the reason is simple: you probably understand the word “consciousness” as a neutral, epistemological term, such as the one supposedly studied by “philosophers of mind.” Consciousness, however, was born as a moral term, referring to that kind of awareness, present (and distinct) within each individual, which is required to examine and control the passions of the soul, in accordance with the demands of Stoic philosophy. It was precisely the sense of a daily moral struggle within each person that led to the perception that there exists something called (moral) consciousness.
The prevalence of Christianity naturally gave a decisive impetus toward the further personalization of morality and the soul; here, a representative position is held by Augustine (4th–5th centuries AD). For early Christians, epistemological questions concerning the natural order of the world were set aside to make room for purely moral issues. The role of material objects in the world is essentially limited to merely providing the means by which the soul will achieve individual salvation. If there is any law governing the world, it is God’s moral law, not the worldly law of nature. Augustine (many centuries before Descartes) treats the sensible, natural world as a continuous source of uncertainties upon which nothing solid can be built. Therefore, the only source of certainty can be consciousness—which is now entirely internalized—turned toward itself. And just as God, now as a person (far removed from the impersonal, Aristotelian divine Intellect), stands outside the world as its creator, so too does the individual, created in the image of God, now stand not within, but opposite to the world. The first deep chasm between the thinking subject and the objective world has been created.
The Middle Ages moved more or less along similar lines of thought. What is interesting here, however, is that they preserved, to a small extent, some of the older conceptions of the soul as a formative force. Especially towards their end, when the Christian world began to encounter Arabic thought with its dynamic Aristotelianism (which in some cases was so far ahead of the Christian world that it had reached explicitly atheistic positions), it was forced to bring certain epistemological issues back to the foreground, attempting to incorporate some Aristotelian patterns of thought into Christian doctrine (in a highly diluted form, of course).10 By then, however, the game was already lost. As the Middle Ages came to an end, the soul as the internal, active force of the individual re-emerged more sharply, this time reaching even more extreme positions. Conceptions of the soul that previously pertained to moral issues were now transferred almost unchanged to epistemological ones. Not only is the soul essentially heterogeneous in relation to the external world, it does not even have direct access to it. What it can know are only the representations of things within it, which are merely arbitrary symbols that it imposes upon them. We are not possessors of things, nominalism would say (this was the name of the relevant school of thought), but only possessors of representations and internal states. And thus, approaching the end of the Middle Ages, a complex of concepts emerges regarding the soul and consciousness (internal representations, arbitrary symbols, an individual consciousness detached from the world) that perhaps for the first time begins to take a shape resembling all too closely today’s “scientific” positions of cognitivism.
from consciousness to intelligence
With the entry of Western European societies into what is often called modernity or, to use more historical and sociological terms, into the era of the emergence of the first bourgeois-capitalist forms of social organization, the outlines of “consciousness” were reshaped. A development of fundamental importance was, naturally, the appearance of mechanics within physics, whose influence did not remain strictly within the bounds of science. Serving as a model of certain knowledge, it was optimistically transferred (as a model of thinking) to other fields as well. One of these was the understanding of the human body. Until the emergence of mechanics, matter (or rather its conceptualization) had managed to retain some of its “ancient” characteristics, especially that of entelechy, that is, the existence of purposes within it. Mechanics drove the final nail into the coffin of such perceptions. Since it was now possible to give a description of matter based solely on deterministic and mechanical laws, there was no reason to maintain remnants of teleology. It may sound paradoxical, but the mechanization of matter also resulted in the complete spiritualization of thought. Purposes can be eliminated from matter, but they remain a basic element of human action. And if they can no longer be located in the body (which, as material, follows strict mechanical laws), then they are transferred entirely into the consciousness of the thinking subject. The result was an even deeper split between body and thought, a dualism of extreme form, of which Descartes is the characteristic case. In many ways reiterating Christian motifs, the Cartesians held that the function of the body was to create passions that the soul had to tame.
Descartes is often made to bear the burden of being considered the father of modern philosophy. Without this being necessarily an incorrect view, from another perspective he was also the last of the medieval philosophers. During the Middle Ages, the Logos had lost its impersonal character (since it had been forcibly linked to the Christian God, who was a person), but even in Descartes it retained an element of universality, as a reflection of the cosmic order deriving from God. Its complete fall from the pedestal of universality became possible only in the 18th century, in the cradle of capitalist development: Britain.11 In a society like the British one, rapidly transforming into a society of individuals, that is, of competing subjects who at every moment had to calculate gains and losses, the universality of the Logos no longer had a place. This social condition was also expressed in the psychological theories of the British philosophers (the so-called current of empiricism), according to which Reason is fully transformed into private thought. It becomes the tool that each person has at their disposal in order to weigh situations and calculate by what means they can achieve their goals and maximize expected pleasure. And thus another piece of the cognitivist puzzle falls into place: thought is calculation.
Yet another characteristic of bourgeois life, beyond gains and losses, was a sense of precarious social identity, since distinctions based on earlier divisions of inherited rights or occupational status no longer applied. At the same time, the new, highly competitive environment in which people grew and made their way did not allow for stable alliances, and the expectation of forming enduring social relationships was simply naive. To the extent that the sense of self lost its grounding in the social environment, it inevitably turned inward. Descartes could doubt everything except consciousness itself. For the British philosophers of the 18th century, even the sense of self was not to be taken for granted; it too required explanation. The “solution” they found to this peculiarly bourgeois split was simply to internalize it. Since the sense of self-continuity could no longer be anchored in one’s relationship with the external environment, it had to be separated from the individual’s actions. The self is not what one does, through embodied engagement with the world, but must be sought only within the soul’s inner states: in its representations and internal motives, whose combination sets the mental mechanism in motion. Hence a second contribution of British empiricism to the puzzle of cognitivism: thinking consists of strictly internal states of consciousness, independent of the body, which themselves function like a mechanism.
We have already come very close to the modern concept of intelligence. Almost the only thing missing was the word itself. The first appearance of the word “intelligence” in an English dictionary dates to the early 20th century, and its origins lie in the theories of evolutionary biology of the second half of the 19th century and in the way these were translated into the social sphere. As its name suggests, what evolutionary biology managed to achieve was to unify the entire animal kingdom into a single evolutionary chain. At the top, of course, remained man, but he now had to share with the other animals traits previously considered exclusively his. One of these was the capacity for Reason, which had in the meantime, however, been transformed into a capacity for calculation. Gradually, the word Reason, referring to outdated theories that concerned only the human species, began to be abandoned, eventually to be replaced by the word intelligence. With one further addition this time. According to the theories of evolutionary biology, and also those of social Darwinism, which was not long in appearing, every organism, if it wants to survive, must constantly adapt to an environment that is basically indifferent to it. In other words, every organism is subjected to a continuous test imposed by its environment. Real intelligence, therefore, is also an indicator of the organism’s adaptability to the challenges it faces. The ground was now ready for IQ tests to make their appearance.
To put it in somewhat Foucauldian terms, it was precisely the techniques of measuring and recording IQ that ultimately placed intelligence at the center and gave it the position it holds today. And, as one would expect, this development was directly related to the social developments of the early 20th century. The restructuring of the social field on industrial lines also dragged along the educational system, transforming it according to Fordist models of organizing production: compulsory education, standardized curricula, the segmentation of programs, examinations, etc. The problem with the new educational model was that a significant percentage of students seemed unable to learn, yet without showing any symptoms of psychopathology that would allow their failure to be attributed to some mental disorder. These “maladjusted” students eventually found their explanation in the IQ tests of the experimental psychologists. Their failure was due to their low intelligence, that is, to their inability to adapt to the educational environment, and this deficiency could now be measured and predicted.12

about the Turing test again
Let us return to the original question. Is the Turing test, after all, a legitimate way to detect the existence of intelligence? Certainly. If by intelligence one means that Christian-originated appendage, with its almost puritanical “fear” of the body, which folds in on itself to count its small change and see where it falls short, then yes, it is the ideal test. When Turing, in the middle of the 20th century, wrote his article on thinking machines, he inherited a concept of intelligence that had already become commonplace in Western societies. And so he could use it rather unproblematically (uncritically, we would say) and be certain that his readers would understand what he was talking about. Tests had already become common practice. Perhaps he himself had taken an IQ test in his childhood, and we are sure he would have had no difficulty achieving a score close to 140 (by definition, 100 is the average). If he had one certain talent (apart from mathematics), it was that he could grasp, even if unconsciously, the spirit of his era and rephrase the problems of his time in a language so simple (almost simplistic) and direct that it becomes hard to see how things could possibly be otherwise.
However, the work of criticism is to search for this “otherwise”; or at least the possibility of otherwise. And conversely, the sleep of criticism gives birth to monsters – which may have the mathematical accuracy of logical machines: like the one hidden behind “intelligence”.
Separatrix
cyborg #08 – 02/2016
1. Here we use the term “artificial intelligence” roughly as a synonym for the term “cognitive sciences.” This, of course, does not mean that they are identical, nor that all cognitive scientists consider the creation of artificial intelligence feasible. The reason we use them almost as synonyms is that they are based on a common core of assumptions, as will become apparent. Also, we do not deal here with that other kind of artificial intelligence, machine learning, which tries to find smart algorithms for specific problems without caring whether these have any relation to human intelligence. ↩︎
2. As translated in the book “The Mind’s I” (eds. Douglas Hofstadter and Daniel Dennett), which includes an extensive excerpt from Turing’s original article. ↩︎
3. By the term “written” form, what should be meant here is the digital, symbolic form that (usually) characterizes written language, in contrast to orality, in which the analog, fluid element predominates and which is necessarily supplemented by bodily presence (except in cases of technical mediation). ↩︎
4. We remind the reader that, before the advent of computing machines, there was a special professional category of people whose job it was precisely to perform numerical calculations. Turing does not use the phrase “human computer” here metaphorically, as a thought experiment. He means it absolutely literally. ↩︎
5. The excerpt that describes digital computers is not included in the book by Hofstadter and Dennett. ↩︎
6. The interesting point here is that, in this case, the existence of the rulebook becomes necessary (and not just a convenient invention), so that the supervisor can check that the executor is indeed performing the computational steps according to the rules. If it is assumed that the supervisor could also remember the rulebook, so that its existence is again unnecessary, then what is the need for the executor? The supervisor could do the work alone. And then someone would be needed to supervise the supervisor! Are these games, in the end, entirely without seriousness? Not at all. Some people take them very seriously and build entire theories and practices upon them, without realizing their ultimate consequences. ↩︎
7. From the English term “cognitivism”. We avoid rendering it as the cacophonous “γνωσιακισμός” (gnosiakismos), although it might have made for a witty wordplay, with the addition of an extra “κ”: gnosi-akkismos. ↩︎
8. Clarification: some years ago, within the cognitive sciences, it was fashionable to distinguish between symbolic and connectionist approaches. The former used rules of a logical type (If MAMMAL then ANIMAL, If ANIMAL then MORTAL) to model mental processes, while the latter used network models, such as neural networks. Connectionist approaches are not outside the framework of cognitivism: they too are algorithmic methods for handling symbols, with the difference that internal representations are “distributed” throughout the network and that the symbols they manipulate exist at a lower level of abstraction. In any case, this is a debate that is by now obsolete. ↩︎
9. Reproducing thus, in a way, the earlier course from Cartesian representation to Berkeley’s idealism. ↩︎
10. Thomas Aquinas (13th century) is the most characteristic case. ↩︎
11. In other countries the development was not the same, the most characteristic example being the German philosophical line that starts from Kant, passes through Hegel, and reaches Marx. The point, however, is that these philosophies played no role in the history of intelligence that interests us here. They were essentially ignored by the later psychological theories that eventually gave birth to the concept of intelligence. ↩︎
12. IQ tests were of course used in other cases as well, e.g., to “explain” the low intelligence of individuals of races other than the white one. For blatantly racist purposes, that is… ↩︎