The following text is a transcript of a discussion that took place between certain academics in a distant country, prompted by the emergence of ChatGPT a few months ago. First comes the text, without any additional comments of our own. At the end, we present a brief commentary from our side and reveal the details – however, we suggest you don’t peek at the end to learn the specifics; it’s better to read the discussion text first.
X.J.: Today we will talk about ChatGPT and artificial intelligence from the perspective of technology, human nature, and ethics. In recent weeks we have seen various experts speak about ChatGPT from their technical perspective. However, I believe that the emergence of ChatGPT is not simply a technological phenomenon but also a cultural one, closely connected with society, the economy, and politics. For this reason, perhaps we should consider from a broader perspective the impact that this revolutionary breakthrough in artificial intelligence will have on humanity and on individuals. I would like to start by asking X. to tell us what this technology is.
Z.X.: OpenAI has recently released a series of similar products, including ChatGPT. Some of these have better performance than ChatGPT from certain perspectives. I would like to use ChatGPT as a starting point to talk a bit about the meaning of these recent AI (artificial intelligence) technologies and the role they may perhaps play.
As many may remember, a few years ago AlphaGo defeated the top Go players one after another. AlphaGo had already surpassed the various earlier programs that could only produce results based on their programmers’ algorithms; in contrast, it had the ability to learn and evolve after first grasping the basic rules of Go. Deep learning, which first matured in tasks such as image recognition, is now applied to a wide range of problems. It not only has the ability to recognize and learn patterns, but can also generate new content, giving us the impression that it has creative abilities.
Regarding its ability to conduct a conversation, ChatGPT represents a qualitative leap compared to the chatbots of the past. Although its underlying algorithm has not been disclosed, we can make some inferences from the acronym GPT itself. The G stands for generative and the P for pre-trained: unlike older chatbots designed to produce canned real-time responses, ChatGPT rests on a massive model trained in advance, on the basis of which it repeatedly tries to solve whatever problem is posed to it. Finally, the T stands for transformer, an architecture that produces near-human language through multiple layers of transformations.
An approximate explanation of the early versions of the algorithm is as follows: starting from a piece of text, it predicts the next character, then builds up to a whole word and then to a sentence, each prediction conditioned on what came before. We therefore see that ChatGPT does not learn to speak the way humans do, but tries to imitate them as closely as possible. Even if it speaks like a human, it does not think like a human.
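The autoregressive idea sketched above can be made concrete with a deliberately tiny illustration. This is only a toy, assuming nothing about ChatGPT’s actual internals: real GPT models use neural networks over subword tokens and sample from probability distributions, whereas the corpus, the `follows` table, and the `generate` function below are invented for the example and use raw word-bigram counts with greedy selection.

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on billions of tokens.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n_tokens):
    """Greedily extend `start`, one most-frequent next word at a time."""
    out = [start]
    for _ in range(n_tokens):
        options = follows.get(out[-1])
        if not options:  # no observed continuation: stop early
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(generate("the", 4))  # each word is predicted from the one before it
```

The loop mirrors, in miniature, the described behavior: the model never plans a sentence, it only repeatedly answers “what usually comes next?”, which is why it imitates speech without thinking like a speaker.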

X.J.: Thank you, X., for the introduction. Now let’s look at the topic of emotions and let’s talk about the user experience.
J.W.: H. said something very important earlier. ChatGPT seems to talk like a human, but thinks in a completely different way. If, therefore, humans and ChatGPT think so differently, is it possible for them to be on the same wavelength?
Human natural language is a symbolic system that has evolved over a long period, through human exchange and mutual understanding, and it possesses a certain arbitrary quality. Even when people speak, the feelings and ideas they wish to express are never directly connected to language. Nor is human language completely natural and spontaneous; there is always a process of internal translation and transcription. In fact, people are often surprised by the inability of language to express their inner thoughts and feelings. For example, in most languages direct expressions of pain are rare, limited basically to “pain, sharp pain, mild pain,” even though these expressions are insufficient to convey the feeling of pain that individuals actually experience.
If we look at the entire spectrum of emotions, human language is even more fluid. To express a single feeling, people often use figurative metaphors. Therefore, those who use human language creatively inevitably find interesting and unique ways to express a specific emotion.
For this reason, I thought of doing a test with ChatGPT to see if it can understand metaphors. I used Leonard Cohen’s song Treaty, in which he says something like “Everyone seems to have sinned, as if they have lived their lives without being able to rid themselves of their guilt. Even if they are freed from it, they ultimately remain imperfect people, with flaws.” However, he did not express these thoughts directly. Instead, he used a metaphor, talking about a snake that feels uncomfortable because of its sins and thus sheds its skin, hoping that way to leave its sins behind. But even after it has shed its skin, the snake remains powerless and the poison still runs through its body. In other words, Cohen says that, even if you are freed from God and the feeling of guilt, you remain a rotten person and life continues to be painful.
What did ChatGPT understand from this metaphor? It did not comprehend that the snake sheds its skin because of its sins, but because it needed to transform its life in a way that would make it stronger and more resilient. Although it is not a pleasant process, the snake will learn from it and emerge stronger.
Here we can see several characteristics of ChatGPT. First, it knows little about the taboos and cultural background of human language: it does not know the snake metaphor from the Bible and its close relationship with original sin in the Garden of Eden. As a result, the only thing it can do is give an explanation based solely on the data it has been given. Second, I was struck by the fact that ChatGPT is built to give “positive” answers. Even if you speak to it in pessimistic tones, it will give a positive interpretation and try to encourage you and “lift you up.” It therefore begins by denying the snake’s guilt and then emphasizes that shedding its skin will make it better. Third, since ChatGPT cannot grasp the metaphorical content of the sentence, it does not know that the reptile stands for human beings. One reason is that ChatGPT does not confront this human dilemma. Another is that this machine has no lived experience, no body, no social interactions. Thus it struggles to understand the concept of metaphor and why humans might use one thing to talk about another.
In my opinion, we all constantly find creative ways to use everyday language. Precisely because language is so “standardized,” people cannot truly express themselves without this creative ability. Using natural language therefore requires more creativity than we think, and ChatGPT, lacking such creativity, is boring. I have serious reservations about how interesting a conversationalist it will become in the future, and this may prove a serious obstacle for large language models.
X.J.: I have also spent considerable time with ChatGPT over the past few days. My impression is that ChatGPT does first-rate work on tasks involving logic, is second-rate in the knowledge it possesses, and only third-rate in its writing abilities. I say its logic is first-rate because its algorithm is powerful enough to understand language and context, so conversations flow very smoothly. Sometimes the answers it provides are incorrect, but even then its logic remains clear. I call its knowledge second-rate because it has a very strong foundation in English, on the basis of which it can answer almost any question; it knows far less about China, however, and some of what it comes up with on that subject is ridiculous. I say its writing abilities are third-rate because its answers lack personality. They resemble those of a research assistant who has just acquired some basic skills: very standardized and polite, but completely impersonal and devoid of literary talent.
J.Y.: At this stage, I don’t think ChatGPT has helped me in any particular way. Compared with the enormous hype around the Metaverse, however, I have a much better opinion of it. I believe that in the end it will contribute tremendously to the development of human knowledge, writing, and thinking.
There are many people who worry that AI is developing too fast and that humans will end up becoming slaves to technology. To my eyes, however, ChatGPT looks not like an enemy but like an assistant who handles repetitive tasks and boring, low-level processes while I prepare lessons, write articles, and do research. Among other things, it helps me search for and archive information, read recent articles, and so on. With such assistance, people will have the freedom to be more creative in the realms of reflection and knowledge.
Many of my friends, of course, disagree and believe that people cannot develop intellectually without first going through boring and repetitive tasks, such as learning a foreign language or acquiring basic knowledge. In the age of AI, however, there is no reason for people to spend so much time on such repetitive tasks; it would be better to focus on more creative work. I read somewhere that ChatGPT will bring a revolution to educational systems around the world. In all educational systems there is still a large component that is mechanical and repetitive. ChatGPT can take over part of this intellectual and linguistic work, allowing education to focus more on inspiration and creativity. It is, of course, possible that ChatGPT will move in the direction of alienation. If its linguistic and intellectual abilities come to far exceed those of humans, might it begin to impose disciplinary measures on humans or manipulate them for its own purposes? Will it rob humans of their freedom through technology?
On the other hand, AI has the ability to analyze and even design human emotions. I have studied video games closely, and there is a specific term in that field: emotional engineering. The reason a game gives us a good time is that it engineers human emotions. For my part, I continue to look forward to a collaboration between ChatGPT and humans that will unleash the potential of both.

X.J.: Y. gave us an interesting perspective. I once asked ChatGPT, “Can you think like a human?” It replied that it could not have the autonomous consciousness and subjective experience of being human, nor could it possess human creative and evaluative skills. It also could not make moral judgments. It said that while it can excel at certain things, it cannot fully replace human thinking and can only assist people with certain tasks.
As AI continues to evolve, will it produce some kind of wisdom? According to certain theories, wisdom is acquired through experience, and this experience is not limited to human reason; it also includes human emotion and human will. According to the philosopher Michael Polanyi (1891-1976), there is a kind of tacit knowledge that is inexpressible and can only be acquired through practical experience, as when we learn to drive or to swim. AI seems to lie beyond the reach of such experiences; it may not even be able to acquire wisdom in the way humans do.
Z.X.: If we start from this premise, then the questions we ask will also be based on a false assumption. AI does not need to attain any “ambiguous wisdom” in the way humans do. Ambiguous wisdom is the result of the limitations of human senses and intellectual abilities, stemming from the fact that we must resort to what we call metaphysics or reasoning in order to comprehend anything. However, if, with the help of other disciplines, AI manages to achieve direct, unmediated observation, analysis, and understanding solely through data collection, then it will also be able to discard this “ambiguous wisdom.” I therefore believe that AI can attain wisdom without doing so in the same way as humans.
Moreover, if we train AI properly, it might reach the point of understanding metaphors. It might not understand exactly what you’re talking about, but it can mimic your language. If we teach it in advance what a specific metaphor means, then it will be able to use it naturally in a conversation. If we think about it carefully, in a way the metaphors that have been used throughout the history of ideas and literature are nothing more than imitations of those who first used them. From this perspective, we should not deny that AI can become as intelligent as humans.
This leads us to Descartes’ thought experiment, in which he proposed the concept of “evil demons”: if there are evil demons that control all your senses and manipulate anything you perceive from the external world, then is there something within your mind about which you could have certainty? By the same logic, if AI can be a powerful imitator, even if it has no feelings or intelligence and cannot think like a human, it can still imitate a human with those abilities. From this perspective, who are we to say that AI is less intelligent than humans? Is there any reason to dispute the intelligence of AI beyond the fact that it does not possess emotions, does not have ambiguous wisdom, and is not human?
Secondly, regarding social functions and political structures, the extent to which AI can replace humans does not depend on the upper limits of the algorithm but on the lower limits of human capabilities. A large percentage of manual jobs consist of repetitive and very simple movements. These are the kinds of jobs that many young people find after graduation, when they first enter the labor market. For example, teaching assistants, lawyers’ assistants, copywriters’ assistants. They must go through a long period of training before they can rise up the hierarchical scale of their job, but now their jobs can easily be replaced by algorithms. This raises the question: if AI can replace many basic jobs, will this result in massive social injustice? Is there a possibility that technological progress and structural social injustices could lead to intergenerational injustice in the future?
X.J.: I have a different opinion on this topic. In a previous meeting, an AI expert told us that in their research they discovered that AI excels at handling sophisticated knowledge, but has trouble acquiring the basic knowledge and skills that children possess. The reason is that children operate less with reason and more with intuition and perception, something AI lacks. H. just touched on our prejudices regarding human beings: we treat humans as creatures of logic, yet they also possess intuition and illumination, and these are fundamental prerequisites for creativity. In this sense, what exactly is a human being? Because of technological progress, we tend to reduce humans to their rationality, but this is a very shallow understanding of human nature. Hume famously said that reason is merely the slave of the passions, intending thereby to overturn the myths bequeathed to us by the modern Enlightenment. Here we come to the limitations of OpenAI’s AI, including its absence of emotional baggage. There are also moral dimensions to the issue. If ChatGPT had its own will and its own wealth of emotions, then AI would indeed become a new kind of human, and it would be difficult for us natural humans to control it.

J.W.: I would like to comment on what H. said, that ChatGPT can function as an assistant, that it will not necessarily eliminate a large number of jobs, and that it can provide significant help to many workers. I agree, but I have two objections. First, H. may believe that the few geniuses who manage to surpass themselves constitute the hope of all humanity, but for most of us the bar is set much lower. Thus we see, on the one hand, disappointment and pessimism about human beings and, on the other, hope invested in some new kind of intelligence.
A few minutes ago, H. said that illumination and ambiguity are based on human limitations. But the reason human beings can have the feelings we are proud of is precisely that we have material bodies. Feelings are the responses of the material body as it encounters the external world. If we did not have bodies, we could not respond, no change would touch us, and our feelings simply would not exist.
Is having feelings ultimately a good thing or not? From a specific, negative perspective, there is no reason to feel proud about certain human characteristics. However, the reason why human civilization and society are worth preserving is that our best qualities are based precisely on these limitations. The upper limit of humanity is based on the lower limit of humanity.
If people did not have bodies and if there were no gap whatsoever between human experience and language, then we would have no incentive to create thousands of different languages. The greatest civilizations are based precisely on these limitations. As Borges once wrote in his story “The Immortal”: the ultimate limitation of human beings is the fact that they die, that they have bodies that decline rapidly, but precisely this is also the cause of their grandeur. It is because of the decline of their finite bodies that human beings possess a sense of self, a sense of self that is the source of every joy but also every pain. To cope with these great emotional ups and downs, people have created their own civilization.
This culture contains a fatal flaw: it divides people. ChatGPT, like earlier mechanical inventions, reminds us that human societies have a natural tendency to grant elites more time for their “higher” tasks while leaving the tedious, repetitive jobs to the “underlings.” In the eighteenth century, this tendency crystallized into the theory of the division of labor: for a production system to be efficient, some people must undertake simple, necessary tasks, while others take on the work of management, research, and coordination. The emergence of ChatGPT does not undermine this order of things in any way. On the contrary, it will reinforce the tendencies of the division of labor. ChatGPT reproduces a deeply mistaken way of thinking: some people work on complex things and others on simple ones.
This way of thinking is mistaken for two reasons. First, even the most complex creative work often depends on intuition, and this intuition in turn requires repetitive work to develop. For example, if you have never studied Spanish and constantly rely on translation software to render Spanish into English or Chinese, you will not be able to understand Spanish poetry. You might grasp the general meaning of a poem, but not the poem itself. Similarly, the kind of division of labor promoted by AI would eliminate all human creativity.
Secondly, there is nothing defensible about this way of thinking from a moral point of view. In a future where AI will participate in the division of labor, we will have split into two classes: slaves and masters. But since the masters will not be engaged in simple, repetitive tasks, they will not be able to create anything new and will merely repeat the same routines. Behind the inspiration for every new “creation,” even for an image or a film produced by AI, lies the continuously renewed material, sensory experience and the experience of social life, and not the repetitive application of some popular routine.
In other words, the process of social differentiation has negative impacts for all classes: those who think they are in the upper class see their creativity disappear; those in the middle or lower class do not have the opportunity to develop their intelligence, since they are condemned to repetitive work. From an ethical point of view, therefore, an extreme division of labor will create huge social divisions, with tragic consequences for all classes. The emergence of ChatGPT will reinforce these tragic trends; however, it will also serve as a warning to remind us of the consequences of separating humans and machines into classes.
To take the simplest example, if we talk daily with ChatGPT, we will forget the upper limit of natural language. Gradually, people will lose their ability to express themselves, to understand themselves, and ultimately to interact with others in a healthy way. Human friendship, love, and all positive relationships will disintegrate, and human societies will truly wither away. This is the most serious crisis one can imagine.

Z.X.: I have some disagreements with J.W.’s analysis of social structure and history. Throughout history, stratification and differentiation have always accompanied human progress. We should not deny the rationality of stratification, but we should be able to distinguish who deserves to benefit from this structure and who does not. For example, Watt invented the steam engine, Newton discovered the laws of physics, and Yuan Longping developed hybrid rice. All of them deserve to be rewarded. However, in the process of social progress, it is unreasonable for some people in the upper classes to rely on violence to get what they want, using their political power to exploit the many and making it impossible for them to realize their potential.
In any case, even setting technology aside, there is already great injustice in the structure of human societies. There is no reason to treat technology as the sole culprit. On the contrary, we must confront these injustices directly: people are their cause. Instead of raging against machines, as the Luddites did during the industrial revolution, we must understand that social injustices are produced by unjust social structures. If we understand this, we will see that throughout human history technological forces have often been our allies, not our enemies, in the effort to overturn unjust social structures. Social structures are already unjust: there are far too many menial jobs, and this is what makes people more and more like machines. Only by overturning these injustices will everyone be able to become what they were meant to become.
J.Y.: Throughout history, technology has been a double-edged sword, and this applies to ChatGPT as well. J.W. believes it poses a threat to the genuine kindness and beauty of human nature; H. sees more room for skillful handling. In this emerging phase of AI, we must try to steer it in a better direction instead of indulging in verbal attacks against it. My position has always been that scientists create technology for the good of humanity. Perhaps at this stage, academics in the theoretical disciplines can join forces with those in the natural sciences to prevent ChatGPT from heading in a malicious direction.
I also don’t believe that only a few people will see the benefits of ChatGPT. On the contrary, when ChatGPT takes over repetitive tasks, everyone will be able to develop their potential. However, this presupposes that society will provide opportunities for everyone. Technological development must be closely intertwined with social development. Technology cannot solve all problems, and humanity must act in a complementary way to technological evolution, so that societies become more equal.
After the emergence of AI, societies might indeed become more equal, provided that AI does not make distinctions the way humans do – based on race, gender, geography, or even wealth. AI does not think in this way. From this perspective, machines might be in a better position to create an egalitarian platform where every person can realize their individual potential.
However, I would like to say something about the “glass tank phenomenon,” a term coined by a foreign academic critical of the Metaverse. He claims that the masses are inside a glass tank, looking out from within and thinking they are free to swim wherever they want, while in reality a few elites outside the tank make the laws that control those inside. Many believe that AI will create the greatest inequalities in human history, inequalities based on intelligence: if someone is smarter than you, if they can think better than you, they will be able to crush you for fun. Nevertheless, I believe that ChatGPT can free people from the burden of heavy tasks, and that if everyone is given opportunities, everyone can create at a high level. It is the institutions and repetitive tasks of modern society that leave no room for everyone to realize their potential.
Finally, I would like to talk about a vision of mine: that the great cultural achievements of human history will remain as they are, but that the next creative wave will be the result of joint work between human and machine, with the machine providing inspiration and motivation to the human.
X.J.: Y.’s vision reminds me of the romantic fantasy of Greg Brockman, one of the co-founders of OpenAI. He had once said that AI and intelligent robots would replace all jobs and that the cost of labor would tend toward zero in the future. In the future, all humans would only need a basic income that would allow them to be free, something that could perhaps overturn the status quo of modern capitalist inequalities.
I’m afraid, however, that this vision is yet another utopia, at least for now. Behind the companies developing AI are giant internet corporations. These internet giants were there at the birth of the internet, believing then that an era of anarchy was dawning, in the true sense of the word, where everyone would have equal rights of expression. Having now passed through various phases, the internet has made societies more unequal. Resources, wealth, technology and talent are increasingly concentrated in a few developed countries and companies. In the real world, every technological progress is driven by capital, in a game where the winner takes all. All resources, all talent, all technologies are concentrated in the hands of a few oligarchs.
So, do we await a utopia of freedom and equality in the future? Or will societies be fragmented even further, producing a new hierarchical order? Regarding these questions, we can reflect on some things, while for others we must wait and see how they develop. I believe in the theory of Zhang Taiyan (1869–1936), as he formulated it in his work “Separating the Universal and the Particular in Evolution,” written toward the end of the Qing dynasty. He believed that both good and evil progress, with evil always one step ahead of good. Every step forward in technology will inevitably be accompanied by a step backward somewhere else. Extreme optimism should be avoided, but extreme pessimism is equally unhelpful. People always develop within history, despite its contradictions—and this is the eternal human condition.

And now the revelations. The names of the experts are as follows: X.J.: Xu Jilin, Z.X.: Zhang Xiaoyu, J.W.: Jin Wen, J.Y.: Jiang Yuhui. As is evident from the names, they are Chinese academics. Moreover, they reside and work in China. This particular discussion, therefore, took place within China. Since (unfortunately) our proficiency in the Chinese language is quite limited, the translation we presented above was done from a secondary source. We found an English translation of the discussion on the Reading the China Dream website, which follows political and social developments in China and systematically translates various texts from Chinese to English.
The reason we chose this particular text was not because it presents some shocking arguments in one direction or the other. However, we consider it important to become familiar with (or at least begin to acquaint ourselves with) some basic ideological directions within China. This is knowledge that we will probably need in the future. Texts that have a primary relationship with China (meaning they do not originate from (mis)interpretations of various Western well-intentioned sources) are particularly significant from this perspective.
Why did we avoid making the revelations at the beginning and instead leave them for the end? If someone did not know where the text comes from, they could easily assume its source was some Western university. The participants even demonstrate a familiarity with Western thought that it is doubtful many Western academics still possess; after all, why would anyone nowadays bother with Descartes, Hume, and M. Polanyi when they feel they must rack their brains to decide the gender of angels? Yet this is precisely what makes the text’s origin suspicious: it draws on classical Western thought and snubs the postmodern currents; that is, exactly the opposite of what Westerners do nowadays.
In any case, as far as its content is concerned, even a cursory reading makes clear that the participants do not follow any “party” line, quite the opposite of what many might expect. The disagreements are intense, radical, and perhaps even fruitful to some degree. The speakers do not deliver parallel monologues; each picks up where the previous one left off. Within a brief discussion unfolds the entire spectrum of arguments that circulate in Western societies as well, from theories about “what is man” to hopes for a society freed from the toil of labor, and concerns about the division of labor and its potential worsening under AI technologies. Even the idea of self-realization through AI regularly resurfaces. This idea sounds very neo-liberal, which would mean that the process of individuation has advanced significantly even in China; given our limited knowledge, however, we cannot rule out that it is instead due to Marxist influence (in the sense that artificial intelligence is perceived as the means toward communism).

This discussion, then, has nothing to envy in its Western counterparts. In fact, at some points it appears considerably more grounded. Indicative, for example, is the central significance that these Chinese academics (we repeat: academics) attach to the role of the body and its involvement in social relationships, with all the emotions and imperfections this entails. No “intelligence” is feasible without these “weights” of humanity. For many Western “analysts,” by contrast, the body (in its broad sense) appears as a “burden” on the path toward artificial intelligence. Has the time perhaps come for Westerners to start listening to what is being said in the East? Since they themselves seem unable to manage their (political and intellectual) legacy, perhaps others in the East might do it better.
translation, commentary
Separatrix
