Dialogue with Turing Award Winner Joseph Sifakis: The greatest threat from AI is turning humans into "slaves of slaves"
Source: Tencent Technology
Since the beginning of 2023, ChatGPT has plunged the world into an AI frenzy. With the debut of GPT-4, its emerging capabilities have left many feeling that within just a few years, AI will become all but omnipotent.
But where is the ceiling of AI built on the Transformer-based large language model paradigm? Can it really replace us entirely? These questions have drawn many answers. Some believe large language models usher in a new era and come close to an artificial intelligence that could do all human work; others dismiss them as mere stochastic parrots, incapable of understanding the world at all. So far, neither view rests on a sufficient explanation or a well-formed theory.
To give people a fuller view of this question, Joseph Sifakis, a foreign member of the Chinese Academy of Sciences, wrote "Understanding and Changing the World." Sifakis won the Turing Award more than a decade before Hinton and his colleagues. In the book, he lays out with great clarity, from the perspective of cognitive principles, his decades of thinking on what artificial intelligence can and cannot do, and on the potential paths and risks on the road to AGI.
| Focus
01 The current AI is far from AGI
**Tencent Technology: What does the emergence of ChatGPT mean for artificial intelligence? Is it a new paradigm, or more of a specific application of an existing paradigm?**
Joseph Sifakis: I think the emergence of ChatGPT and other language models is an important step in the development of artificial intelligence. **In fact, we have undergone a paradigm shift: virtually any natural-language query can now be answered, often with an answer very relevant to the question. Large language models solve a long-standing problem in natural language processing.** This is an area where researchers had been unsuccessful for decades. The field was traditionally dominated by the symbolic school of thought, which separated the syntax and semantics of language and tried to build rule-based systems.
Now, large language models take a different approach: they consider the meaning of a word to be defined by all the contexts in which it is used. They use machine learning to compute probability distributions over words, and this distribution is used to predict the most likely next word in a sentence. It is a very simple but effective method. It's a bit naive, but it turns out to work remarkably well for summarizing text. Of course, the nature of the solution also determines its limitations. Language models are great at summarizing a text, or even writing poetry. If you ask for a summary of 20th-century Chinese history, they can do a really good job. But if you ask very precise questions, or pose some very simple logic problems, they can go wrong. We can understand why: with this type of model there is no mechanism to check the coherence of the text and of the answers it provides.
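As a rough illustration of what "computing a probability distribution to predict the next word" means mechanically, here is a minimal sketch in Python with NumPy. The vocabulary and logit values are invented for illustration; a real model would produce the logits from the context using billions of learned parameters.

```python
import numpy as np

# Toy vocabulary; a real model has tens of thousands of tokens.
vocab = ["the", "cat", "sat", "on", "mat", "."]

def softmax(logits):
    # Subtract the max before exponentiating for numerical stability.
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

# Hypothetical scores a trained network might assign to each token
# given the context "the cat sat on the ..." (numbers made up).
logits = np.array([0.1, 0.3, 0.2, 0.1, 4.0, 1.5])
probs = softmax(logits)

# The model's "answer" is simply the most probable next token.
print(dict(zip(vocab, probs.round(3))))
print("predicted next token:", vocab[int(np.argmax(probs))])  # -> "mat"
```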
**Tencent Technology: There are now many new techniques, such as logic trees (LOT), that can help machines guide themselves through logical processes, and large language models are training themselves to develop more specific or complex reasoning. There are many layers in a neural network, and the higher the layer, the more abstract the understanding. Could there be something like a model, a structural understanding of the world, in these higher-level neurons?**
Joseph Sifakis: In my book, I explain that humans and machines develop and apply different types of knowledge. This knowledge enables humans and machines to solve different types of problems, depending on how valid and general it is. **An important distinction is between scientific and technical knowledge on one hand, and tacit experiential knowledge acquired through learning on the other. For example, when I talk or when I walk, my brain is actually solving very difficult problems, but I don't understand how it does so. Neural networks generate the same kind of implicit empirical knowledge: they let us solve problems without understanding how the solutions work.**
This is what we call data-based or data-driven knowledge. By contrast, and this is very important, the best scientific and technical knowledge is based on mathematical models that provide a deep understanding of the physical phenomena at work in objects and components. For example, when you build a bridge, you can be sure, from first principles, that the bridge won't collapse for centuries to come. With neural networks, we can make certain predictions, but we don't understand how they work, and it is impossible to construct a theory that explains their behavior. **This property makes large language models severely limited in critical applications without human involvement.**
The question is whether these GPT-style language models can achieve human-level intelligence. I think there is a lot of confusion about what intelligence is and how to achieve it, because if we don't have a clear concept of intelligence, we cannot develop theories about how it works, and we cannot clearly define it.
And today there is a lot of confusion. I recently wrote a paper discussing this issue. In fact, if you open a dictionary, such as the Oxford Dictionary, you will see that **intelligence is defined as the ability to learn, understand, and think about the world, and to achieve goals and act with purpose.**
**Machines can do impressive things. They can surpass humans in games. They can perform all sorts of tasks, and great achievements have been made recently, including tasks tied to sensory abilities such as visual recognition. However, machines cannot surpass humans in situational awareness, adaptation to environmental change, and creative thinking.** Quite simply, GPT is very good at translating natural language, but it cannot drive a car; you cannot use GPT to drive a car. There is still a big gap between the two. I think we still have a long way to go. **Today we have only weak artificial intelligence; we have only some components of general intelligence. We need something more.**
I think a major step toward general intelligence will be autonomous systems. The concept is now clear: autonomous systems arise from the need to further automate existing organizations by replacing humans with autonomous agents, which is also what the Internet of Things envisages. In fact, we're talking about self-driving cars, smart grids, smart factories, smart farms, smarter telecommunications networks. **These systems are very different from narrow AI: they are composed of agents that operate under real-time constraints and must handle many different goals, involving actions and activities across many different domains. GPT is not good at this; it is good at handling natural language and document transformation.** Additionally, we need systems that can work harmoniously with human agents. None of this is possible with today's language models. So we are still quite far from artificial general intelligence. Of course, it all comes down to what exactly we consider intelligence to be: if intelligence is defined as just conversation and games, then we have reached artificial general intelligence, but I disagree with that definition.
**Tencent Technology: The standard test of intelligence used to be the Turing test. GPT has clearly passed the Turing test in terms of dialogue, yet it is not autonomous intelligence. In that case, how can we judge the intelligence of an AI?**
Joseph Sifakis: I recently wrote a paper arguing that the Turing test is not enough. **I propose another test, which I call the substitution test. The idea is that if I can substitute a machine for an agent performing a task, then I would say the machine is as intelligent as that agent.** If I could replace a human with a machine to drive a car, to teach, or to be a good surgeon, then I would say the machine is as smart as the human.
So if you adopt this substitution test in place of the Turing test, you see that human intelligence is actually a combination of skills. Do you see, then, how far we are from general intelligence? Under this test, some tasks would have to be carried out by a physical machine, such as a robot: when you want gardening done, you need a robot to do it. GPT is just a language model; it has no such robotic component.
**Tencent Technology: By your definition, the gap between artificial and human intelligence will only disappear when systems can autonomously carry out a wide range of tasks and adapt to changing environments. Applications such as AutoGPT or BabyAGI can now break a task into steps and try to reach the task's goal through different procedures, which is fairly autonomous in a way. Do you think this is getting closer to AGI?**
Joseph Sifakis: There are many issues here, including systems-engineering issues. **It is not enough to have a superintelligent agent; you must also guarantee that its behavior can be explained.** This is the problem I discuss extensively in my book: the problem of explainable, or safe, artificial intelligence that everyone is talking about.
What people don't understand is that **with neural networks, we cannot understand their behavior. You cannot explain why a network produces a given output, because there is no mathematical model that describes its behavior as a whole. Of course, we fully understand what each node of the network computes: a linear combination of its inputs, followed by a nonlinear function. We understand the behavior of every node. But when we try to understand the emergent properties of the whole network, we despair.** This is not a problem specific to AI; it is a general problem in science.
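To make the per-node simplicity concrete, here is a minimal sketch, with arbitrary example weights, of the computation each node performs: a linear combination of the inputs passed through a nonlinearity (ReLU is used here as one common choice). The despair comes not from any single node but from composing millions of them.

```python
import numpy as np

def neuron(inputs, weights, bias):
    # Linear combination of the inputs...
    pre_activation = float(np.dot(weights, inputs) + bias)
    # ...followed by a simple nonlinear function (ReLU).
    return max(0.0, pre_activation)

x = np.array([0.5, -1.2, 3.0])   # example inputs (arbitrary values)
w = np.array([0.8, 0.1, -0.4])   # example learned weights (arbitrary values)
print(neuron(x, w, bias=0.2))    # a single node is fully transparent
```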
You can't infer the properties of water just from the properties of oxygen and hydrogen atoms. Even if you understand the atoms completely, there is a problem of scale and complexity. This is the point of despair. **We cannot use compositional reasoning or reductionism to understand the overall behavior of a neural network from the behavior of its elements. So the only thing we can apply to a neural network is testing, because we can neither verify its behavior nor reason about it.** But if you can only test, you are taking a purely experimental approach, not achieving theoretical understanding. And what you can actually test varies widely: you cannot test holistic safety properties, for example, because you cannot analyze the overall behavior, though you can still test defensively against specific failures.
We have always applied testing to hardware and software. But to test, you need criteria for how long the testing should last. For hardware and software, we have models and coverage standards. For neural networks, we have no such standard. I am not saying this problem cannot be addressed: **for neural networks we have some alternatives, such as adversarial examples, but these manipulations expose a lack of robustness in their behavior.** So you see, if I ask you a question and then modify it slightly, you, being human, will give a similar answer. But we know that when we slightly change the input to a neural network, the response can be very different. This also has to be considered.
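The kind of input manipulation he refers to can be sketched with the classic fast gradient sign method (FGSM). The snippet below, in PyTorch, uses a small randomly initialized network purely to show the mechanism; the layer sizes and epsilon are arbitrary, and on any given run the perturbation may or may not flip the prediction, though on trained image classifiers such perturbations famously do.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
# A tiny stand-in classifier; untrained, for demonstration only.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

x = torch.randn(1, 10, requires_grad=True)
label = torch.tensor([0])

# Gradient of the loss with respect to the *input*, not the weights.
loss = F.cross_entropy(model(x), label)
loss.backward()

# Nudge the input slightly in the direction that increases the loss.
epsilon = 0.25
x_adv = x.detach() + epsilon * x.grad.sign()

print("original  prediction:", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(x_adv).argmax(dim=1).item())
```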
02 Emergence can never be understood
**Tencent Technology: Do you think this phenomenon of emergence, that is, the transformation of basic capabilities into more advanced ones, is inherently unexplainable?**
Joseph Sifakis: Yes. Take a subject like physics, a very mature discipline. Physicists try to make a logical connection between particle theory, quantum theory, and general relativity, and I don't think they will ever fully succeed, because there is a problem of scale. I think similar problems exist in any kind of system.
**Tencent Technology: So in your view, because of this inexplicability, we can't actually predict what a large language model can do?**
Joseph Sifakis: Obviously, we can't build a model to predict what it can do. We cannot build models, and I mean mathematical models. Here the AI community uses the word "model" to mean the neural network itself, which is a source of confusion.
I think we should take a different, holistic approach. Since we cannot form a mathematical model, **perhaps we can build a theory based on tests and empirical observation. It would have to be a testing theory about statistical properties.** But as I understand it, some of what this requires is technically difficult to achieve with today's neural networks.
**Tencent Technology: Yes. So in order to understand these emergent abilities, we would need to establish a discipline, something like psychology, to study them?**
Joseph Sifakis: Exactly. That's a good question. But it would be problematic to use GPT itself to build such an understanding. Some people now say that a GPT has passed the exams to become a lawyer or a doctor, so why can't it be a lawyer or a doctor?
I think this is a very interesting argument, but it runs into the robustness issue I mentioned earlier. Even when both pass the same exam, the abilities of humans and of neural networks are very different.
The robustness issue is this: if you ask a reasonable person a question, and then change the question slightly, the answers will be similar. GPT does not guarantee that kind of consistency. The other problem is that humans can rely on logic to control what they do and what they say. A neural network like ChatGPT has no semantic control over what it produces, so it can do things that are obviously wrong, mistakes no reasonable person would make. So the conclusion of the whole argument is that if GPT could logically control the consistency of what it says, and were correspondingly robust, then letting GPT be a lawyer would be fine. But we are actually far from that level of artificial intelligence.
**Tencent Technology: Why is ChatGPT so difficult to control? Is it because of the distributed nature of its computation?**
Joseph Sifakis: GPT is a different kind of computer, a natural computer. It is not a computer that executes programs you have written, where you have absolute control over what the system can and cannot do. When you train a neural network, you lose that control. These systems can be creative in a sense, because they have degrees of freedom.
Now, if we could control these degrees of freedom and understand how they behave, we would be fine. **The problem is that we cannot control the enormous degrees of freedom of a neural network, and it is almost impossible to control them theoretically.** You can make a rough approximation of how they behave, but you won't get exact results. With a traditional computer program, even a long one, you can still extract a semantic model and understand what is going on inside. This is a very important distinction.
**Tencent Technology: Can you talk about the concept of natural machines in detail?**
Joseph Sifakis: **Natural machines are machines that exploit natural phenomena. A neural network, for example, is a natural machine, as is a quantum computer. In the past, when I was a student, we also had analog computers. To build a natural machine, we exploit principles found in physical phenomena, because any physical phenomenon carries some information content. When I throw a stone, the stone is like a computer: it computes a parabola, which amounts to an algorithm. You can observe any phenomenon this way, and you can use natural phenomena to build computers. But these computers are not pre-programmed; they exploit laws of physics or mathematics. This is the case with neural networks.**
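His stone example can be made concrete. The sketch below simulates the parabola that the thrown stone "computes" for free, using an assumed launch speed and angle; the contrast is that the digital computer has to be told the law of motion explicitly, while the stone simply obeys it.

```python
import math

g = 9.81                  # gravitational acceleration, m/s^2
v0, angle = 12.0, 45.0    # assumed launch speed (m/s) and angle (degrees)

vx = v0 * math.cos(math.radians(angle))
vy = v0 * math.sin(math.radians(angle))

# Sample the trajectory y(t) = vy*t - g*t^2/2 until the stone lands.
t, dt = 0.0, 0.1
while vy * t - 0.5 * g * t * t >= 0.0:
    y = vy * t - 0.5 * g * t * t
    print(f"t={t:.1f}s  x={vx * t:5.2f} m  y={y:5.2f} m")
    t += dt
```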
**Tencent Technology: Let's turn to some other content in your book, where you discuss research and innovation. We all know that although many of the ideas behind neural networks came from Europe or Japan, the companies exploiting them in products, such as OpenAI and DeepMind, are in the United States. What do you think is the reason?**
Joseph Sifakis: There is a difference between research and innovation. **Innovation is the ability to apply research in order to develop new products or services and achieve technological breakthroughs.**
I think that is a very strong advantage of the United States; they have done a great job of innovating. It started in California, with what I call the innovation ecosystem. **An innovation ecosystem brings together very good academic institutions, large technology companies, startups, and venture capital. This proximity enables the effective and efficient translation of new results into applications. Other countries have adopted the model as well; the idea of an innovation ecosystem is now widespread, and smaller countries like Israel and Switzerland have had great success with it.** So, to sum up, to achieve innovation you should link excellent universities with strong industry. It depends not only on material resources but also on cultural factors: education and institutions should reward individual creativity and entrepreneurship.
03 The Neural Network Oracle: A New Science That Cannot Be Understood
**Tencent Technology: You just mentioned that neural networks simulate the biological brain and the physical world. How is this simulation possible when our understanding of the biological brain is still very limited? And how far are these neural networks from our biological brain?**
Joseph Sifakis: That's a good question. As I said, neural networks are a kind of natural computer, built on a different paradigm from traditional computers. Specifically, they are inspired by the workings of neurons in our brains and imitate some of the natural processes by which neurons operate. **However, neural networks imitate only the brain's basic computational principle. The brain itself is more complex: it has different structures and functions in different regions, and these functions are built on a far more intricate architecture that we are still trying to understand.** The brain also computes in a massively parallel way, and in this respect, too, artificial neural networks differ from it considerably.
It should also be understood that **if we study the brain only at the biological level, I don't think we can fully capture human intelligence.** As an example, run a piece of software on your laptop, and suppose I give you electronic instruments to study, through measurements, how the hardware behaves. Once the program is compiled, all of its knowledge is present at the hardware level in the form of electrical signals. But from analyzing those electrical signals alone it is impossible to recover the software's source code, because you have this problem of scale. **I think this is the key to understanding human intelligence: we have to study the brain, but not only the brain. The computational phenomena of the brain are a combination of electrical signals, physico-chemical phenomena, and psychological phenomena.**
**And the problem today is how to connect mental phenomena to brain computation. In my view this is a major challenge. If we do not succeed at it, I don't think we will ever understand human intelligence.**
**Tencent Technology: You mentioned that artificial intelligence is opening a new path for the development of human knowledge, breaking through the limits of the human brain in dealing with complex problems. In what respects do you think AI can completely surpass humans?**
Joseph Sifakis: Yes. In my book I explain that **machines can help us overcome some of the limitations of our thinking**, which psychologists have confirmed. Those limitations include the cognitive complexity the human mind can handle. **We humans cannot grasp relationships among more than about five independent parameters. That is why the theories we develop are very simple; we do not form theories with thousands of independent parameters.**
**So I think this is a very important direction for the future: we will have more and more "oracles" that help us predict the evolution of complex phenomena and complex systems.** For example, we will have intelligent digital-twin systems that help us make predictions without our understanding the logic behind those predictions. So **we are going to have a new kind of science. I find it exciting to be able to use such a science, but we also need to control the quality of the knowledge it produces. You should think about this, because humans will no longer hold the sole privilege of producing knowledge; now humans must compete with machines.**
So the important question for our society is whether we can cooperate with machines and remain masters of the development and evolution of machine-generated knowledge, **or whether we will end up with human-driven science and machine-driven science coexisting.** A parallel science powered by these machines would be an interesting scenario.
**Tencent Technology: You mentioned that the human mind is also a computing system, and that in its components it closely resembles an autonomous machine. So what capabilities are unique to humans, compared with strong artificial intelligence?**
Joseph Sifakis: That's a very good question. Because I have worked on autonomous systems, I have tried to design self-driving cars. A self-driving car needs functions like perception, which turns sensory information into concepts, and a reflection function that models the outside world and makes decisions. Making decisions means managing many different goals, and achieving those goals requires planning and more. There are indeed many similarities between autonomous systems and the human mind.
There are, however, some important differences between humans and autonomous systems. **One very important difference is that humans possess what I would call common-sense knowledge: the network of knowledge we develop from birth. We have a mechanism for this, though we don't know how it works; through daily experience, you enrich this network and gain the common-sense knowledge needed to understand the world.** When a human thinks, he connects sensory information with this common-sense conceptual model, and the results of the analysis feed back from the conceptual model to the sensory information. This is very different from neural networks. Let me give you an example: if I show you a stop sign partially covered in snow, you will say immediately and without a doubt that it is a stop sign.
Now, if you want to train a neural network to recognize a stop sign partially covered in snow, then, because the network cannot connect sensory information with a conceptual model, you will have to train it on every weather condition. **This is why children learn more easily than neural networks: show a child a car once, and the next time he will say it's a car.** Children form an abstract model of what a car is through observation, and they can relate sensory information to this conceptual model. **This is one of the biggest challenges facing artificial intelligence today.** It is also an important problem for self-driving cars, which should be able to collect sensory information and link it with maps and other knowledge. Making decisions based on sensory information alone can be dangerous; we have seen examples of this before.
It is not clear why humans can understand complex situations without much analysis and computation. We manage it because we can connect sensory information with conceptual, abstract information. So in situations where we could hardly go wrong, neural networks can go badly wrong. I remember a time when my Tesla braked suddenly because it took the combination of the moon and the trees for a yellow traffic light. That simply does not happen to humans, because humans put information in context to make sense of it. I understood immediately that it was the moon, because traffic lights cannot float in the sky.
So when someone says these systems can compete with humans in some respects, perhaps they can. **But human intelligence is characterized by the ability to understand the world and to ask questions with purpose. Artificial intelligence is still far from that goal.**
**Tencent Technology: You have studied autonomous driving, which already involves understanding the environment, cognition, and perception. LeCun argues that because we are visual animals, our understanding of the world is largely based on vision. If large language models can become multimodal and learn from the environment, can they understand the world by themselves?**
Joseph Sifakis: **I think that if AI cannot connect concrete knowledge with symbolic knowledge, then large language models alone will never understand the world. AI can only do this by combining concrete knowledge, that is, knowledge in databases, with symbolic knowledge. If it cannot, then human intelligence will outperform machines. I'm quite sure of that.** I know many people will disagree with me, because computational intelligence can analyze and extract patterns from data across millions of parameters, which humans cannot do well. But humans are good at dealing with abstract problems.
**Human intelligence depends on the ability to use analogies and metaphors.** Even though we don't understand how human creativity works, I can still say it is very important. **Within human creativity, a distinction should be made between discovery and invention.** Machines can discover things in larger and more complex data through data analysis. But invention is another matter: invention means creating a theory. I think we are far from understanding this part of human intelligence.
The ability to discover is certainly useful, because it can help humans guess more general patterns, patterns our own minds could not find. But I don't think machines will be able to create new scientific theories or build new machines. **They provide a synthesis of the knowledge they possess, like a distillation process: they hold a huge amount of knowledge, which they distill and present to you.** That is amazing, but it is not enough. Going further still requires human capabilities.
In a paper I wrote, I explained that there are actually different types of intelligence. Human intelligence is very particular, because its basis is the particular world we live in and strive in. **If we had been born into another world, perhaps we would have developed another intelligence. Intelligence is the ability to generate knowledge and solve problems.** Now that we see machines solving some problems we cannot, they clearly possess another kind of intelligence. That is great: we have a sort of complementarity.
04 The development of science and technology should give priority to improving human life
**Tencent Technology: We have had some philosophical discussion; now let's discuss the moral impact of AI on society. The first question: unlike the optimists who say new technologies will create enough new jobs, you argue that artificial intelligence will cause serious unemployment, and that the problem may be hard to solve without changing the socio-economic system. Can you explain why? Many people are concerned about this.**
Joseph Sifakis: The development of AI will increase productivity. There is a very simple law in economics: if productivity increases, you need fewer and fewer people to do the same work. That much is clear.
Now some people think AI will create job opportunities, especially for highly qualified people. **But if you weigh the jobs created by AI against the jobs it destroys, the net impact is bound to be negative.**
Everyone now agrees that AI will cause unemployment; this much is obvious. **But throughout human history, technology has increased productivity, and that has ultimately improved people's quality of life.** Over the centuries, people have come to work fewer hours. We should address this problem through appropriate economic and social reforms, including education reform, because people have to be educated to adapt to this new era.
**Tencent Technology: During the Industrial Revolution, people's lives were not greatly improved at first; they worked in factories, sometimes 14 hours a day. Do you think people's living conditions will be worse in the early days of this technological revolution?**
Joseph Sifakis: No, I think the Industrial Revolution generally improved the quality of human life. That is the heart of the matter. **I think the problem with society today is that it does not take this goal seriously; people treat technological progress as the priority. But the highest priority is improving human life; that should come first. At least, I am a humanist.**
**Tencent Technology: I am also a humanist, and I understand how serious this problem is. Apart from unemployment, do you think AI could have other serious consequences?**
Joseph Sifakis: It is possible. But the problem is that some people say artificial intelligence will threaten humanity, that we may even become slaves of machines. I don't like that way of putting it. As I say in my book, technology is neutral. You have atomic energy: you can use it to generate electricity, or you can use it to build bombs and kill people. That is your decision. If you really think about it, all those who say artificial intelligence is in itself a threat to humanity are being completely foolish, **because the use of technology is a human responsibility.**
**I think people say this partly because they want to diminish human responsibility for what happens.** They want people to accept AI as it is, which is very bad. People should take responsibility for the problems that may arise. I don't know how things are in China, but unfortunately in the Western world people are not very sensitive to this; they treat technology's negative effects as preordained, which is very bad. As I also say in my book, **the biggest risk is not that humans are ruled by machines, but that humans accept that machines make all the key decisions. If I had a slave who could do anything I wanted, as in the Arabian tales, I would end up being my slave's slave.** So the danger comes from people. I have seen this in French schools: if a child has access to a chatbot, he becomes unable to write or organize his thoughts, and ends up dependent on the machine. That is not a rosy scenario for humanity.
**Tencent Technology: A few days ago, many well-known figures in the AI field, including Sam Altman, signed a statement on the risk of extinction from AI. In your book, you say that the media and industry insiders exaggerate both the capabilities and the threats of AI. Is this statement one of those exaggerations? Do you think the current AI paradigm could bring about a crisis of human civilization?**
Joseph Sifakis: **The dangers posed by AI are clear, and they come mainly from misuse.** Unfortunately, we have no regulations addressing this danger today. Governments do not know how these systems are developed, and because of that lack of transparency, regulation cannot be applied. That is very bad for society. AI is very likely to be misused, which is why I also signed a petition supporting investigation of these companies.
Technology itself is very good, and I have nothing against it. It is a great thing that we have chatbots, and we should keep making progress in that direction. **Artificial intelligence, including artificial general intelligence, is a good thing; I have nothing against it. What I am against is the misuse of these technologies. Countries and international institutions should enforce regulations, even though this is difficult because large language models themselves lack interpretability. We can still demand some transparency from the companies developing them, for example about how the datasets are built and how the engines are trained.**
**Tencent Technology: Recently, the US Congress held a hearing on artificial intelligence at which industry figures, including Sam Altman and Gary Marcus, testified, and related bills are moving forward in Europe. Do you think this is a good start?**
Joseph Sifakis: But the problem is that **when people talk about safe artificial intelligence, much of the time they are not talking about the same thing.** As an engineer, safety has a very precise definition for me. Others may take safe AI to mean AI we can trust as much as a human, and the logic underlying that idea is to treat artificial intelligence as a person rather than a machine. There are also papers arguing that what matters is not what an AI does but what it intends, so you must be able to separate intent from outcome, and so on. There is a lot of discussion. **I hope all this discussion leads to some serious regulation, not just a wish list.**
**Tencent Technology: So let's talk about brighter possibilities. If artificial intelligence is not misused, in what ways could it change our lives?**
Joseph Sifakis: If we don't misuse artificial intelligence, the future is quite promising. This is a huge revolution, with enormous potential to develop knowledge for addressing some of the grand challenges facing humanity today: climate change, resource management, population issues, pandemics, and more.
I said before that there is a clear complementarity between humans and machines. **For humans, the best scenario is harmonious cooperation between machines and people, in which humans remain in command of the processes of developing and applying knowledge, ensuring that machines do not make the key decisions for us by themselves.**
The challenge ahead is to find the right balance of roles between humans and machines. I hope we can do this successfully.