Turing Award winner Yann LeCun: The only way to make AI platforms safe, good, and practical is open source
Source: The Paper
Reporter Zhang Jing
"There are people who want AI to be heavily regulated across the board because they are afraid of the consequences of AI. They say that if anyone can get hold of AI, if they can do anything they want with AI, then the situation could be very dangerous, so there needs to be strong regulation of AI. I totally disagree with that."
It took humans a long time to establish the ethics and governance mechanisms we have today. If artificial intelligence is allowed to develop consciousness and intelligence like humans, how can it be made to meet those ethical and governance requirements? And what mechanisms do humans have for intervening in AI?
On July 6, the 2023 World Artificial Intelligence Conference opened in Shanghai. Yann LeCun, winner of the 2018 Turing Award and chief AI scientist of Meta's Fundamental AI Research (FAIR) team, said at the opening ceremony that he completely disagrees with the view that "because anyone can use artificial intelligence to do anything, the situation may be very dangerous, so artificial intelligence needs to be strictly regulated." In the long run, he argued, the only way to make AI platforms safe, good, and useful is to open-source them.
LeCun spoke about how artificial intelligence can become both intelligent and controllable. "If you think the way to achieve human-level AI is to build a bigger autoregressive LLM (large language model) and then train it on multimodal data, then you may think these AI systems are unsafe. But in fact I don't think such a system can be very intelligent. I believe the way they become intelligent is also the way to make them controllable: the concept of objective-driven AI. Essentially, you give them objectives that must be satisfied." Some of those objectives are defined by the task, LeCun said: did the system answer the question? Did it start your car? Did it clear the table? Other objectives act as safety guardrails, such as "do not hurt anyone."
LeCun believes such AI systems will not gradually learn to deceive or dominate humans. Humans can set objectives that force AIs to be honest, to submit to humans, to be cautious about their curiosity, and not to access resources they should not have. "So I think these systems are going to be completely controllable and steerable. Systems can be designed to be safe, but it's not easy. Designing these objectives, making the system safe, is going to be a hard engineering challenge."
But LeCun said humans do not have to get the design right from the start; they can begin by building a system as smart as a mouse, with the goal of making it a good mouse. "We're going to put them in a sandbox or a simulated environment, and we want to make sure they're safe. This is an important issue, because some people want AI to be heavily regulated in general because they're afraid of the consequences of AI. They say that if anyone can get hold of AI, if they can do anything they want with AI, then the situation could be very dangerous, so there needs to be strict regulation of AI. I don't agree with that at all."
On the contrary, LeCun believes that in the long run, the only way to make AI platforms safe, good, and practical is to open-source them. In the future, everyone will interact with the digital world through an AI assistant, and all of a person's information will pass through that assistant. If the technology is controlled by a handful of companies, that is not a good thing. "Future AI systems will be a treasure trove of all human knowledge, and the way they are trained needs to be based on numerous sources. So we will see more open-source LLMs and more open-source AI systems in the future."