Threat of loss of control?
Turing Award winner Bengio warns of “rogue AIs”
It could be a few years or even decades before artificial intelligence (AI) becomes "superhuman". Computer scientist Yoshua Bengio, often referred to as a "godfather of AI", is nevertheless convinced that we need to prepare for this now. He advocates multilateral, independent AI laboratories that prepare for the possible emergence of "rogue AIs" and, if the worst comes to the worst, fight them.
Future systems will not necessarily pose an existential threat to humanity, but "given the current state of technology, a threat to democracy, national security and the loss of control to superhuman AI systems is very plausible. This is not science fiction," says Bengio, professor at the Université de Montréal (Canada) and a pioneer of deep learning, in which neural networks are trained on large amounts of data to solve complex tasks. This makes it all the more important to prepare for the potential risks.
There is currently much discussion about deepfakes - realistic-looking voices, images or videos created or edited with AI - for example in connection with election manipulation. The Canadian expert is more concerned about dialog systems used to persuade people. "There are already some studies suggesting that they perform comparably to or better than humans when it comes to persuading someone to change their mind on a particular issue," said Bengio. Such interactions could be tailored to each individual.
Fear of AI weapon systems
Security authorities around the world are in turn concerned about the easy availability of such knowledge. "The systems could be misused to design or build all kinds of weapons," says Bengio, who, together with Geoffrey Hinton and Yann LeCun, received the Turing Award, one of the most prestigious honors in computer science, in 2018. The spectrum ranges from cyberattacks to biological and chemical weapons. There is concern that AI companies do not have sufficient safeguards in place.
If it makes it easier for terrorists to do really bad things, then we need to make decisions carefully.
Computer scientist Yoshua Bengio
Of course, AI also brings immense benefits and opportunities; that is not in question. "But if it makes it easier for terrorists to do really bad things, then we need to make decisions carefully. And it shouldn't be the CEO of a company who makes these decisions," said the expert, who also gave a lecture on Tuesday evening as part of the celebrations marking the 20th anniversary of the Faculty of Computer Science and the 50th anniversary of computer science teaching at the University of Vienna.
Warning against the "goal of self-preservation"
Most frightening, however, is the possible loss of control to superhuman AI systems that could spread beyond the computers on which they run, learn to manipulate humans and ultimately perhaps even control robots or other industrial devices themselves. If we succeed in developing AI systems that are smarter than humans and pursue a "goal of self-preservation", "it will be like creating a new species - and that's not a good idea, at least until we understand the consequences better. Nobody knows how big the risk is. But that's part of the problem," explained Bengio.
How quickly the technology will develop in this direction is disputed; estimates range from three years to several decades. In any case, the current massive investments could accelerate the process. What ultimately matters, Bengio argues, is to "get a grip on the risks and bear in mind that once things have to move quickly, it could be too late to create the right legislation and international treaties".
AI labs should prepare democracies for rogue AI
Bengio sees one way to counter this threat in establishing a multilateral network of publicly funded, non-profit AI labs that prepare for the possible emergence of rogue AI systems. If the worst comes to the worst, a safe, defensive AI could then be deployed against them. A cooperative group of democracies should work together on the design, because the potential negative effects of AI would not be limited to individual countries. Appropriate defense methods, and many aspects of the research into them, would have to be kept confidential to make it harder for a "rogue AI" to circumvent the new defensive measures, as he also recommends in an article in the Journal of Democracy.
AI regulation would have the greatest impact, requiring developers to prove that their AI systems are safe and that control over them cannot be lost. "At the moment, nobody knows how to do that. But if something like this were prescribed, the companies that have the money and the talent would already be doing a lot more research into building safe systems and putting their energy into protecting the public," says the expert. After all, a company that produces new drugs must also scientifically prove that they are safe to use. In the AI industry, however, such efforts are lacking because safety is not a priority in the face of tough competition.
Ultimately, numerous challenges remain, both in the scientific understanding of AI safety and in the political responsibility of ensuring that appropriate protocols are followed when building safe AI, as soon as that becomes possible. Only then can we ensure "that no single person, no single company and no single government can abuse this kind of power", says Bengio.