Experts warn:
“Could lose control of AI systems”
Respected experts in artificial intelligence have issued a new urgent warning about the dangers of the technology. "Without sufficient caution, we could irretrievably lose control of autonomous AI systems," the researchers write, pointing to potential risks such as large-scale cyberattacks, societal manipulation, pervasive surveillance and even the "extinction of humanity".
Among the authors of the text published in the journal "Science" are scientists such as Geoffrey Hinton, Andrew Yao and Dawn Song, who are among the leading minds in AI research. They are particularly concerned about autonomous AI systems that can, for example, use computers on their own to achieve the goals set for them.
"Unforeseen side effects"
The experts argue that even programs built with good intentions can have unforeseen side effects. Because of the way AI software is trained, it adheres closely to its specifications but has no understanding of what outcome is actually intended. "As soon as autonomous AI systems pursue undesirable goals, we may no longer be able to keep them under control," the text reads.
Similar dramatic warnings have already been issued several times, including last year. This time, the publication coincides with the AI summit in Seoul. At the start of the two-day meeting on Tuesday, US companies such as Google, Meta and Microsoft, among others, pledged to use the technology responsibly.
More show than safety
The question of whether ChatGPT developer OpenAI, as a pioneer of the technology, is acting responsibly enough came into sharper focus again over the weekend. Jan Leike, the researcher responsible at OpenAI for making AI software safe for humans, criticized resistance from company leadership after announcing his resignation.
In recent years, "glitzy products" have been prioritized over safety, Leike wrote on X. He warned that "developing software that is smarter than people is an inherently dangerous undertaking". There is an urgent need to find out how to control AI systems "that are much smarter than us".
AI still too stupid?
OpenAI boss Sam Altman then gave assurances that his company is committed to doing more for the safety of AI software. Yann LeCun, head of AI research at Facebook parent company Meta, countered that such urgency would first require systems that are "smarter than a house cat".
In LeCun's view, it is as if someone in 1925 were warning that we urgently need to learn how to handle airplanes that carry hundreds of passengers across the ocean at near the speed of sound. It will take many years for AI technology to become as smart as humans, he argued, and, as with airplanes, the safety precautions will come gradually along with it.