Self-commitment
AI companies commit to developing safe applications
More than a dozen leading artificial intelligence companies have committed to developing safe applications at a summit in Seoul, South Korea. "These commitments will ensure that the world's leading AI companies are transparent and accountable about their plans to develop safe AI," said British Prime Minister Rishi Sunak on Tuesday.
The meeting in Seoul, attended by ChatGPT developer OpenAI, Google DeepMind and Anthropic, builds on the first global AI safety summit held in the UK last year; the UK is co-organizing this year's event.
The companies have committed to publishing which artificial intelligence risks they consider "intolerable" and what they will do to ensure that these thresholds are not exceeded, according to the declaration published by the British science ministry.
If the risks cannot be kept below these thresholds, the manufacturers have committed to "not using or developing" the systems or models in question. The exact definition of the thresholds is to be determined at the next summit, to be held in France next year.
The technology giants Microsoft, Amazon, IBM and Facebook parent company Meta are also taking part in the meeting in Seoul, which runs until Wednesday.
A blessing and a curse
Generative artificial intelligence underpins programs such as ChatGPT, which produce text, images or videos. These programs can be fed different types of data, which they convert, process and use to generate new content.
Critics warn that AI could also be used to manipulate elections, for example through fake news or so-called deepfakes, deceptively realistic but manipulated images or videos. Activists and many governments are therefore calling for international standards for the development and use of AI.
Difficult to control
On Tuesday, the EU Council adopted an AI law that regulates the use of the technology in areas such as video surveillance, speech recognition and the analysis of financial data. Among other things, it introduces a labeling requirement: developers will have to mark text, audio and images generated with artificial intelligence so that people are not misled.
However, the law will not come into force until spring 2026, and experts believe the requirements will be difficult to monitor in any case, given the sheer volume of material.