Against human rights:
“Tech companies are not fulfilling their responsibilities!”
Artificial intelligence is more dangerous than nuclear weapons, says Elon Musk. Ethicist Peter Kirchschläger takes a similar view. Watch the video above to find out why we underestimate AI, what democracies are unprepared for in this election year, and whether AI will soon abide by the Ten Commandments.
"In fact, you could say that the risks posed by artificial intelligence are comparable to what nuclear weapons can do to humans and the planet," says Kirchschläger. What is striking, however, is that there is virtually no oversight.
Manipulation could come to a head especially before elections: Kirchschläger warns of fake content published shortly before a vote. "You can't get that out of voters' minds with all the clarifications that follow. That's the worry I have, because it catches us in democracies quite unprepared." Deepfakes, genuine distortions of the truth through disinformation, are difficult to dislodge from people's minds even when they circulate months before an election, which is why long-term countermeasures are also needed.
Business models involve human rights violations
The professor of ethics at the University of Lucerne is calling for an international authority to monitor the issue according to human rights criteria, and to impose sanctions. He regards the justifications offered in the Cambridge Analytica case as lame excuses, excuses for multinational technology companies failing to fulfil their responsibilities.
"The crazy thing here is that we're talking about business models that not only have negative side effects that involve human rights violations, but business models that involve human rights violations at their core. Today you can put an app on the market with sexualized images of children and the only thing that happens is that you make a lot of money. That can't be right. There is a clear grievance here that urgently needs to be remedied."
Kirchschläger is not suggesting that politicians have simply been asleep here. On the contrary: there have been deliberate attempts, through lobbying, to prevent them from regulating.
"Algorithms are neither fair nor neutral"
One of the main problems is that human rights are already being violated when data is collected and generated, or that data is obtained that should never have been gathered in the first place. Transparency is urgently needed here: "Algorithms are neither fair, neutral nor objective; who programs them naturally plays a role." There is also a kind of "overconfidence" with regard to AI: it is underestimated that the processes, i.e. the algorithms making the selections, also carry certain values, norms and prejudices.