Legal expert convinced:
Companies benefit from rules for AI
With the AI Act, the EU has presented the world's first comprehensive set of rules for dealing with AI. While parts of the business community fear losing competitiveness to other countries, tech law expert Lutz Riede believes that companies will benefit from the European AI rules in the long term.
"There is at least a broad global consensus that artificial intelligence should be regulated - the how is a highly controversial question," says Riede. There is agreement above all on the issue of transparency: it must be clear to users when they are dealing with artificial intelligence. "Transparency is a clear trend in this area, and the AI Regulation shares this with regulatory approaches in the USA and other jurisdictions."
The lawyer put into perspective the criticism from parts of industry that over-regulation is looming and that the new EU rules will make it considerably more difficult to offer or use AI applications: "The AI Regulation does not regulate a great deal."
"Promoting innovation in the long term"
The AI Act ("Artificial Intelligence Act") follows a risk-based approach, i.e. the more dangerous the area of application, the stricter the rules. The strictest requirements only apply to so-called high-risk systems, which include AI systems for assessing creditworthiness, for example. Less risky AI applications, such as chatbots, must primarily comply with transparency rules. The vast majority of AI systems are likely to fall into the lowest risk category and are not subject to any new rules, according to an impact assessment by the EU Commission. This category includes video games and spam filters, for example.
"This limited scope of application is in itself something that promotes innovation," says Riede, who works at the commercial law firm Freshfields. The expert sees it as positive that the AI Act now has a uniform and directly applicable regulation within the EU. This is particularly valuable for internationally active companies, as it avoids a patchwork of different regulations and creates a level playing field, which is "conducive to innovation in the long term".
Tough struggle for compromises
"Overall, however, there are still many points of criticism, much of it justified," says Riede. For example, there are open interfaces with other laws, such as the GDPR, which are left to companies to apply and could therefore lead to legal uncertainty. There had been lengthy negotiations in the run-up, and the final legal text of several hundred pages was the result of compromises and tough wrangling.
Consumers will be able to lodge complaints with the relevant authorities, "which will be under public pressure to regulate efficiently," says Riede. Consumer protection law and claims brought by consumer protection organizations will also play a major role in connection with AI, the expert believes. "I assume that the VKI and the Chamber of Labor will take a very close look at what is written in the terms of use of AI providers."
Liability issues are excluded from the AI Act; a separate directive is currently under discussion that is intended to make it easier for users of AI systems to assert claims by easing the burden of proof and establishing presumption rules.
Mandatory for companies from February 2025
The EU regulation on artificial intelligence came into force on August 1, 2024, and the first rules will become mandatory for companies from February 2025. General-purpose AI systems (such as ChatGPT) are subject to separate rules with additional documentation and verification requirements, which will apply from August 2025. The majority of the new provisions will apply from August 2026. In the event of violations, companies face fines of up to 35 million euros or seven percent of their global annual turnover.
The ball is in the courts' court
Some courts are already dealing with legal issues relating to artificial intelligence. However, most of the proceedings are currently pending in the USA or the UK; in the EU, courts have so far dealt with such questions only in isolated cases, according to Riede. One hotly debated area is the so-called scraping of data for training purposes. This concerns the question of whether AI systems may learn from copyrighted content without the consent of the rights holders. What the solutions will look like will, he says, also be "decided by the courts".