The European Parliament has recently passed the EU AI Act, a comprehensive framework for the governance and oversight of artificial intelligence technologies within the European Union. With 499 votes in favor, 28 against, and 93 abstentions, the act aims to promote the development of human-centric and trustworthy AI while protecting health, safety, fundamental rights, and democracy from its harmful effects.
However, the act also imposes restrictions on certain types of AI services and products, prohibiting the use of biometric surveillance, social scoring systems, predictive policing, emotion recognition, and untargeted facial recognition systems. On the other hand, generative AI models such as OpenAI's ChatGPT and Google's Bard would be allowed to operate, provided their outputs are clearly labeled as AI-generated.
Advocates of AI development argue that imposing strict regulations on AI technology could stifle innovation, while proponents of the rules contend they are necessary to prevent harmful consequences. OpenAI CEO Sam Altman has been vocal in supporting government oversight of the AI industry, though he has also warned European regulators against overregulation.
The announcement of the EU AI Act comes just after the Markets in Crypto-Assets (MiCA) bill became law on May 31, with both acts garnering support from industry leaders, who champion regulation to level the playing field for companies in their respective sectors.
In conclusion, the AI Act marks a significant shift in how AI technologies will be regulated within the European Union. With its passage, the challenge now lies in striking the right balance between promoting innovation and ensuring safety and ethics in AI development. The debate over the pros and cons of these regulations will continue as individual negotiations with members of the European Parliament (MEPs) take place before the act becomes law.
Source: Cointelegraph