A recent study by researchers at Stanford University has drawn attention to the lack of compliance with the European Union (EU) Artificial Intelligence (AI) Act among large language models (LLMs) such as OpenAI's GPT-4 and Google's Bard. The EU AI Act, a milestone in AI governance, not only regulates AI for the EU's roughly 450 million residents but also serves as a groundbreaking blueprint for future AI regulation worldwide.
According to the study, AI companies face a steep and challenging path to compliance. The researchers assessed ten major model providers, scoring each against the 12 requirements outlined in the AI Act on a 0 to 4 scale. The findings revealed a wide spread in adherence: some providers met less than 25% of the AI Act's requirements, and only one, Hugging Face/BigScience, scored above 75%. Even the high-scoring providers have significant room for improvement.
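To make the percentages concrete, the sketch below works through the rubric's arithmetic: 12 requirements scored 0 to 4 each give a 48-point maximum, so "below 25%" means fewer than 12 points and "above 75%" means more than 36. This is an illustrative example only; the per-requirement scores shown are hypothetical placeholders, not figures from the Stanford study.

```python
# Illustrative sketch of the study's 12-requirement, 0-4 rubric.
# The scores below are hypothetical placeholders, not real study data.

MAX_PER_REQUIREMENT = 4
NUM_REQUIREMENTS = 12
MAX_TOTAL = MAX_PER_REQUIREMENT * NUM_REQUIREMENTS  # 48 points

def compliance_percentage(scores):
    """Sum per-requirement scores (each 0-4) and express the total as a share of 48."""
    assert len(scores) == NUM_REQUIREMENTS
    assert all(0 <= s <= MAX_PER_REQUIREMENT for s in scores)
    return 100 * sum(scores) / MAX_TOTAL

# Hypothetical provider with mostly partial disclosures.
example_scores = [2, 1, 0, 3, 1, 0, 2, 1, 0, 1, 0, 0]
total = sum(example_scores)
print(f"Total: {total}/{MAX_TOTAL} points = {compliance_percentage(example_scores):.0f}%")
# A provider below 25% has fewer than 12 points; one above 75% has more than 36.
```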
The research pinpointed critical areas of non-compliance, such as insufficient transparency about the copyright status of training data, energy consumption, emissions, and methodologies for managing potential risks. The team also found a divide between open and closed model releases: open releases came with more thorough disclosure of resources, but are harder to monitor or control once deployed.
All providers, regardless of release strategy, can and should improve, according to Stanford's conclusions. In recent months, however, transparency has declined in major model releases. OpenAI, for example, disclosed nothing about training data or compute in its GPT-4 report, citing the competitive landscape and safety concerns.
These findings feed into the larger narrative of AI providers' complex and often tense relationships with regulators. OpenAI has notably lobbied to influence several countries' attitudes towards AI and previously threatened to leave Europe if regulations proved too strict, though it later retracted the threat.
The researchers offered recommendations for improving AI regulation, such as ensuring the AI Act holds larger foundation model providers accountable for transparency. They also emphasized the technical resources and talent needed to enforce the Act, given the intricacies of the AI ecosystem.
A crucial question, the researchers argue, is how swiftly model providers can adapt their business practices to satisfy regulatory demands. They observed that, even without strong regulatory pressure, many providers could reach high scores (in the 30s or 40s out of a possible 48 points) through meaningful yet feasible changes.
In conclusion, the researchers' work offers a valuable glimpse into the future of AI regulation, arguing that the AI Act, if enacted and enforced, could have a considerable positive influence on the ecosystem by promoting transparency and accountability. As AI continues to reshape society with its remarkable capabilities and attendant risks, it is increasingly apparent that transparency is not an optional extra but a fundamental requirement of responsible AI deployment.
Source: Decrypt