The ever-growing importance of artificial intelligence (AI) in our daily lives has caught the attention of European Union (EU) officials, who are considering additional measures to make AI tools, such as OpenAI’s ChatGPT, more transparent to the public. European Commission Vice President Vera Jourova voiced concerns over companies deploying generative AI tools, such as Microsoft’s Bing Chat and Google’s Bard, and the threat these tools pose of spreading disinformation. She believes these companies should label their AI-generated content as a proactive measure to combat “fake news”.
Jourova’s call for transparency comes from an understandable place, especially as AI-generated content can be misleading and difficult to distinguish from factual, human-generated content. Labeling AI-generated content gives users the information they need to recognize its origins and judge its credibility accordingly.
On the other hand, the obligation to provide such labels raises concerns about freedom of expression and the stifling of creativity. In specific cases, labeling may indeed be necessary to address disinformation and potential malign uses of AI-generated content. However, it also runs the risk of marginalizing genuine and innovative content produced by AI, which can contribute to a richer and more diverse online experience.
It’s worth noting that major tech companies like Google, Microsoft, and Meta Platforms have already signed the EU’s “Code of Practice on Disinformation”, which serves as both an agreement and a framework for self-regulatory standards. Jourova has urged these companies to report on new AI-related safeguards this July. Her statements don’t end there, however: she also noted that just a week prior, Twitter had exited the Code, a move likely to attract greater scrutiny from regulators.
As the EU prepares to introduce its AI Act, a comprehensive set of guidelines for the public use of AI, within the next two to three years, European officials are urging generative AI developers to create a voluntary code of conduct in the interim. With the potential for both benefits and drawbacks, it remains to be seen whether labeling AI-generated content will help or hinder the development and adoption of AI technologies moving forward. The central conflict in this debate is striking a balance between ensuring transparency and preserving the freedom and creativity that AI tools can bring.
Source: Cointelegraph