In a bid to bring more democratic input to AI development, OpenAI, the company behind the artificial intelligence chatbot ChatGPT, has announced it will award ten grants of $100,000 each for experiments in establishing a “proof-of-concept” democratic process for determining the rules AI systems should follow. OpenAI believes those rules should abide by the law and benefit humanity, ultimately contributing to the responsible oversight of AGI and superintelligence.
The plans unveiled by OpenAI are ambitious, and while the conclusions drawn from these experiments will not be binding, they can help raise essential questions around AI governance. This move comes at a time when governments worldwide are looking to implement regulations on general-purpose generative AI.
OpenAI’s CEO, Sam Altman, recently met with European regulators to argue for regulations that do not stifle ongoing innovation. He voiced a similar message when testifying before the United States Congress, calling for more adaptive guidelines governing AI’s conduct. OpenAI’s announcement also stressed that decisions about AI should not be dictated by a single individual, company, or country.
One key question the AI governance project will tackle is “How should disputed views be represented in AI outputs?” OpenAI warns that a superhuman form of AI could emerge within a decade, so developing it mindfully is crucial for humanity’s well-being.
These efforts are funded by OpenAI’s non-profit arm, which also stated that the project’s results will be freely accessible to the public. In a world increasingly reliant on AI, transparent decision-making processes and ethical guidelines for AI conduct are of utmost importance. As the debate on AI governance continues to evolve, the grants provided by OpenAI could offer valuable insights to help shape a more responsible future for artificial intelligence.
Source: Cointelegraph