Regulating AI: New Independent Agency vs. Strengthening Existing Authorities


For the second time this month, OpenAI CEO Sam Altman went to Washington to discuss artificial intelligence with U.S. policymakers. Altman appeared before the U.S. Senate Committee on the Judiciary alongside IBM Chief Privacy & Trust Officer Christina Montgomery and Gary Marcus, Professor Emeritus at New York University.

In response to Louisiana Senator John Kennedy’s question about how to regulate AI, Altman proposed creating a government office in charge of setting standards. Such an agency, he suggested, should license efforts that reach a certain scale of capability, ensure compliance with safety standards, and revoke licenses when necessary. The proposal underscores the role a dedicated government body could play in supervising the development and deployment of AI technology.

Altman further emphasized that independent audits of AI systems should be mandatory, with outside experts able to assess whether a model complies with established safety and performance thresholds. He sees this as a crucial step toward the responsible and ethical handling of artificial intelligence.

In contrast to Altman’s vision, IBM’s Montgomery noted that regulatory authorities already exist to oversee various domains. Rather than creating a new agency, she argued, resources could be allocated to these existing bodies to ensure they have the powers and infrastructure they need. This view leans toward streamlining the existing regulatory system while acknowledging that improvements are needed.

Taking the FDA as a reference, Professor Marcus suggested a safety-review process for artificial intelligence, similar to the approval process drugs undergo before entering the market. He believes a nimble, adaptable agency should review AI projects before deployment, monitor them after release, and have the authority to recall any technology deemed unsafe.

Montgomery added that transparency, explainability, clear definitions of high-risk uses, and protection of the data used to train AI are all essential considerations. Regulatory bodies, she said, should require impact assessments, maintain transparency, and hold companies accountable for their work.

Governments worldwide continue to grapple with the challenges posed by artificial intelligence. The European Union’s AI Act, which promotes regulatory sandboxes for AI testing, reflects the push for a human-centric and ethical approach to AI development. More recently, Italy banned OpenAI’s ChatGPT over privacy concerns but lifted the ban once the company implemented changes to its privacy settings.

The central question is whether a new independent agency should be created specifically to regulate AI, or whether existing authorities should be strengthened to shoulder the burden. Given the speed at which AI is evolving, effective regulation will need to strike a balance between supporting technological advancement and protecting society from potential risks.

Source: Decrypt
