Microsoft President Brad Smith has joined the chorus of tech industry leaders urging governments to regulate artificial intelligence (AI), calling for more rapid action. During a panel discussion in Washington, D.C., he stressed the importance of governing AI appropriately, addressing current and emerging concerns, and uniting the public and private sectors to serve society at large.
Generative AI, the technology capable of producing text, images, and other media in response to user prompts, has been drawing increased regulatory scrutiny. High-profile examples include Google’s Bard, OpenAI’s ChatGPT, and the image-generation platform Midjourney. Since the public launch of ChatGPT, prominent figures such as Warren Buffett, Elon Musk, and OpenAI CEO Sam Altman have expressed concerns about the technology’s potential dangers.
Widespread apprehension has also arisen over the possibility of AI replacing human jobs, underscored by the ongoing WGA writers’ strike and by video game companies weighing AI technology as a potential alternative to human labor.
Smith suggests developers should be required to obtain a license before deploying advanced AI projects and that “high-risk” AI should function only within licensed AI data centers. He also encourages companies to take responsibility for managing the disruptive technology, stating that the onus is not solely on governments to address the societal impacts of AI.
According to Smith, this means notifying governments when tests commence, continuing to monitor AI deployments, and reporting any unforeseen issues. Despite these concerns, Microsoft has invested heavily in AI, committing more than $13 billion to ChatGPT developer OpenAI and integrating the popular chatbot into its Bing search engine.
Smith asserts Microsoft’s commitment to the responsible development and deployment of AI, stressing that the guardrails the technology requires should not be the exclusive responsibility of tech companies. In March, Microsoft released Security Copilot, a specialized AI tool that helps IT and cybersecurity professionals identify cyber threats by analyzing large volumes of security data.
These remarks echo sentiments expressed by OpenAI CEO Sam Altman during a recent hearing before the U.S. Senate Committee on the Judiciary, where he proposed creating a federal agency responsible for regulating AI development and establishing safety standards.
As AI continues to advance, striking a balance between harnessing its capabilities and mitigating its risks becomes increasingly important. Greater cooperation between governments and the tech industry will be crucial for navigating the ethical and societal challenges the technology poses.
Source: Decrypt