Apple & Samsung’s ChatGPT Concerns: AI Benefits vs. Confidentiality Risks in Major Companies


Apple has taken a cautious approach to employee use of artificial intelligence (AI) platforms, specifically OpenAI's ChatGPT and Microsoft's GitHub Copilot, due to concerns over potential leaks of confidential company information. The technology giant appears to share these apprehensions with its competitor, Samsung, which reportedly restricted employee access to ChatGPT.

These concerns arise because employee interactions with AI platforms, including the popular ChatGPT and Microsoft's recently released GitHub Copilot, may inadvertently expose sensitive data to external companies. Samsung has gone as far as threatening termination for policy violations involving the sharing of data with external services like ChatGPT, Google's Bard, and Bing's ChatGPT implementation.

The addition of Apple and Samsung to the list of organizations wary of ChatGPT comes amidst increasing concerns over the handling of AI platforms; industry giants like JPMorgan Chase, Verizon, Northrop Grumman, and Amazon are already restricting or prohibiting ChatGPT’s use. Moreover, officials in Italy, Germany, France, and Canada have raised concerns over privacy controls in AI platforms, with Italy imposing a temporary ban on ChatGPT that was later reversed following platform updates.

Apple is no stranger to AI, having made its own mark in the field with its Siri virtual assistant in 2011 and the acquisition of numerous AI-focused startups, such as the California-based WaveOne. However, during a second-quarter earnings call, Apple CEO Tim Cook emphasized the importance of being careful in approaching AI technology.

To address these concerns, OpenAI has implemented a series of updates to ChatGPT to improve privacy controls. Moreover, U.S. Senators Michael Bennet and Peter Welch have introduced the Digital Platform Commission Act to establish a federal agency responsible for regulating AI and digital platforms. This comes after a group of AI experts, including OpenAI CEO Sam Altman, called for regulation and federally enforced safety standards.

In a statement, Senator Bennet acknowledged the speed at which technology is evolving, expressing the need for a federal agency that prioritizes the public interest and ensures AI tools and digital platforms adhere to safety standards.

Although many companies are embracing various AI technologies, the surge in concerns over the potential misuse of AI platforms demonstrates the importance of ensuring these tools are secure, compliant, and reliable for both corporate and consumer use.

Source: Decrypt
