AI in Blockchain Security: Potential and Limitations in Smart Contract Audits


As artificial intelligence continues to evolve, blockchain security firms are assessing its potential for identifying weaknesses in smart contract code. OpenZeppelin, a leading blockchain security company, recently ran an experiment to gauge how effectively OpenAI’s GPT-4 model can discover smart contract vulnerabilities.

The study aimed to evaluate whether GPT-4’s impressive performance on traditional coding problems and academic exams carries over to smart contract code. The model was tested against 28 Ethernaut challenges, a series of tasks built around known smart contract vulnerabilities. GPT-4 solved 19 of the 23 challenges introduced before its September 2021 training data cutoff, but failed four of the final five, which were added after that date.
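OpenZeppelin has not published its exact test harness, but a minimal sketch of how such an evaluation might be scripted with the OpenAI Python client is shown below. The prompt wording, the model name, and the load_challenge helper are illustrative assumptions, not details from the study.

```python
# Illustrative only: submit Ethernaut-style challenge code to GPT-4 and
# record the model's vulnerability analysis.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY set in the
# environment; `load_challenge` is a hypothetical helper that would return
# the Solidity source of a given Ethernaut level.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "You are auditing a Solidity smart contract. Identify any vulnerability, "
    "explain the attack vector, and propose a fix.\n\nContract:\n{source}"
)

def analyze_challenge(source: str, model: str = "gpt-4") -> str:
    """Ask the model for a vulnerability analysis of one contract."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(source=source)}],
    )
    return response.choices[0].message.content

# Example usage (with the hypothetical helper):
# source = load_challenge("Fallback")  # an Ethernaut level name
# print(analyze_challenge(source))
```

In practice, an auditor would still have to read each response and verify whether the flagged vulnerability, attack vector, and proposed fix are real, which is exactly the gap OpenZeppelin's findings point to.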

Despite this seemingly reasonable success rate, experts at OpenZeppelin found that the AI tool’s outputs lacked the “reliable reasoning” required for security purposes. In some instances, GPT-4 correctly identified a vulnerability but failed to explain the attack vector or propose a fix. In others, the model relied on false information and even invented vulnerabilities that did not exist.

This experiment highlights the limitations of AI in dealing with complex tasks such as identifying smart contract vulnerabilities. Mariko Wakabayashi, machine learning lead at OpenZeppelin, pointed out that while AI has strengths in solving creative and open-ended tasks, its weaknesses are too great for reliable security applications.

Human auditors therefore remain crucial in blockchain security, since experts must still verify the accuracy of the solutions AI models propose. Even though AI can add value to the field, it is unlikely to replace the expertise and judgment of human auditors in the near future.

While some crypto companies, such as Crypto.com, are optimistic about using AI in their operations, others, like Bitget, take a more cautious approach. Misinformation and false investment advice were key factors in Bitget’s decision to limit its use of AI tools.

In conclusion, AI developments hold significant promise for improving efficiency and acting as a catalyst for innovation in the realm of blockchain and cryptocurrency. However, the technology remains supplemental to human expertise. Blockchain security professionals must not rely solely on AI to identify and address vulnerabilities in smart contracts.

Source: Blockworks
