Increasing interest in applying OpenAI’s artificial intelligence chatbot, ChatGPT, to Web3 security has been met with unexpected scrutiny, as illustrated by a recent report from the bug bounty and security services platform Immunefi. Released in November 2022, ChatGPT crossed the 100 million user mark within two months, but growing concerns about security, privacy, and ethics gave many whitehats pause — a fact substantiated by Immunefi’s May 2023 survey of 165 of the most active whitehats in the Web3 security community.
The report found that a high percentage of whitehats use ChatGPT in their security practice (76.4%) and see potential in the technology for educational purposes (74%). But there is a clear split between its perceived gains and its pitfalls. The community saw merit in using it for smart contract auditing (60.6%) and vulnerability discovery (46.7%), yet voiced significant apprehension about its limited accuracy in identifying security vulnerabilities (64.2%), its lack of domain-specific knowledge (61.2%), and its difficulty handling large-scale audits (61.2%).
Confidence levels were mixed at best: only 35% of respondents were moderately confident, and 26% not confident at all, in ChatGPT’s ability to identify Web3 security vulnerabilities. A hefty 52% expressed concern that general use of ChatGPT presents security problems of its own, including phishing, scams, and ransomware development.
On the positive side, 75% still believed that ChatGPT could improve Web3 security research, provided it undergoes more rigorous fine-tuning and training. Interestingly, 68% of respondents would still recommend ChatGPT to colleagues for Web3 security purposes, even in light of its shortfalls.
However, the report uncovered a worrying trend: an influx of ChatGPT-generated bug reports, predominantly spurious. Immunefi labelled them as spam, flagging submissions that confused key programming concepts or cited non-existent functions. No real vulnerability has ever been identified through these ChatGPT-generated reports. Faced with this flood of spam, Immunefi enforced stringent rules, permanently banning accounts that submit ChatGPT-generated reports. Tellingly, 21% of all banned accounts were penalized for this very offence.
Evidently, AI in security is entering complex territory. As the integration of AI deepens, the community must balance the technology’s tremendous potential against its very real risks, ensuring that the harms don’t outstrip the benefits.
Source: Cryptonews