ChatGPT can’t beat human smart contract auditors yet: OpenZeppelin’s Ethernaut challenges



While generative artificial intelligence (AI) can perform a wide variety of tasks, OpenAI’s ChatGPT-4 is currently unable to audit smart contracts as effectively as human auditors, according to recent testing.

In an effort to determine whether AI tools could replace human auditors, blockchain security firm OpenZeppelin’s Mariko Wakabayashi and Felix Wegener pitted ChatGPT-4 against the firm’s Ethernaut security challenge.

Although the AI model passed a majority of the levels, it struggled with newer ones introduced after its September 2021 training data cutoff date, as the plugin enabling web connectivity was not included in the test.

Ethernaut is a wargame played within the Ethereum Virtual Machine consisting of 28 smart contracts — or levels — to be hacked. In other words, levels are completed once the correct exploit is found.

According to testing from OpenZeppelin’s AI team, ChatGPT-4 found the correct exploit and passed 20 of the 28 levels, though it needed additional prompting to solve some of them beyond the initial question: “Does the following smart contract contain a vulnerability?”
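The article only discloses the initial prompt OpenZeppelin used, not its test harness. As a rough illustration, a sketch of how such a test could be driven might look like the following; the function name and follow-up mechanism are hypothetical, and the actual code sending the prompt to a model is omitted.

```python
from typing import Optional

# The initial prompt quoted in the article; everything else here is illustrative.
INITIAL_PROMPT = "Does the following smart contract contain a vulnerability?"

def build_audit_prompt(contract_source: str,
                       follow_up: Optional[str] = None) -> str:
    """Compose the text sent to the model for one Ethernaut level.

    An optional follow-up nudge models the extra prompting that some
    levels reportedly required after the initial question.
    """
    prompt = f"{INITIAL_PROMPT}\n\n{contract_source}"
    if follow_up:
        prompt += f"\n\n{follow_up}"
    return prompt

# Example: build the prompt for a (truncated) level source.
sample = "contract Fallback { /* ... level source ... */ }"
print(build_audit_prompt(sample, follow_up="Focus on the fallback function."))
```

In practice the returned string would be sent to a chat model and the answer checked against the known exploit for that level, repeating with follow-ups until the level is solved or abandoned.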

In response to questions from Cointelegraph, Wegener noted that OpenZeppelin expects all of its auditors to be capable of completing every Ethernaut level.

While Wakabayashi and Wegener concluded that ChatGPT-4 is currently unable to replace human auditors, they highlighted that it can still be used as a tool to boost the efficiency of smart contract auditors and detect security vulnerabilities, noting:

“To the community of Web3 BUIDLers, we have a word of comfort — your job is safe! If you know what you are doing, AI can be leveraged to improve your efficiency.”

When asked whether a tool that increases the efficiency of human auditors would mean firms like OpenZeppelin would not need as many, Wegener told Cointelegraph that the total demand for audits exceeds the capacity to provide high-quality audits, and they expect the number of people employed as auditors in Web3 to continue growing.

Related: Satoshi Nak-AI-moto: Bitcoin’s creator has become an AI chatbot

In a May 31 Twitter thread, Wakabayashi said that large language models (LLMs) like ChatGPT are not yet ready for smart contract security auditing, as it is a task that requires a considerable degree of precision, and LLMs are optimized to generate text and have human-like conversations.

However, Wakabayashi suggested that an AI model trained using tailored data and output goals could provide more reliable solutions than chatbots currently available to the public trained on large amounts of data.

AI Eye: 25K traders bet on ChatGPT’s stock picks, AI sucks at dice throws, and more




