AI-coded smart contracts may be flawed, could ‘fail miserably’ when attacked: CertiK



Artificial intelligence tools such as OpenAI’s ChatGPT will create more problems, bugs and attack vectors if used to write smart contracts and build cryptocurrency projects, says an executive from blockchain security firm CertiK.

Kang Li, CertiK’s chief security officer, explained to Cointelegraph at Korean Blockchain Week on Sept. 5 that ChatGPT cannot pick up logical code bugs the same way that experienced developers can.

Li suggested ChatGPT may create more bugs than it identifies, which could be catastrophic for first-time or amateur coders looking to build their own projects.

“ChatGPT will enable a bunch of people that have never had all this training to jump in, they can start right now and I start to worry about morphological design problems buried in there.”

“You write something and ChatGPT helps you build it but because of all these design flaws it may fail miserably when attackers start coming,” he added.
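Li did not cite a specific flaw, but one classic example of the kind of buried design problem he describes is the checks-effects-interactions mistake behind re-entrancy exploits: a withdrawal routine that hands control to the caller before updating its own ledger. The sketch below is purely illustrative (hypothetical code, not from the article), modeling the bug and its standard fix in plain Python:

```python
class VulnerableVault:
    """Toy ledger with a checks-effects-interactions bug:
    `send` hands control to the recipient BEFORE the balance is
    reduced, so a malicious recipient can re-enter `withdraw`
    and drain more than it owns."""

    def __init__(self):
        self.balances = {}

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw(self, user, amount, send):
        if self.balances.get(user, 0) >= amount:  # check
            send(amount)                          # interaction first (bug!)
            self.balances[user] -= amount         # effect last


class SafeVault(VulnerableVault):
    """Same ledger with the standard fix: update state first."""

    def withdraw(self, user, amount, send):
        if self.balances.get(user, 0) >= amount:  # check
            self.balances[user] -= amount         # effect first
            send(amount)                          # interaction last


def drain(vault, attacker_deposit):
    """Simulate a re-entrant attacker; returns the total extracted."""
    vault.deposit("victim", 100)
    vault.deposit("attacker", attacker_deposit)
    stolen = []

    def evil_callback(amount):
        stolen.append(amount)
        if len(stolen) < 3:  # re-enter while the balance is still unspent
            vault.withdraw("attacker", amount, evil_callback)

    vault.withdraw("attacker", attacker_deposit, evil_callback)
    return sum(stolen)
```

Against `VulnerableVault`, a 10-unit deposit lets the attacker withdraw three times before any balance update; against `SafeVault`, the re-entrant calls fail the balance check and the attacker gets only its own 10 units back. A code generator can easily emit the first version, and nothing about it looks wrong line by line, which is exactly why Li argues logical review by experienced developers still matters.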

Instead, Li believes ChatGPT should be used as an engineer’s assistant because it’s better at explaining what a line of code actually means.

“I think ChatGPT is a great helpful tool for people doing code analysis and reverse engineering. It’s definitely a good assistant and it’ll improve our efficiency tremendously.”

The Korean Blockchain Week crowd gathering for a keynote. Source: Andrew Fenton/Cointelegraph

He stressed that it shouldn’t be relied on for writing code — especially by inexperienced programmers looking to build something monetizable.

Li said he will stand by his assertions for at least the next two to three years, though he acknowledged that rapid developments in AI may vastly improve ChatGPT’s capabilities.

AI tech getting better at social engineering exploits

Meanwhile, Richard Ma, the co-founder and CEO of Web3 security firm Quantstamp, told Cointelegraph at KBW on Sept. 4 that AI tools are becoming more successful at social engineering attacks — many of which are indistinguishable from attempts by humans.

Ma said Quantstamp’s clients are reporting an alarming number of increasingly sophisticated social engineering attempts.

“[With] the recent ones, it looks like people have been using machine learning to write emails and messages. It’s a lot more convincing than the social engineering attempts from a couple of years ago.”

While the ordinary internet user has been plagued with AI-generated spam emails for years, Ma believes we’re approaching a point where we won’t know if malicious messages are AI or human-generated.

Related: Twitter Hack: ‘Social Engineering Attack’ on Employee Admin Panels

“It’s gonna get harder to distinguish between humans messaging you [or] pretty convincing AI messaging you and writing a personal message,” he said.

Crypto industry pundits are already being targeted, while others are being impersonated by AI bots. Ma believes it will only get worse.

“In crypto, there’s a lot of databases with all the contact information for the key people from each project. So the hackers have access to that [and] they have an AI that can basically try to message people in different ways.”

“It’s pretty hard to train your whole company to not respond to those things,” Ma added.

Ma said better anti-phishing software is coming to market that can help companies mitigate potential attacks.

Magazine: AI Eye: Apple developing pocket AI, deep fake music deal, hypnotizing GPT-4