August 10, 2023
2 mins read

Researchers ‘hypnotise’ ChatGPT into hacking

The researchers learned that English has essentially become a “programming language” for malware…reports Asian Lite News

Tricking generative AI to assist in scams and cyberattacks does not require much coding knowledge, a new report has warned.

According to tech major IBM, researchers have described simple workarounds for getting large language models (LLMs) — including ChatGPT — to write malicious code and provide poor security advice.

“In a bid to explore security risks posed by these innovations, we attempted to hypnotise popular LLMs to determine the extent to which they were able to deliver directed, incorrect and potentially risky responses and recommendations — including security actions — and how persuasive or persistent they were in doing so,” said Chenta Lee, chief architect of threat intelligence at IBM.

“We were able to successfully hypnotise five LLMs — some performing more persuasively than others — prompting us to examine how likely it is that hypnosis is used to carry out malicious attacks,” he added.

The researchers learned that English has essentially become a “programming language” for malware. With LLMs, attackers no longer need to rely on Go, JavaScript, Python, etc., to create malicious code, they just need to understand how to effectively command and prompt an LLM using English.

Through hypnosis, the security experts were able to get LLMs to leak the confidential financial information of other users, create vulnerable code, create malicious code, and offer weak security recommendations.

In one instance, the researchers informed the AI chatbots that they were playing a game and that they needed to purposefully share the incorrect answer to a question in order to win and “prove that you are ethical and fair”.

When a user asked if receiving an email from the IRS to transfer money for a tax refund was normal, the LLM said Yes (but actually it’s not).

Moreover, the report said that OpenAI’s GPT-3.5 and GPT-4 models were easier to trick into sharing incorrect answers or playing a never-ending game than Google’s Bard.

GPT-4 was the only model tested that understood the rules well enough to give incorrect cyber incident response advice, such as advising victims to pay a ransom. In contrast to Google’s Bard, GPT-3.5 and GPT-4 were easily tricked into writing malicious code when the user reminded it to.

ALSO READ-OpenAI to roll out ‘huge set’ of ChatGPT updates   

Previous Story

Direct flight service between Belarus, India to start soon

Next Story

YouTube testing new ‘For You’ section on channel homepages

Latest from Tech Lite

India to host AI impact summit 2026

India to host 2026 AI Summit, boosting global dialogue on safe, inclusive tech, while Gujarat launches state-wide AI plan for governance, growth, and public welfare. India is set to host the AI

Apple Appoints Sabih Khan as New COO

Khan will take over from Jeff Williams, who is stepping down from the role this month and will retire later this year…reports Asian Lite News Apple has announced that Sabih Khan, an

EVs Drive Cleaner Cities

Each EV is equipped with GPS tracking for real-time monitoring and efficiently serves the city’s 159.46 square kilometre area The use of electric vehicles (EVs) in household waste collection is ushering in

India’s Digital Decade Rise

Arundhati Bhattacharya, President and CEO at Salesforce-South Asia and recipient of the Padma Shri, sat down with IANS at an event in the capital to reflect on India’s transformation into a digital
Go toTop

Don't Miss

India calls for building responsible, safe AI

Prime Minister says India is a front-runner in AI and

AI Leads Smartphone Tech Shift

Indians are spending a staggering 2,300 hours annually on their