ChatGPT has massively disrupted the modern tech landscape, but with new AI technologies come new vulnerabilities, as Kaspersky details in a new report.

The report also showcases how ChatGPT can be used to aid crime, with the chatbot generating code to resolve problems that arise when stealing user data. Kaspersky notes that ChatGPT can lower the barrier to entry for users looking to put AI's knowledge to malicious ends. "Actions that previously required a team of people with some experience can now be performed at a basic level even by rookies," Kaspersky notes in its report.

Jailbreak prompts bought and sold

These jailbreak prompts are allegedly commonplace, circulating among users of "social platforms and members of shadow forums". However, the security firm notes that not all prompts are engineered to enable illegal actions; some are simply used to get more precise results.

The report also states that extracting usually restricted information from ChatGPT can be as simple as asking it again.

ChatGPT can additionally be used in penetration testing scenarios, according to the report, while the base LLM can be embedded in "evil" modules that seek to harm and are focused on illicit activities.

The report warns: "it is likely that the capabilities of language models will soon reach a level at which sophisticated attacks will become possible."

Until then, just remember to keep your personal details as safe and secure as possible.

Sayem Ahmed was Dexerto’s Tech Editor in the UK team, leading hardware coverage globally. Sayem is an expert in all things Nvidia, AMD, Intel, and PC components. He has over 10 years of experience, with bylines at Eurogamer, IGN, Trusted Reviews, Kotaku, and many more.