WormGPT: ChatGPT’s evil twin should have us all deeply concerned

ChatGPT has taken the world by storm, racking up over 100 million users since its launch late last year. OpenAI’s chatbot has put artificial intelligence on the map in a big way, sparking a number of follow-ups like Microsoft Bing Chat and Google Bard in its wake. However, there’s a new AI in town — WormGPT, and it’s not here to make your life any easier.

WormGPT isn’t an AI chatbot developed to amusingly offer the AI assistance of a wriggly invertebrate, à la the feline-focused CatGPT. Instead, it’s a far more malicious tool designed without ethical boundaries for the sole purpose of bolstering the productivity, raising the effectiveness, and lowering the bar to entry of your common garden-variety cybercriminal.

WormGPT: Artificial iniquity

Shamelessly presented as a black hat alternative to current GPT models, WormGPT could potentially become the most useful tool in the criminal arsenal since the invention of the jimmy bar or the balaclava.

Doing away with all of that ethical bottlenecking found in mainstream generative AI models, WormGPT gives anyone with a spare €60 a gateway to AI-assisted criminality, including phishing attacks, social engineering techniques, and even the creation of custom malware.

Powered by the open-source GPT-J language model (a Generative Pre-trained Transformer built with the JAX framework), WormGPT has capabilities similar to those of other Large Language Models (LLMs), meaning it can generate human-like answers to queries and questions and produce responses shaped by the data it was trained on.
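To put that in perspective, GPT-J's weights are freely downloadable, and loading the model takes only a handful of lines with off-the-shelf tooling. Below is a minimal, entirely benign sketch assuming the Hugging Face transformers library and its published GPT-J checkpoint; the model name, prompt, and generation settings are illustrative assumptions, not anything taken from WormGPT itself.

# Minimal sketch: loading and prompting an openly available LLM (GPT-J).
# Model ID, prompt, and sampling settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-j-6B"  # publicly released 6B-parameter GPT-J checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Write a short, polite email reminding a colleague about Friday's meeting."
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation; the same prompt can yield many different variations.
outputs = model.generate(**inputs, max_new_tokens=120, do_sample=True, temperature=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The point isn't the specific output, but how little effort this takes: strip away the safety layer a hosted chatbot enforces, and the same handful of lines will happily generate whatever text the prompt asks for.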

It’s believed WormGPT’s training focused primarily on malware and phishing data, with one of the tool’s headline features being its ability to compose convincing and sophisticated phishing communications in order to further business email compromise (BEC) attacks.

In plain old human speak, the bar for entry when it comes to cybercrime has now been lowered to such a degree that if you were listening closely over the weekend you might have heard the distant echo of said bar bouncing off of Satan’s skull.

(Image: AI face with machine code eyes. Image credit: Wired)

The dark side of the boom

It was only a matter of time before it happened, and you’d have been nigh on crazy to assume the AI boom would only include friendly chatbots, generative art, and easy ways to cheat on your homework.

In fact, the list of disturbing ways AI is currently being used grows larger every day, and the release of WormGPT could be the first undeniable proof that we’re flying headfirst into a cybersecurity hellscape of epic proportions.

Security analysts have been in a tizzy about the potential for cybercriminals to weaponize AI for some time, with only a thin buffer zone of easily skirted ethical roadblocks standing in the way of bad actors’ intent. That buffer zone is increasingly eroding thanks to mass attempts to “jailbreak” ChatGPT into acting without restrictions.

Juhani Hintikka, CEO of cybersecurity firm WithSecure, recently confirmed in an interview that the company has already observed malware samples generated by ChatGPT — and that the generative nature of LLMs like it allows results to be produced in “many different variations,” with these tools spawning mutated versions of malicious code that are “harder for defenders to detect.”

In essence, our ability to defend against a potential tsunami of security threats could be pushed to its very limits as highly customized, unique, and varied malware is generated in mere moments by AI tools like ChatGPT and WormGPT.

Outlook

With the first AI chatbot built specifically for criminal and nefarious deeds so brazenly announced, there’s little doubt it’s just the first of many ever-advancing, ever-improving models ne’er-do-wells will have to choose from in the near future.

Ironically, AI will increasingly become a vital tool to prevent the flood of AI-generated cybercrime over the coming years — an AI arms race to see which side is more proficient in its prompts.
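On the defensive side of that arms race, the same off-the-shelf tooling can be pointed at incoming mail rather than outgoing scams. The toy sketch below assumes the Hugging Face transformers library and a general-purpose zero-shot classifier (facebook/bart-large-mnli); it is not a production email filter, just a hint of what AI-assisted defense might look like.

# Toy defensive sketch: flagging a suspicious email with a zero-shot classifier.
# The model choice and candidate labels are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

email = (
    "Hi, this is your CEO. I need you to urgently wire $24,000 to a new "
    "supplier today. Keep this confidential and confirm once it's done."
)

# Score the message against two labels; results are sorted by confidence.
result = classifier(email, candidate_labels=["phishing or fraud attempt", "routine business email"])
print(result["labels"][0], round(result["scores"][0], 3))

Real email security products layer far more signals on top of anything this simple, of course, but the underlying point stands: the same class of models powering the attacks can be turned toward detecting them.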

The Doomsday Clock currently sits at 90 seconds to midnight, pushed part of the way there by our rapid adoption of disruptive technologies. Meanwhile, the digital doomsday clock that monitors our internet security might as well be seconds away from midnight.

So maybe it’s time for us all to climb into our antivirus Anderson shelters and fill our bellies with MRE Malwarebytes as we await the only likely outcome when two disruptive forces come together on the digital landscape: mutually assured destruction.
