ChatGPT can write you malware code if you insist long enough

Security firm CyberArk has published a worrying report on OpenAI’s ChatGPT. According to the researchers, the language model can be used to create sophisticated malware that ordinary cybersecurity measures struggle to detect. Such malware is “polymorphic”: it constantly changes its appearance, or signature, by generating new decryption routines. Security solutions that rely on file signatures for detection, such as anti-virus and anti-malware software, are therefore unable to recognise and block the threat.
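To illustrate why signature-based scanners struggle with polymorphic code, here is a minimal, harmless Python sketch (the payload string and XOR keys are invented for illustration, not taken from the CyberArk report): the same payload wrapped with two different keys produces two entirely different file signatures, even though both variants decode to identical content.

```python
import hashlib

def xor_encode(payload: bytes, key: int) -> bytes:
    """Re-encode the payload with a single-byte XOR key."""
    return bytes(b ^ key for b in payload)

# A stand-in for program logic; the bytes themselves are harmless here.
payload = b"do_something()"

# Two "generations" of the same program, each wrapped with a new key.
variant_a = xor_encode(payload, key=0x41)
variant_b = xor_encode(payload, key=0x7F)

# A signature-based scanner compares file hashes against a blocklist.
sig_a = hashlib.sha256(variant_a).hexdigest()
sig_b = hashlib.sha256(variant_b).hexdigest()

print(sig_a != sig_b)  # -> True: the two variants have different signatures
print(xor_encode(variant_a, 0x41) == xor_encode(variant_b, 0x7F))  # -> True: yet both decode to the same payload
```

Because each new wrapping yields a fresh hash, a blocklist entry for one variant says nothing about the next, which is exactly the weakness the report describes.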

Cybersecurity experts warn that ChatGPT can create hard-to-detect malware

According to CyberArk experts, code developed using ChatGPT has exhibited “advanced capabilities” that could “easily evade security software solutions”. This was achieved despite the fact that OpenAI has implemented filters and monitors ChatGPT activity to prevent its model from being used for malicious purposes.


Although ChatGPT refuses to create malware when asked directly, persistent users can manipulate the language model by phrasing their requests indirectly, eventually obtaining malicious code. The safeguards currently in place therefore appear insufficient to prevent the technology from being used to create new computer viruses.

It is enough for users to receive a response from ChatGPT that includes such dangerous code; from there, they can use natural language to ask the AI to modify the code and mould it to their needs. This does, of course, require a fairly good understanding of the systems the would-be attackers want to target, but it significantly reduces the development time of these dangerous programs.


Even though ChatGPT is still in its infancy, having launched only a few months ago, the technology OpenAI has put at everyone’s fingertips can clearly also be used for malicious purposes. Nor is it always reliable for legitimate ones: the information it provides is often erroneous or outdated, as the AI is not connected to the internet and its training data only extends to 2021.
