Published at: 14 Jan 2023 10:02 AM (IST)
The AI-driven natural language processing tool rapidly amassed more than one million users, who have used the web-based chatbot for everything from writing wedding speeches and hip-hop lyrics to academic essays and computer code.
Not only have ChatGPT’s human-like abilities taken the internet by storm, they have also set a number of industries on edge: a school in New York banned ChatGPT over fears that it could be used to cheat; copywriters are already being replaced; and reports claim that Google is so alarmed by ChatGPT’s capabilities that it issued a “code red” to ensure the survival of the company’s search business.
The cybersecurity industry, a community long sceptical of the real-world impact of modern AI, also appears to be taking notice, amid concerns that ChatGPT could be abused by hackers with limited resources and no technical knowledge.
Just weeks after ChatGPT’s introduction, the Israeli cybersecurity company Check Point gave a demonstration showing how the web-based chatbot, used in conjunction with OpenAI’s code-writing system Codex, could generate a phishing email capable of carrying a malicious payload. Use cases like this one, according to Sergey Shykevich, manager of the threat intelligence group at Check Point, who spoke with GetPureGyan, demonstrate that ChatGPT has the “potential to significantly alter the cyber threat landscape.” He added that it represents “another step forward in the dangerous evolution of increasingly sophisticated and effective cyber capabilities.”
Using the chatbot, GetPureGyan was also able to create a phishing email that seemed to come from a reputable source. When we first asked ChatGPT to compose a phishing email, the chatbot refused, replying: “I am not trained to generate or promote dangerous or destructive information.” However, by simply rewording the request, we were able to easily bypass the programme’s built-in guardrails.
The majority of the security experts GetPureGyan spoke to believe that ChatGPT’s ability to write authentic-sounding phishing emails — phishing being the most common attack vector for ransomware — will drive widespread adoption of the chatbot among cybercriminals, particularly those who are not native English speakers.
According to Chester Wisniewski, a senior research scientist at Sophos, it is easy to see ChatGPT being misused for “all kinds of social engineering attacks” in which the perpetrators want their messages to appear to be written in more believable American English.
“At a fundamental level, I have been able to write some great phishing lures with it,” Wisniewski told GetPureGyan. “I expect it could be utilised to have more realistic interactive conversations for business email compromise and even attacks over Facebook Messenger, WhatsApp, or other chat apps.”
The idea of a chatbot producing convincing prose and holding realistic interactions is not far-fetched. “For instance, you can tell ChatGPT to pretend to be a GP practice, and it can generate life-like text in seconds,” Hanah Darley, head of threat research at Darktrace, told GetPureGyan. It is not difficult to imagine how threat actors might use this as a force multiplier.
Check Point has also recently raised the alarm over the chatbot’s apparent ability to help hackers write malicious code. The researchers say they have observed at least three instances in which hackers with no technical skills boasted about using ChatGPT’s AI for malicious purposes. One hacker posted code generated by ChatGPT on a dark web forum which, according to the hacker, could be used to steal files of interest, compress them, and send them across the web. Another user published a Python script, stating it was the very first script they had ever written. Check Point noted that although the malware seemed benign, it could “easily be modified to encrypt someone’s machine completely without any user interaction.” According to Check Point, the same forum member had previously sold access to compromised enterprise systems as well as stolen data.
GetPureGyan was recently given a demonstration by Dr. Suleyman Ozarslan, a security researcher and co-founder of Picus Security, in which he showed how ChatGPT was used to write phishing lures themed around the World Cup and ransomware code targeting macOS. Ozarslan asked the chatbot to write code in Swift, the programming language used for developing apps for Apple devices, that could find Microsoft Office documents on a MacBook, send them over an encrypted connection to a web server, and then encrypt the Office documents on the MacBook.
“I have no doubt that ChatGPT and other tools like this will democratise cybercrime,” said Ozarslan. “It’s bad enough that anyone can already purchase ransomware code ‘off-the-shelf’ on the dark web; now nearly anybody can create it themselves, which is a much bigger problem.”
Unsurprisingly, news that ChatGPT can produce malicious code raised eyebrows across the industry. But a number of industry professionals have moved to dispel concerns that an AI chatbot could turn would-be hackers into full-fledged criminals. In a post published on Mastodon, an independent security researcher known as The Grugq ridiculed Check Point’s assertion that ChatGPT would “super charge cyber thieves who suck at coding.”
“They need to register domains and maintain infrastructure. They need to update websites with fresh content and test the software to make sure it still barely works on the new platform, just as it barely worked on the old one. They need to monitor their infrastructure for health, and keep an eye on the press to make sure their campaign isn’t in an article on the top five most embarrassing phishing phails,” The Grugq said. “Actually obtaining malware and using it is a very minor part of the shitwork that goes into being a bottom-feeder cybercriminal.”
Some believe that ChatGPT’s ability to generate malicious code comes with a silver lining.
“ChatGPT gives defenders the ability to generate code to simulate adversaries, or even to automate tasks to make their work more manageable. It has already been used for a number of impressive things, including personalised education, drafting newspaper articles, and writing computer code — and these are just a few examples,” said Laura Kankaala, head of threat intelligence at F-Secure. “However, it should be noted that it can be risky to place full trust in the text and code generated by ChatGPT. The code it creates may contain flaws or security vulnerabilities.” Kankaala went on to cast further doubt on the reliability of ChatGPT’s output, noting that the text it produces may also contain blatant factual inaccuracies.
Jake Moore, a researcher at ESET, said that as the technology develops, “if ChatGPT learns sufficiently from its input, it may soon be able to assess possible attacks on the fly and provide positive recommendations to boost security.”
Security professionals are not the only ones uncertain about the role ChatGPT will play in the future of cybersecurity. We were curious to hear what ChatGPT had to say about itself, so we put the question to the chatbot.
“It is difficult to predict exactly how ChatGPT or any other technology will be used in the future, because it depends on how it is implemented and the intentions of those who use it,” the chatbot replied. Ultimately, it said, the impact ChatGPT has on information security will be determined by how it is used, and it is essential to understand the potential dangers and take the necessary precautions to reduce them.