
Is ChatGPT a cybersecurity threat?

Since its debut in November, ChatGPT has become the internet's new favorite toy. The AI-powered natural language processing tool quickly amassed more than 1 million users, who have used the web-based chatbot for everything from generating wedding speeches and hip-hop lyrics to writing academic essays and computer code.

ChatGPT's human-like abilities have not only taken the internet by storm, but have also pushed several industries to the brink: a New York school banned ChatGPT for fear it could be used for cheating, copywriters are already being replaced, and reports claim that Google is so alarmed by ChatGPT's capabilities that it issued a "code red" to ensure the survival of the company's search business.

The cybersecurity industry, a community that has long been skeptical of the potential implications of modern AI, also appears to be taking note, amid concerns that ChatGPT could be abused by hackers with limited resources and no technical knowledge.

Israeli cybersecurity company Check Point has demonstrated how the web-based chatbot, used in conjunction with Codex, OpenAI's code-writing system, could create a phishing email capable of carrying a malicious payload. Check Point's threat intelligence group manager, Sergey Shykevich, said he believes use cases like this illustrate that ChatGPT has the "potential to significantly alter the cyber threat landscape," adding that it represents "another step forward in the dangerous evolution of increasingly sophisticated and effective cyber capabilities."

Many security experts believe that ChatGPT's ability to write legitimate-sounding phishing emails, the main attack vector for ransomware, will see the chatbot widely adopted by cybercriminals, particularly those who are not native English speakers.

Chester Wisniewski, Principal Research Scientist at Sophos, said it's easy to see ChatGPT being abused for "all sorts of social engineering attacks" where the perpetrators want to appear to write in more convincing American English.

The idea that a chatbot could write compelling text and realistic interactions is not that far-fetched. "For example, you can tell ChatGPT to pretend to be a GP surgery, and it will generate realistic text in seconds," said Hanah Darley, who leads threat research at Darktrace. "It's not hard to imagine how threat actors could use this as a force multiplier."

Check Point also recently sounded the alarm about the chatbot's apparent ability to help cybercriminals write malicious code. The researchers say they witnessed at least three cases in which hackers with no technical skills bragged about how they had leveraged ChatGPT's artificial intelligence for malicious purposes. One hacker on a dark web forum displayed code written by ChatGPT that allegedly stole files of interest, compressed them, and sent them across the web. Another user posted a Python script, which they claimed was the first script they had ever created. Check Point noted that while the code seemed benign, it could "easily be modified to encrypt someone's machine completely without any user interaction."
