ChatGPT has taken the world by storm, reaching over 100 million monthly users in January 2023 and setting the record for the fastest-growing app since its launch at the end of 2022. This AI chatbot has a wide range of uses, from writing essays to penning a business plan; it can even generate code. But what exactly is it, and what are the potential cybersecurity risks?
What is ChatGPT?
ChatGPT is an AI-driven natural language processing tool created by OpenAI. Designed to answer questions and assist with tasks, it is currently open to the public and free of charge, with additional features and functionality available through a paid subscription.
The application sources its training data from textbooks, websites, and articles, using these to model its own language and responses to the questions posed. It is well suited to chatbots, AI system conversations, and virtual assistant applications, but it can also develop code, write articles, translate, and debug, among other tasks.
Why does ChatGPT pose a risk to cybersecurity?
Researchers have found that ChatGPT can develop code that can be used for malicious purposes. While ChatGPT has content filters in place to restrict malicious output, these filters can be bypassed.
For example, the software company CyberArk was able to bypass these filters and use the program to create polymorphic malware. They were also able to use ChatGPT to mutate the code, producing code that was highly evasive and difficult to detect, and to generate programs that could be used in malware and ransomware attacks. Cybersecurity solutions provider Check Point likewise used ChatGPT to create a convincing spear-phishing attack.
When Forbes magazine asked the AI bot itself whether it was a cybersecurity threat, it answered that it is not a threat, but added that “any technology can be misused.”
As ChatGPT is built on machine learning, the threat will continue to grow in line with the demand for malicious code. With the increasing input it receives, it will learn to craft more sophisticated answers, potentially leading to more sophisticated coding capabilities. And with these capabilities available to the public, threat actors will need less skill to carry out attacks.
BlackFog can help defend against these attacks.
We did some research of our own and found that ChatGPT is capable of writing a PowerShell attack if asked in a “non-malicious” way. Take a look at the video below to find out how the code was created, what happened during the attack, and how BlackFog prevented the attacker from stealing the victim’s data.
The PowerShell script is generated quickly by ChatGPT and can be easily used in an attack.
As you can see, once the script has been installed on the victim’s device, data is exfiltrated every five seconds, yet the victim is completely unaware that anything is happening in the background.
BlackFog, once installed, immediately stopped the attack in its tracks, and no further data was exfiltrated from the victim’s device. This happened automatically, without any intervention from the user. The attacker then sees that the script has stopped functioning and has no option but to abandon the attack, while the user has peace of mind that their data is safe.
BlackFog’s Anti Data Exfiltration (ADX) technology automatically blocks all types of cyberthreats and ensures that no unauthorized data leaves an organization’s devices or networks. The 24/7 protection is on-device, meaning that no matter where employees are working, as long as they have an internet connection, they are fully protected.
With ChatGPT growing in popularity, and the reality that its machine learning capabilities will only produce more sophisticated code, it is inevitable that less-skilled threat actors will be empowered to launch cyberattacks. To stay ahead of cybercriminals, organizations must evaluate their cybersecurity strategy and ensure they have third-generation defenses in place to combat these attacks.