
ChatGPT: A new danger in the cybersecurity realm
ChatGPT can generate code that can be put to malicious use. And while ChatGPT has some content filters in place to restrict malicious output, these filters can be easily bypassed. Watch this video to see how the code was created, what happened during the attack, and how BlackFog prevented the attacker from stealing the victim's data.