    Nation-state APTs employ Dark AI, says Kaspersky expert

    Brace for more sophisticated and stealthy attacks driven by the rise of Dark AI in APAC. This is among the key findings shared by global cybersecurity and digital privacy company Kaspersky during its APAC Cyber Security Weekend 2025 in Da Nang, Vietnam.

    The event featured a timely discussion on how attackers employ AI technology to wage digital attacks around the world, from simple phishing campaigns to nation-state-backed cyber espionage.

    “Since ChatGPT gained global popularity in 2023, we have observed several useful adoptions of AI, from mundane tasks like video creation to technical threat detection and analysis. In the same breath, bad actors are using it to enhance their attacking capabilities. We are entering an era in cybersecurity and in our society where AI is the shield and Dark AI is the sword,” says Sergey Lozhkin, head of the Global Research and Analysis Team (GReAT) for META and APAC at Kaspersky.

    Dark AI refers to the local or remote deployment of non-restricted large language models (LLMs) within a full framework or chatbot system that is used for malicious, unethical, or unauthorized purposes. These systems operate outside standard safety, compliance, or governance controls, often enabling capabilities such as deception, manipulation, cyberattacks, or data abuse without oversight.

    Dark AI in action

    Lozhkin shared that the most common and well-known malicious use of AI today comes in the form of Black Hat GPTs, which emerged as early as mid-2023. These are AI models intentionally built, modified, or used to perform unethical, illegal, or malicious activities, such as generating malicious code, crafting fluent and persuasive phishing emails for both mass and targeted attacks, creating voice and video deepfakes, and even supporting Red Team operations.

    Black Hat GPTs can be private or semi-private AI models. Known examples include WormGPT, DarkBard, FraudGPT, and Xanthorox, all designed or adapted to support cybercrime, fraud, and malicious automation.

    Aside from these typical dark uses of AI, Lozhkin revealed that Kaspersky experts are now observing a darker trend: nation-state actors leveraging LLMs in their campaigns.

    “OpenAI recently revealed it has disrupted over 20 covert influence and cyber operations attempting to misuse its AI tools. We can expect threat actors to create more clever ways of weaponizing generative AI operating in both public and private threat ecosystems. We should brace for it,” he explains.

    OpenAI’s report revealed that malicious actors have used LLMs to craft convincing fake personas, respond in real time to targets, and produce multilingual content designed to deceive victims and bypass traditional security filters.

    “AI doesn’t inherently know right from wrong; it’s a tool that follows prompts. Even when safeguards are in place, we know APTs are persistent attackers. As Dark AI tools become more accessible and capable, it’s crucial for organizations and individuals in Asia Pacific to strengthen cybersecurity hygiene, invest in threat detection powered by AI itself, and stay educated on how these technologies can be exploited,” Lozhkin adds.

    To help organizations defend themselves against Dark AI and AI-enabled cyber threats, Kaspersky experts suggest strengthening cybersecurity hygiene, investing in threat detection powered by AI itself, and staying educated on how these technologies can be exploited.

    To stay updated on the latest threats involving Dark AI, visit https://www.kaspersky.com/
