Cybercriminals lure LLMs to the dark side

betanews.com
Published: 4/30/2025

Summary

Cybercriminals are leveraging generative AI and large language models (LLMs) to impersonate individuals and spread disinformation, eroding trust in digital identities. The report highlights four critical areas: AI-driven phishing attacks, data-poisoning campaigns that manipulate LLM training data, AI-generated tooling for building advanced malware, and the weaponization of AI models for fraud and for bypassing safety measures. Cybersecurity experts warn that these threats are evolving rapidly and that security frameworks must be updated to keep pace.