Janus Associates Blog - Bringing You Cybersecutity Industrty News and Threat Reports

AI-Powered Risks & Pitfalls: Navigating the Cybersecurity Threat Landscape

Written by Janus Associates | Oct 28, 2025 5:03:59 PM

Generative AI is changing the game in cybersecurity for everyone, both those looking to protect systems and those who might exploit vulnerabilities. As we see the rise of deepfakes, automated phishing schemes, and unpredictable AI hallucinations, companies are quickly trying to update their risk management strategies to align with the latest NIST Cybersecurity Framework 2.0 and AI guidelines.  

It’s important to stay ahead of these emerging threats, and that’s where having expert support, like that from JANUS Associates, becomes crucial for your organization. They can help you navigate this complex landscape and ensure you're well-prepared for whatever comes next. 

NIST Cybersecurity Framework 2.0: What’s New for AI? 

NIST CSF 2.0 launched in early 2025 broadens its focus to cover all sectors, introducing an even more crucial 'Govern' function. This aspect emphasizes the importance of enterprise-wide risk management, including setting up policies around AI and ensuring leadership support. Organizations are now urged to regularly evaluate their AI usage, enforce risk policies, and continually monitor outputs to prevent harm and inaccuracies.

 

How Attackers Exploit AI in 2025 

  • Deepfake Scams: Cybercriminals use generative AI to create realistic fake audio, video, and images targeting executives, vendors, and customers. These attacks exploit trust and make social engineering far more effective.  
  • Automated Phishing: Advanced LLMs produce targeted emails, mimic individual style, and even hijack live threads, bypassing legacy detection systems.  
  • Behavior-Evading Malware: AI-driven malware adapts its code on the fly, evading standard tools and detection protocols.  
  • Prompt Injection & Data Poisoning: Attackers input malicious data to compromise AI models, forcing them to leak sensitive information or behave unpredictably.  

How Defenders Respond with AI

  • AI-Driven Threat Detection: Machine learning and anomaly detection now automate log analysis, quickly identifying signs of compromise and suspicious behavior across environments.
  • Automated Incident Response: AI systems contain threats faster than ever by deploying playbooks and isolating compromised endpoints.
  • Secure Code & Model Validation: LLMs assist in writing, patching, and validating software, reducing human error, but require oversight to prevent hallucinations and vulnerabilities.
  • Continuous Monitoring & Governance: NIST mandates feedback loops and human-in-the-loop checks, focusing on persistent oversight, red-teaming, and rapid adaptation to new threats. 

 

The evolving language of AI and cybersecurity can be confusing, especially with rapid advancements and new threats emerging daily. To help you navigate this space with confidence, below we’ve clarified some essential terms. Understanding these concepts isn’t just helpful, it’s necessary for business leaders and IT professionals responsible for managing ongoing operations.

What is Generative AI? 

Generative AI refers to artificial intelligence systems capable of creating new content such as text, code, images, audio, or video, by leveraging large datasets and advanced neural networks. In cybersecurity, these systems create convincing attack tools. Generative AI can also be used to power defense automation. 

What Does LLM Mean? 

LLM stands for Large Language Model, a type of generative AI trained by vast amounts of text to produce contextually relevant and syntactically accurate text. LLMs are foundational for automated phishing, content creation, and log analysis, but must be monitored for “hallucinations” and risk exposure, per NIST guidance. 

What are AI Hallucinations? 

AI “hallucinations” are factual errors or fabricated outputs generated by an AI system, often presented convincingly. NIST’s latest frameworks stress the impossibility of fully eliminating hallucinations and instead focus on their detection, containment, and human oversight, especially when used in critical legal or financial workflows. 

For example, a major U.S. law firm recently apologized in Federal court for AI-generated errors in a bankruptcy filing, where the AI fabricated legal citations, misleading the proceedings. The firm voluntarily offered to pay substantial penalties and promised to adopt stricter AI oversight policies.

 

Misinformation and Corporate Information Leaks via AI 

Generative AI has amplified misinformation through the creation of deepfakes, realistic but false videos and voice recordings that mimic executives or public figures. AI-powered phishing attacks can imitate writing styles, take over email threads, and hijack corporate credentials, leading to highly successful scams that bypass legacy defenses.  

Additionally, LLMs have unintentionally leaked proprietary business information when integrated into workflows without adequate safeguards, resulting in embarrassing and costly reputational damage.  

Automated tools powered by generative AI can scour public forums and internal documents, synthesizing confidential data for malicious use, a risk that increases in the absence of written AI policies.

80% Of Ransomware Attacks Now Use Artificial Intelligence

 

Why Partner with JANUS Associates? 

In business since 1988, JANUS specializes in helping organizations align cybersecurity and AI practices with NIST’s latest frameworks, offering expert risk assessments and implementation support mapped to CSF 2.0 and current privacy standards. Request a Complimentary Cyber Security Consultation with a JANUS Associates expert today and fortify your defenses for the AI-centric future. 

To further support organizations in addressing the cybersecurity challenges and opportunities posed by AI, NIST has initiated the Cyber AI Profile Working Sessions, highlighted in the official overview video and recent workshop summary.

 

Sources/References