Generative AI is changing the game in cybersecurity for defenders and attackers alike. As deepfakes, automated phishing schemes, and unpredictable AI hallucinations proliferate, companies are racing to update their risk management strategies to align with the NIST Cybersecurity Framework (CSF) 2.0 and related AI guidance.
It’s important to stay ahead of these emerging threats, and that’s where having expert support, like that from JANUS Associates, becomes crucial for your organization. They can help you navigate this complex landscape and ensure you're well-prepared for whatever comes next.
NIST CSF 2.0, released in early 2024, broadens the framework’s scope to cover organizations of all sectors and sizes and introduces a new ‘Govern’ function. This function emphasizes enterprise-wide risk management, including establishing policies around AI and securing leadership support. Organizations are now urged to regularly evaluate their AI usage, enforce risk policies, and continually monitor outputs to prevent harm and inaccuracies.
The evolving language of AI and cybersecurity can be confusing, especially with rapid advancements and new threats emerging daily. To help you navigate this space with confidence, below we’ve clarified some essential terms. Understanding these concepts isn’t just helpful, it’s necessary for business leaders and IT professionals responsible for managing ongoing operations.
Generative AI refers to artificial intelligence systems capable of creating new content, such as text, code, images, audio, or video, by leveraging large datasets and advanced neural networks. In cybersecurity, these systems can be used to create convincing attack tools or to power defensive automation.
LLM stands for Large Language Model, a type of generative AI trained on vast amounts of text to produce contextually relevant, syntactically accurate output. LLMs are foundational for automated phishing, content creation, and log analysis, but must be monitored for “hallucinations” and risk exposure, per NIST guidance.
AI “hallucinations” are factual errors or fabricated outputs generated by an AI system, often presented convincingly. NIST’s latest frameworks acknowledge that hallucinations cannot be fully eliminated and instead emphasize detection, containment, and human oversight, especially when AI is used in critical legal or financial workflows.
For example, a major U.S. law firm recently apologized in federal court for AI-generated errors in a bankruptcy filing, where the AI fabricated legal citations, misleading the proceedings. The firm voluntarily offered to pay substantial penalties and promised to adopt stricter AI oversight policies.
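The detection-and-human-oversight approach NIST emphasizes can be as simple as refusing to trust AI output that cannot be independently verified. The sketch below is purely illustrative (the citation pattern and the verified-citation list are invented for demonstration, not drawn from any real database): it flags AI-generated legal citations that cannot be matched against a trusted source, routing them to a human reviewer instead of straight into a filing.

```python
import re

# Hypothetical stand-in for a firm's trusted citation database.
VERIFIED_CITATIONS = {
    "Smith v. Jones, 789 F.3d 101",
}

# Simplified pattern for "Party v. Party, Volume Reporter Page" citations;
# real citation formats are far more varied than this.
CITATION_PATTERN = re.compile(r"\b[A-Z]\w+ v\. [A-Z]\w+, \d+ [A-Za-z.\d]+ \d+")

def triage_citations(ai_output: str):
    """Split citations found in AI output into verified and needs-human-review."""
    found = CITATION_PATTERN.findall(ai_output)
    verified = [c for c in found if c in VERIFIED_CITATIONS]
    needs_review = [c for c in found if c not in VERIFIED_CITATIONS]
    return verified, needs_review

draft = "As held in Smith v. Jones, 789 F.3d 101, and Doe v. Roe, 555 F.2d 42, ..."
ok, review = triage_citations(draft)
# "Smith v. Jones" matches the trusted list; "Doe v. Roe" is held for review.
```

The point is not the pattern matching itself but the workflow: anything the system cannot verify goes to a person before it carries legal or financial weight.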
Generative AI has amplified misinformation through the creation of deepfakes: realistic but fabricated videos and voice recordings that mimic executives or public figures. AI-powered phishing attacks can imitate writing styles, take over email threads, and hijack corporate credentials, leading to highly successful scams that bypass legacy defenses.
Additionally, LLMs have unintentionally leaked proprietary business information when integrated into workflows without adequate safeguards, resulting in embarrassing and costly reputational damage.
Automated tools powered by generative AI can scour public forums and internal documents, synthesizing confidential data for malicious use, a risk that increases in the absence of written AI policies.
In business since 1988, JANUS specializes in helping organizations align cybersecurity and AI practices with NIST’s latest frameworks, offering expert risk assessments and implementation support mapped to CSF 2.0 and current privacy standards. Request a Complimentary Cyber Security Consultation with a JANUS Associates expert today and fortify your defenses for the AI-centric future.
To further support organizations in addressing the cybersecurity challenges and opportunities posed by AI, NIST has initiated the Cyber AI Profile Working Sessions, highlighted in the official overview video and recent workshop summary.