CISOs are finding themselves increasingly involved in AI teams, often leading the cross-functional effort and AI strategy. But there aren't many resources to guide them on what their role should look like or what they should bring to these meetings.
We've pulled together a framework for security leaders to help push AI teams and committees further along in their AI adoption, providing them with the visibility and guardrails they need to succeed. Meet the CLEAR framework.
If security teams want to play a pivotal role in their organization's AI journey, they should adopt the five steps of CLEAR to show immediate value to AI committees and leadership:

- Create an AI asset inventory
- Learn what users are doing
- Enforce your AI policy
- Apply AI use cases
- Reuse existing frameworks
If you’re looking for a solution to help take advantage of GenAI securely, check out Harmonic Security.
Alright, let’s break down the CLEAR framework.
A foundational requirement across regulatory and best-practice frameworks—including the EU AI Act, ISO 42001, and NIST AI RMF—is maintaining an AI asset inventory.
Despite its importance, many organizations still rely on manual, unsustainable methods of tracking the AI tools in use.
Security teams can take six key approaches to improve AI asset visibility:
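One of the quickest wins is mining the logs you already collect (web proxy, secure web gateway, or DNS) for traffic to known GenAI domains. The sketch below is illustrative only: the domain list, log format, and column names are assumptions you would adapt to your own environment.

```python
import csv
from collections import Counter

# Illustrative (not exhaustive) list of GenAI domains to look for in egress logs.
GENAI_DOMAINS = {
    "chatgpt.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "copilot.microsoft.com": "Microsoft Copilot",
    "perplexity.ai": "Perplexity",
}

def discover_ai_usage(proxy_log_path: str) -> Counter:
    """Count requests per (tool, user) pair from a CSV proxy log.

    Assumes the export has 'user' and 'host' columns; adjust to your
    proxy or SWG log format.
    """
    usage = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            for domain, tool in GENAI_DOMAINS.items():
                if host == domain or host.endswith("." + domain):
                    usage[(tool, row.get("user", "unknown"))] += 1
    return usage

if __name__ == "__main__":
    for (tool, user), hits in discover_ai_usage("proxy_log.csv").most_common(20):
        print(f"{tool:20} {user:30} {hits} requests")
```

The output can seed an AI asset inventory and show which teams rely on which tools, without deploying anything new.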
Security teams should proactively identify AI applications that employees are using instead of blocking them outright—users will find workarounds otherwise.
First, by tracking why employees turn to AI tools, security leaders can recommend safer, compliant alternatives that align with organizational policies. This insight is invaluable in AI team discussions.
Second, once you know how employees are using AI, you can deliver better-targeted training. These programs will become increasingly important as the EU AI Act rolls out, since it requires organizations to ensure a sufficient level of AI literacy among their staff:
“Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems…”
Most organizations have implemented AI policies, yet enforcement remains a challenge. Many simply issue the policy and hope employees follow the guidance. That approach avoids friction, but it provides little enforcement or visibility, leaving the organization exposed to security and compliance risks.
Typically, security teams take one of two approaches: block GenAI tools outright, which maximizes control but pushes users toward unsanctioned workarounds, or permit their use with guardrails that monitor activity and enforce policy at the point of use.
Striking the right balance between control and usability is key to successful AI policy enforcement.
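As a lightweight illustration of the guardrail-oriented approach, the sketch below checks a prompt against an approved-tool list and flags obviously sensitive patterns before the prompt leaves the organization. The tool names, patterns, and check_prompt function are hypothetical, and in practice this logic would live in a browser extension, secure web gateway, or DLP control rather than a standalone script.

```python
import re

# Hypothetical policy: GenAI tools the organization has sanctioned.
APPROVED_TOOLS = {"Microsoft Copilot", "ChatGPT Enterprise"}

# Simple patterns for data that should never appear in a prompt (illustrative only).
SENSITIVE_PATTERNS = {
    "payment card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def check_prompt(tool: str, prompt: str) -> list[str]:
    """Return a list of policy violations for a prompt sent to a GenAI tool."""
    violations = []
    if tool not in APPROVED_TOOLS:
        violations.append(f"'{tool}' is not an approved GenAI tool")
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            violations.append(f"prompt appears to contain {label} data")
    return violations

print(check_prompt("ChatGPT Enterprise", "Summarize Q3 results for jane.doe@example.com"))
# -> ['prompt appears to contain email address data']
```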
And if you need help building a GenAI policy, check out our free generator: GenAI Usage Policy Generator.
Most of this discussion is about securing AI, but let’s not forget that the AI team also wants to hear about cool, impactful AI use cases across the business. What better way to show you care about the AI journey than to actually implement them yourself?
AI use cases for security are still in their infancy, but security teams are already seeing benefits in detection and response, DLP, and email security. Documenting these use cases and bringing them to AI team meetings can be powerful, especially when paired with KPIs that demonstrate productivity and efficiency gains.
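If you need a lightweight way to document those use cases, even a simple structured record that pairs each one with the KPIs you plan to report can keep the discussion concrete. The fields and figures below are purely illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SecurityAIUseCase:
    """Illustrative record for documenting a security AI use case and its KPIs."""
    name: str
    domain: str          # e.g. "detection and response", "DLP", "email security"
    description: str
    kpis: dict[str, str] = field(default_factory=dict)

triage = SecurityAIUseCase(
    name="LLM-assisted alert triage",
    domain="detection and response",
    description="Summarize and prioritize SIEM alerts before analyst review.",
    kpis={"mean time to triage": "-35% (hypothetical)", "alerts auto-closed": "12% (hypothetical)"},
)
print(triage)
```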
Instead of reinventing governance structures, security teams can integrate AI oversight into existing frameworks like NIST AI RMF and ISO 42001.
A practical example is NIST CSF 2.0, which now includes the "Govern" function, covering:

- Organizational AI risk management strategies
- Cybersecurity supply chain considerations
- AI-related roles, responsibilities, and policies

Given this expanded scope, NIST CSF 2.0 offers a robust foundation for AI security governance.
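One way to make that reuse concrete is to map each CLEAR step to the NIST CSF 2.0 Govern (GV) categories it supports. The mapping below is an illustrative starting point for discussion, not an official crosswalk; adjust it to whichever framework your organization already uses.

```python
# Illustrative mapping of CLEAR steps to NIST CSF 2.0 Govern (GV) categories.
CLEAR_TO_CSF_GOVERN = {
    "Create an AI asset inventory": ["GV.OC (Organizational Context)"],
    "Learn what users are doing": ["GV.RM (Risk Management Strategy)", "GV.OV (Oversight)"],
    "Enforce your AI policy": ["GV.PO (Policy)", "GV.RR (Roles, Responsibilities, and Authorities)"],
    "Apply AI use cases": ["GV.OV (Oversight)"],
    "Reuse existing frameworks": ["GV.SC (Cybersecurity Supply Chain Risk Management)"],
}

for step, categories in CLEAR_TO_CSF_GOVERN.items():
    print(f"{step}: {', '.join(categories)}")
```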
Security teams have a unique opportunity to take a leading role in AI governance by remembering CLEAR:

- Create an AI asset inventory
- Learn what users are doing
- Enforce your AI policy
- Apply AI use cases
- Reuse existing frameworks
By following these steps, CISOs can demonstrate value to AI teams and play a crucial role in their organization’s AI strategy.
To learn more about overcoming GenAI adoption barriers, check out Harmonic Security.