
Unspoken Security’s AJ Nash on Protecting Against AI Model Poisoning
In our latest episode of The Future of Threat Intelligence, recorded at RSA Conference 2025, AJ Nash, Founder & CEO, Unspoken Security, provides a sobering assessment of AI's transformation of cybersecurity. Rather than focusing solely on hype, AJ examines the double-edged nature of AI adoption: how it simultaneously empowers defenders while dramatically lowering barriers to entry for sophisticated attacks. His warnings about entering a "post-knowledge world" where humans lose critical skills and adversaries can poison trusted AI systems offer a compelling counterbalance to the technology's promise.
AJ draws parallels to previous technology trends like blockchain that experienced similar hype cycles before stabilizing, but notes that AI's accessibility and widespread applicability make it more likely to have lasting impact. He predicts that the next frontier in security will be AI integrity verification — building systems and organizations dedicated to ensuring that the AI models we increasingly depend on remain trustworthy and resistant to manipulation. Throughout the conversation, AJ emphasizes that while AI will continue to evolve and integrate into our security operations, maintaining human oversight and preserving our knowledge base remains essential.
Topics discussed:
- The evolution of the RSA Conference and how industry focus has shifted through cycles from endpoints to threat intelligence to blockchain and now to AI, with a particularly strong emphasis on agentic AI.
- The double-edged impact of AI on workforce dynamics, balancing the potential for enhanced productivity against concerns that companies may prioritize cost-cutting by replacing junior positions, potentially eliminating career development pipelines.
- The risk of AI-washing, much as "intelligence" became a diluted buzzword: companies claim AI capabilities without substantive implementation, which makes it necessary to verify, and even challenge, what vendors' technologies actually do.
- The emergence of a potential "post-knowledge world" where overreliance on AI systems for summarization and information processing erodes human knowledge of nuance and detail.
- The critical need for AI integrity verification systems as adversaries shift focus to poisoning models that organizations increasingly depend on, creating new attack surfaces that require specialized oversight.
- Challenges to intellectual property protection as AI systems scrape and incorporate existing content, raising questions about copyright enforcement and ownership in an era where AI-generated work is derivative by nature.
- The importance of maintaining human oversight in AI-driven security systems through transparent automation workflows, comprehensive understanding of decision points, and regular verification of system outputs.
- The parallels between previous technology hype cycles like blockchain and current AI enthusiasm, with the distinction that AI's accessibility and practical applications make it more likely to persist as a transformative technology.
Key Takeaways:
- Challenge AI vendors to demonstrate their systems transparently by requesting detailed workflow explanations and documentation rather than accepting marketing claims at face value.
- Implement a "trust but verify" approach to AI systems by establishing human verification checkpoints within automated security workflows to prevent over-reliance on potentially flawed automation.
- Upskill your technical teams in AI fundamentals to maintain critical thinking abilities that help them understand the limitations and potential vulnerabilities of automated systems.
- Develop comprehensive AI governance frameworks that address potential model poisoning attacks by establishing regular oversight and integrity verification mechanisms.
- Establish cross-organizational collaborations with industry partners to create trusted AI verification authorities that can audit and certify model integrity across the security ecosystem.
- Document all automation workflows thoroughly by mapping decision points, data sources, and potential failure modes to maintain visibility into AI-driven security processes.
- Prioritize retention of junior security positions to preserve talent development pipelines despite the temptation to replace entry-level roles with AI automation.
- Conduct regular sampling and testing of AI system outputs to verify accuracy and detect potential manipulation or degradation of model performance over time.
- Balance innovation with security controls by evaluating new AI technologies for both their benefits and their potential to create new attack surfaces before deployment.
- Incorporate geopolitical and broader contextual awareness into threat intelligence practices to identify potential connections between world events and emerging cyber threats that AI alone might miss.
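Several of the takeaways above, notably the "trust but verify" checkpoints and the regular sampling of AI outputs, can be combined into a simple triage policy. The sketch below is purely illustrative (the `severity` field, threshold, and function names are assumptions, not anything AJ describes): critical AI-generated findings always go to a human analyst, and a random fraction of the rest is spot-checked to catch model drift or manipulation over time.

```python
import random

def requires_human_review(finding, sample_rate=0.1):
    """Decide whether an AI-generated finding goes to a human analyst.

    High-severity findings are never fully automated; the remainder are
    randomly sampled at `sample_rate` to detect degradation or poisoning
    of the upstream model over time.
    """
    if finding["severity"] >= 8:  # assumed 0-10 scale; 8+ = critical
        return True
    return random.random() < sample_rate

def triage(findings, sample_rate=0.1):
    """Split AI triage output into auto-approved and human-review queues."""
    auto, review = [], []
    for f in findings:
        (review if requires_human_review(f, sample_rate) else auto).append(f)
    return auto, review
```

The design choice here mirrors the episode's advice: automation handles volume, but every decision point is visible and a documented fraction of outputs is always verified by a person.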