
Thursday Sep 11, 2025
SIG's Rob van der Veer on Why "Starting Small" with AI Security Might Fail
What happens when someone who's been building AI systems for 33 years confronts the security chaos of today's AI boom? Rob van der Veer, Chief AI Officer at Software Improvement Group (SIG), spotlights how organizations are making critical mistakes by starting small with AI security — exactly the opposite of what they should do.
From his early work with law enforcement AI systems to becoming a key architect of ISO 5338 and the OWASP AI Security project, Rob exposes the gap between how AI teams operate and what production systems actually need. His insights on trigger data poisoning attacks and why AI security incidents are harder to detect than traditional breaches offer a sobering reality check for any organization rushing into AI adoption.
The counterintuitive solution? Building comprehensive AI threat assessment frameworks that map the full attack surface before focused implementation. While most organizations instinctively try to minimize complexity by starting small, Rob argues this approach creates dangerous blind spots that leave critical vulnerabilities unaddressed until it's too late.
Topics discussed:
- Building comprehensive AI threat assessment frameworks that map the full attack surface before focused implementation, avoiding the dangerous "start small" security approach.
- Implementing trigger data poisoning attack detection systems that identify backdoor behaviors embedded in training data (a minimal probing sketch follows this list).
- Addressing the AI team engineering gap through software development lifecycle integration, requiring architecture documentation and automated testing before production deployment.
- Adopting ISO 5338 AI lifecycle framework as an extension of existing software processes rather than creating isolated AI development workflows.
- Establishing supply chain security controls for third-party AI models and datasets, including provenance verification and integrity validation of external components.
- Configuring cloud AI service hardening through security-first provider evaluation, proper licensing selection, and rate limiting implementation for attack prevention.
- Creating AI governance structures that enable innovation through clear boundaries rather than restrictive bureaucracy.
- Developing organizational AI literacy programs tailored to specific business contexts, regulatory requirements, and risk profiles for comprehensive readiness assessment.
- Managing AI development environment security with production-grade controls, since AI development environments hold real training data rather than the synthetic data typical of traditional software development.
- Building "I don't know" culture in AI expertise to combat dangerous false confidence and encourage systematic knowledge-seeking over fabricated answers.
Key Takeaways:
- Don't start small with AI security scope — map the full threat landscape for your specific context, then focus implementation efforts strategically.
- Use systematic threat modeling to identify AI-specific attack vectors like input manipulation, model theft, and training data reconstruction.
- Create processes to verify the provenance and integrity of third-party models and datasets (a minimal integrity-check sketch follows these takeaways).
- Require architecture documentation, automated testing, and code review processes before AI systems move from research to production environments.
- Treat AI development environments as critical assets since they contain real training data.
- Review provider terms carefully, implement proper hardening configurations, and use appropriate licensing to mitigate data exposure risks.
- Create clear boundaries and guardrails that actually increase team freedom to experiment rather than creating restrictive bureaucracy.
- Implement ongoing validation that goes beyond standard test sets to detect potential backdoor behaviors embedded in training data.
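
For the supply-chain takeaway above, here is a minimal sketch, assuming third-party model artifacts are distributed as files with published SHA-256 digests. The artifact name and pinned digest are placeholders; a real pipeline would also verify signatures and dataset provenance, not just file hashes.

```python
# Minimal sketch of an integrity check for third-party model artifacts.
# The pinned digests below are hypothetical; in practice they would come from
# the supplier's signed release notes or an internal model registry.
import hashlib
from pathlib import Path

PINNED_SHA256 = {
    # artifact file name -> expected SHA-256 digest (placeholder value)
    "sentiment-model-v3.onnx": "replace-with-published-digest",
}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path) -> bool:
    """True only if the file matches its pinned digest; unknown artifacts are rejected."""
    expected = PINNED_SHA256.get(path.name)
    if expected is None:
        raise ValueError(f"No pinned digest for {path.name}; refusing to load unknown artifacts")
    return sha256_of(path) == expected

# Usage:
# if not verify_artifact(Path("models/sentiment-model-v3.onnx")):
#     raise RuntimeError("Model artifact failed integrity check; do not deploy")
```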