Future of Threat Intelligence
Welcome to the Future of Threat Intelligence podcast, where we explore the transformative shift from reactive detection to proactive threat management. Join us as we engage with top cybersecurity leaders and practitioners, uncovering strategies that empower organizations to anticipate and neutralize threats before they strike. Each episode is packed with actionable insights, helping you stay ahead of the curve and prepare for the trends and technologies shaping the future.
Episodes

7 days ago
The security industry's obsession with cutting-edge threats often overshadows a more pressing reality: the vast majority of organizations are still mastering basic AI implementation. Vivek Menon, CISO & Head of Data at Digital Turbine, brings his insights from the RSA expo floor to share why the agentic AI security rush may be premature, while highlighting the genuine opportunities AI presents for resource-constrained security teams.
Vivek shares with David how smaller organizations can leverage AI automation to achieve enterprise-level security capabilities without corresponding budget increases. His balanced approach to AI security threats demonstrates why defenders maintain strategic advantages over attackers, despite the expanded attack surface that dominates industry discussions.
Topics discussed:
Why the agentic AI security market represents a classic cart-before-the-horse scenario, with vendors solving problems for the 1% of enterprises building agents while 99% are still evaluating basic AI adoption.
How the rush toward AI agents is forcing long-overdue conversations about non-human identity management, an area where implementation still lags in both pace and scale.
The strategic advantage defenders maintain in AI-powered security conflicts, leveraging time-based preparation capabilities while attackers face immediate success requirements with limited development windows.
The dual nature of AI security impact, balancing genuine attack surface expansion against significantly enhanced defensive capabilities.
Distinguishing between legitimate security innovation and buzzword-driven marketing, focusing on practical implementation readiness over theoretical capability demonstrations.
How programmatic advertising technology companies navigate unique security challenges while maintaining operational efficiency in highly automated, data-driven business environments.
Key Takeaways:
Evaluate vendor AI solutions by asking what percentage of your industry actually uses the underlying technology before investing in security tools for emerging threats.
Prioritize non-human identity management initiatives now, as the shift toward AI agents will expose existing gaps in identity governance at scale.
Leverage AI automation to achieve enterprise-level security capabilities without proportional budget increases, especially for resource-constrained organizations.
Adopt AI as a defensive accelerator rather than viewing it primarily as an attack surface expansion problem.
Invest time in comprehensive threat protection strategies, capitalizing on defenders' advantage over attackers who must succeed immediately.
Assess your organization's AI maturity before implementing agentic AI security solutions, ensuring you're solving actual rather than theoretical problems.
Focus security budgets on mainstream technology threats affecting 99% of enterprises rather than cutting-edge solutions for the 1%.
Listen to more episodes:
Apple
Spotify
YouTube
Website

Thursday Jun 12, 2025
Most organizations approach ransomware as a technical problem, but Steve Baer, Field CISO at Digital Asset Redemption, has built his career understanding it as fundamentally human. His team's approach highlights why traditional cybersecurity tools fall short against motivated human adversaries and how proactive intelligence gathering can prevent incidents before they occur.
Steve's insights from the ransomware negotiation business challenge conventional wisdom about cyber extortion. Professional negotiators consistently achieve 73-75% reductions in ransom demands through skilled human interaction, while many victims discover their "stolen" data is actually worthless historical information that adversaries misrepresent as current breaches. Digital Asset Redemption's unique position allows them to purchase stolen organizational data on dark markets before public disclosure, effectively preventing incidents rather than merely responding to them.
Topics discussed:
Building human intelligence networks with speakers of different languages who maintain authentic personas and relationships within dark web adversarial communities.
Professional ransomware negotiation techniques that achieve consistent 73-75% reductions in extortion demands through skilled human interaction rather than automated responses.
The reality that fewer than half of ransomware victims actually need to pay, as many attacks involve worthless historical data misrepresented as current breaches.
Proactive data acquisition strategies that purchase stolen organizational information on dark markets before public disclosure to prevent incident escalation.
Why AI serves as a useful tool for maintaining context and personas but cannot replace human intelligence when countering human adversaries.
Key Takeaways:
Investigate data value before paying ransoms — many attacks involve worthless historical information that adversaries misrepresent as current breaches.
Engage professional negotiators rather than attempting DIY ransomware negotiations, as specialized expertise consistently achieves 73-75% reductions in demands.
Build relationships within the cybersecurity community since the industry remains small and professionals freely share valuable threat intelligence.
Deploy human intelligence networks with diverse language capabilities to gather authentic threat intelligence from adversarial communities.
Assess AI implementation as a useful tool for maintaining context and personas while recognizing human adversaries require human intelligence to counter.

Tuesday Jun 10, 2025
The cybersecurity industry has talked extensively about burnout, but Mark Alba, Managing Director of Cybermindz, is taking an unprecedented scientific approach to both measuring and treating it. In this special RSA episode, Mark tells David how his team applies military-grade psychological protocols originally developed for PTSD treatment to address the mental health crisis in security operations centers. Rather than relying on anecdotal evidence of team fatigue, they deploy clinical psychologists to measure resilience through validated psychological assessments and deliver interventions that can literally change how analysts' brains process stress.
Mark walks through their use of the iRest Protocol, a 20-year-old treatment methodology from Walter Reed Hospital that shifts brain activity from amygdala-based fight-or-flight responses to prefrontal cortex logical thinking. Their team of five PhDs works directly within enterprise SOCs to establish baseline psychological metrics and track improvement over time, giving security leaders unprecedented visibility into their team's actual capacity to handle high-stress incident response.
Topics discussed:
Clinical measurement of cybersecurity burnout through validated psychological assessments, including the Maslach Burnout Inventory, sleep indices, and psychological capital evaluations.
Implementation of the iRest Protocol, a military-developed meditative technique used at Walter Reed Hospital for PTSD treatment.
Real-time resilience scoring through the Cybermindz Resilience Index that combines sleep quality, psychological capital, burnout indicators, and stress response metrics.
Research methodology to establish causation versus correlation between psychological state and SOC performance metrics like mean time to respond and incident response rates.
Neuroscience of cybersecurity roles, including how threat intelligence analysts perform optimally at alpha brain wave levels while incident responders need beta wave states.
Strategic staff rotation based on psychological state rather than just skillset, moving analysts between different cognitive roles to optimize both performance and mental health.
Key Takeaways:
Implement clinical burnout measurement using validated tools like the Maslach Burnout Inventory, sleep indices, and psychological capital assessments rather than relying on subjective burnout indicators in your SOC operations.
Deploy psychometric testing within security operations centers to establish baseline resilience metrics before incidents occur, enabling proactive team management strategies.
Establish brainwave optimization protocols by moving threat intelligence analysts to alpha wave states for creative pattern recognition and incident responders to beta wave states for rapid decision-making.
Correlate psychological metrics with traditional SOC performance indicators like mean time to respond and incident response rates to identify causation patterns.
Rotate staff assignments based on real-time psychological capacity assessments rather than just technical skills, optimizing both performance and mental health outcomes.
Measure psychological capital within your security team to understand cognitive capacity for handling high-stress cyber incidents and threat analysis workloads.
Establish post-incident psychological protocols using clinical psychology techniques to prevent long-term burnout and retention issues following major security breaches.
Create predictive analytics models that combine resilience scoring with operational metrics to forecast SOC team performance and proactively address capacity issues.

Thursday Jun 05, 2025
The cybersecurity industry has witnessed numerous technology waves, but AI's integration at RSA 2025 signals something different from past hype cycles. Howard Holton, Chief Technology Officer at GigaOm, observed AI adoption across virtually every vendor booth, yet argues this represents genuine transformation rather than superficial marketing. His analyst perspective, backed by GigaOm's practitioner-focused research approach, reveals why AI will become the foundational operating system of security work rather than just another tool in an already crowded stack.
Howard's insights challenge conventional thinking about human-machine collaboration in security operations. He explains how natural language understanding finally bridges the gap between human instruction variability and machine execution consistency, solving a problem that has limited automation effectiveness for decades. Howard also explores practical applications where AI handles repetitive security tasks that exhaust human analysts, while humans focus on curiosity-driven investigation and strategic analysis that machines cannot replicate.
Topics discussed:
The fundamental differences between AI's practical applicability and blockchain's limited use cases, despite similar initial hype cycles and market positioning across cybersecurity vendors.
How natural language understanding creates breakthrough human-machine collaboration by allowing AI systems to execute consistent tasks regardless of instruction variability from different analysts.
The biological metaphor for human versus machine intelligence, where humans operate as "chaos machines" with independent processes driven by curiosity rather than single-objective optimization.
GigaOm's practitioner-focused approach to security maturity modeling that measures actual organizational capability rather than vendor feature adoption or platform configuration levels.
Why AI will become the operating system of security work, following the evolution from Microsoft Office to SaaS as foundational business operation layers.
The strategic advantage of AI handling hyper-repetitive security processes that traditionally exhaust human analysts, freeing people to focus on curiosity-driven investigation.
How enterprise security teams can identify the optimal intersection between AI's computational strengths and human analytical capabilities within their specific organizational contexts and threat landscapes.
Key Takeaways:
Evaluate your security maturity models to ensure they measure organizational capability and adaptability rather than vendor feature adoption or platform configuration levels.
Identify repetitive security processes that exhaust human analysts and prioritize these for AI automation while preserving human focus for curiosity-driven investigation.
Leverage natural language understanding in AI tools to standardize security process execution despite instruction variability from different team members.
Audit your current technology stack to distinguish between genuinely applicable AI solutions and superficial AI marketing similar to the blockchain hype cycle.
Create practitioner-focused assessment criteria when evaluating security vendors to ensure solutions address real-world enterprise implementation challenges.
Develop language-agnostic security procedures that AI systems can interpret consistently regardless of how different analysts explain the same operational requirements.

Tuesday Jun 03, 2025
The cybersecurity industry has long operated on fear-based selling and vendor promises that rarely align with practical implementation needs. Jeff Man, Sr. Information Security Evangelist at Online Business Systems, brings a pragmatic perspective after years of navigating compliance requirements and advising organizations from Fortune 100 enterprises to small e-commerce operators. His cautious optimism about the industry's current trajectory stems from witnessing a fundamental shift in how vendors understand and communicate compliance requirements, particularly around PCI DSS 4.0's recent implementation.
Jeff's extensive conference speaking experience and hands-on consulting work reveal critical disconnects between security marketing rhetoric and operational reality. His observation that security presentation slides from 1998 remain almost entirely relevant today underscores both the persistence of fundamental security challenges and the industry's slow evolution beyond superficial solutions toward meaningful risk management frameworks.
Topics discussed:
The transformation of vendor compliance conversations from generic marketing responses to specific requirement understanding, particularly around PCI DSS 4.0 implementation strategies.
Why speaking "compliance language" with clients proves more effective than traditional security-focused approaches, as organizations prioritize mandatory requirements over theoretical security improvements.
The reality that 99% of companies operate with small-business security constraints, contrary to commonly cited SMB statistics, creating massive gaps between available solutions and actual organizational needs.
Risk prioritization methodologies that focus security investments on the 3% of CVEs actively exploited by attackers rather than attempting to address overwhelming vulnerability backlogs.
The evolution from fear-uncertainty-doubt selling tactics toward informed decision-making frameworks that help organizations understand exactly what security technologies deliver versus marketing promises.
How independent advisory perspectives enable better technology purchasing decisions by providing objective analysis separate from vendor sales motivations and product-specific solutions.
The convergence of threat detection, vulnerability prioritization, and compliance requirements into cohesive risk management strategies that align with business operational realities rather than security team preferences.
Key Takeaways:
Prioritize vendors who demonstrate specific compliance requirement knowledge rather than offering generic "we do compliance" responses, particularly for PCI DSS 4.0 implementation.
Frame security discussions using compliance language with business stakeholders, as regulatory requirements drive action more effectively than theoretical security benefits.
Focus vulnerability management efforts on the approximately 3% of CVEs that attackers actively exploit rather than attempting to address entire vulnerability backlogs.
Recognize that 99% of organizations operate with small business security constraints and require solutions scaled appropriately rather than enterprise-grade implementations.
Seek independent security advisory perspectives separate from vendor sales processes to make informed technology purchasing decisions based on actual needs versus marketing promises.
Evaluate security investments through risk prioritization frameworks that align with business operations rather than pursuing comprehensive security controls beyond organizational capabilities.
Leverage the convergence of compliance requirements, threat intelligence, and vulnerability management to create cohesive risk management strategies rather than implementing disparate security tools.
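The "focus on the ~3% of actively exploited CVEs" takeaway can be sketched in a few lines: filter and rank a vulnerability backlog against a known-exploited list such as CISA's Known Exploited Vulnerabilities (KEV) catalog. This is an illustrative sketch, not Jeff's methodology; the CVE IDs, asset names, and scores below are made-up placeholders, and in practice the exploited set would come from the live KEV feed or a threat intelligence provider.

```python
# Sketch: rank a vulnerability backlog so actively exploited CVEs surface first.
# All CVE IDs, assets, and scores here are illustrative placeholders.

known_exploited = {"CVE-2024-0001", "CVE-2023-0042"}  # would come from a KEV-style feed

backlog = [
    {"cve": "CVE-2024-0001", "asset": "web-proxy", "cvss": 9.8},
    {"cve": "CVE-2022-9999", "asset": "print-server", "cvss": 7.5},
    {"cve": "CVE-2023-0042", "asset": "laptop-fleet", "cvss": 8.8},
]

def prioritize(findings, exploited):
    """Actively exploited CVEs first, then by CVSS score descending."""
    return sorted(findings, key=lambda f: (f["cve"] not in exploited, -f["cvss"]))

for f in prioritize(backlog, known_exploited):
    tier = "FIX NOW" if f["cve"] in known_exploited else "backlog"
    print(f"{tier:8} {f['cve']} on {f['asset']} (CVSS {f['cvss']})")
```

The point of the sort key is that exploitation status dominates raw severity: a CVSS 7.5 finding on the exploited list would outrank a CVSS 9.8 finding that no one is attacking.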

Thursday May 29, 2025
The criminal underground is experiencing its own version of startup disruption, with massive ransomware-as-a-service operations fragmenting into smaller, more agile groups that operate like independent businesses. John Fokker, Head of Threat Intelligence at Trellix, brings unique insights from monitoring hundreds of millions of global sensors, revealing how defenders' success in EDR detection is paradoxically driving criminals toward more profitable attack models. His team's systematic tracking of AI adoption in criminal networks provides a fascinating parallel to legitimate business transformation, showing how threat actors are methodically testing and scaling new technologies just like any other industry.
Drawing from Trellix's latest Global Threat Report, John tells David why the headlines focus on major enterprise breaches while the real action happens in the profitable mid-market, where companies have extractable revenue but often lack enterprise-level security budgets. This conversation offers rare visibility into how macro trends like AI adoption and improved defensive capabilities are reshaping criminal business models in real-time.
Topics discussed:
The systematic fragmentation of large ransomware-as-a-service operations into independent criminal enterprises, each focusing on specialized capabilities rather than maintaining complex hierarchical structures.
How improved EDR detection capabilities are driving a strategic shift from encryption-based ransomware attacks toward data exfiltration and extortion as a more reliable revenue model.
The economic targeting patterns that focus on profitable mid-market companies with decent revenue streams but potentially limited security budgets, rather than the headline-grabbing major enterprise victims.
Criminal adoption patterns of AI technologies that mirror legitimate business transformation, with systematic testing and gradual scaling as capabilities prove valuable.
The emergence of EDR evasion tools as a growing criminal service market, driven by the success of endpoint detection and response technologies in preventing traditional attacks.
Why building trust in autonomous security systems faces similar challenges to autonomous vehicles, requiring proven track records and reduced false positives before organizations will release human oversight.
The strategic use of global sensor networks combined with public intelligence to map evolving attack patterns and identify blind spots in organizational threat detection capabilities.
How entropy-based detection methods at the file and block level can identify encryption activities that indicate potential ransomware attacks in progress.
The evolution from structured criminal hierarchies with complete in-house kill chains to distributed networks of specialized service providers and independent operators.
Key Takeaways:
Monitor entropy changes in files and block-level data compression rates as early indicators of ransomware encryption activities before full system compromise occurs.
Prioritize EDR and XDR deployment investments to force threat actors away from encryption-based attacks toward less reliable data exfiltration methods.
Focus threat intelligence gathering on fragmented criminal groups rather than solely tracking large ransomware-as-a-service operations that are splintering into independent cells.
Implement graduated trust models for AI-powered security automation, starting with low-risk tasks and expanding autonomy as false positive rates decrease over time.
Combine internal sensor data with public threat intelligence reports to identify blind spots and validate detection capabilities across multiple threat vectors.
Develop specialized defense strategies for mid-market organizations that balance cost-effectiveness with protection against targeted criminal business models.
Track AI adoption patterns in criminal networks using the same systematic approach businesses use for technology transformation initiatives.
Build detection capabilities that identify lateral movement and privilege escalation activities that indicate advanced persistent threat presence in network environments.
Establish incident response procedures that account for data exfiltration and extortion scenarios, not just traditional encryption-based ransomware attacks.
Create threat hunting programs that specifically target EDR evasion tools and techniques as criminals increasingly invest in bypassing endpoint detection technologies.
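The entropy-based detection idea John describes can be illustrated with a short sketch: compute Shannon entropy over fixed-size blocks and flag blocks that look encrypted. This is a minimal toy illustration of the concept, not Trellix's implementation; the 7.5-bit threshold and 4 KiB block size are assumptions chosen for demonstration.

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in Counter(data).values())

def flag_suspicious_blocks(data: bytes, block_size: int = 4096, threshold: float = 7.5):
    """Yield (offset, entropy) for blocks that look encrypted or compressed.

    Plain text typically measures around 4-5 bits per byte, while ciphertext
    approaches the 8-bit maximum. The threshold and block size here are
    illustrative choices, not tuned production values.
    """
    for offset in range(0, len(data), block_size):
        entropy = shannon_entropy(data[offset:offset + block_size])
        if entropy >= threshold:
            yield offset, entropy

# Plain text stays well under the threshold; random bytes (a stand-in for
# ciphertext appearing mid-encryption) exceed it.
print(list(flag_suspicious_blocks(b"hello world" * 400)))   # []
print(len(list(flag_suspicious_blocks(os.urandom(8192)))))  # 2 flagged blocks
```

A real deployment would watch entropy *changes* over time on files and disk blocks, since a sudden jump from text-like to ciphertext-like entropy is the signal that encryption is in progress.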

Tuesday May 27, 2025
In this special RSA episode of Future of Threat Intelligence, Martin Naydenov, Industry Principal of Cybersecurity at Frost & Sullivan, offers a sobering perspective on the disconnect between AI marketing and implementation. While the expo floor buzzes with "AI-enabled" security solutions, Martin cautions that many security teams remain reluctant to use these features in their daily operations due to fundamental trust issues. This trust gap becomes particularly concerning when contrasted with how rapidly threat actors have embraced AI to scale their attacks.
Martin walks David through the current state of AI in cybersecurity, from the vendor marketing rush to the practical challenges of implementation. As an analyst who regularly uses AI tools, he provides a balanced view of their capabilities and limitations, emphasizing the need for critical evaluation rather than blind trust. He also demonstrates how easily AI can be leveraged for malicious purposes, creating a pressing need for security teams to overcome their hesitation and develop effective counter-strategies.
Topics discussed:
The disconnect between AI marketing hype at RSA and the practical implementation challenges facing security teams in real-world environments.
Why security professionals remain hesitant to trust AI features in their tools, despite vendors rapidly incorporating them into security solutions.
The critical need for vendors to not just develop AI capabilities but to build trust frameworks that convince security teams their AI can be relied upon.
How AI is dramatically lowering the barrier to entry for threat actors by enabling non-technical individuals to create convincing phishing campaigns and malicious scripts.
The evolution of phishing from obvious "Nigerian prince" scams with typos to contextually accurate, perfectly crafted messages that can fool even security-aware users.
The disproportionate adoption rates between defensive and offensive AI applications, creating a potential advantage for attackers.
How security analysts are currently using AI as assistance tools while maintaining critical oversight of the information they provide.
The emerging capability for threat actors to build complete personas using AI-generated content, deepfakes, and social media scraping for highly targeted attacks.
Key Takeaways:
Implement verification protocols for AI-generated security insights to balance automation benefits with necessary human oversight in your security operations.
Establish clear trust boundaries for AI tools by understanding their data sources, decision points, and potential limitations before deploying them in critical security workflows.
Develop AI literacy training for security teams to help analysts distinguish between reliable AI outputs and potential hallucinations or inaccuracies.
Evaluate your current security stack for unused AI features and determine whether trust issues or training gaps are preventing their adoption.
Create AI-resistant authentication protocols that can withstand the sophisticated phishing attempts now possible with language models and deepfake technology.
Monitor adversarial AI capabilities by testing your own defenses against AI-generated attack scenarios to identify potential vulnerabilities.
Integrate AI tools gradually into security operations, starting with low-risk use cases to build team confidence and establish trust verification processes.
Prioritize vendor solutions that provide transparency into their AI models' decision-making processes rather than black-box implementations.
Establish metrics to quantify AI effectiveness in your security operations, measuring both performance improvements and false positive/negative rates.
Design security awareness training that specifically addresses AI-enhanced social engineering techniques targeting your organization.

Thursday May 22, 2025
In our latest episode of The Future of Threat Intelligence, recorded at RSA Conference 2025, AJ Nash, Founder & CEO, Unspoken Security, provides a sobering assessment of AI's transformation of cybersecurity. Rather than focusing solely on hype, AJ examines the double-edged nature of AI adoption: how it simultaneously empowers defenders while dramatically lowering barriers to entry for sophisticated attacks. His warnings about entering a "post-knowledge world" where humans lose critical skills and adversaries can poison trusted AI systems offer a compelling counterbalance to the technology's promise.
AJ draws parallels to previous technology trends like blockchain that experienced similar hype cycles before stabilizing, but notes that AI's accessibility and widespread applicability make it more likely to have lasting impact. He predicts that the next frontier in security will be AI integrity verification — building systems and organizations dedicated to ensuring that the AI models we increasingly depend on remain trustworthy and resistant to manipulation. Throughout the conversation, AJ emphasizes that while AI will continue to evolve and integrate into our security operations, maintaining human oversight and preserving our knowledge base remains essential.
Topics discussed:
The evolution of the RSA Conference and how industry focus has shifted through cycles from endpoints to threat intelligence to blockchain and now to AI, with a particularly strong emphasis on agentic AI.
The double-edged impact of AI on workforce dynamics, balancing the potential for enhanced productivity against concerns that companies may prioritize cost-cutting by replacing junior positions, potentially eliminating career development pipelines.
The risk of AI-washing similar to how "intelligence" became a diluted buzzword, with companies claiming AI capabilities without substantive implementation, necessitating deeper verification — and even challenging — of vendors' actual technologies.
The emergence of a potential "post-knowledge world" where overreliance on AI systems for summarization and information processing erodes human knowledge of nuance and detail.
The critical need for AI integrity verification systems as adversaries shift focus to poisoning models that organizations increasingly depend on, creating new attack surfaces that require specialized oversight.
Challenges to intellectual property protection as AI systems scrape and incorporate existing content, raising questions about copyright enforcement and ownership in an era where AI-generated work is derivative by nature.
The importance of maintaining human oversight in AI-driven security systems through transparent automation workflows, comprehensive understanding of decision points, and regular verification of system outputs.
The parallels between previous technology hype cycles like blockchain and current AI enthusiasm, with the distinction that AI's accessibility and practical applications make it more likely to persist as a transformative technology.
Key Takeaways:
Challenge AI vendors to demonstrate their systems transparently by requesting detailed workflow explanations and documentation rather than accepting marketing claims at face value.
Implement a "trust but verify" approach to AI systems by establishing human verification checkpoints within automated security workflows to prevent over-reliance on potentially flawed automation.
Upskill your technical teams in AI fundamentals to maintain critical thinking abilities that help them understand the limitations and potential vulnerabilities of automated systems.
Develop comprehensive AI governance frameworks that address potential model poisoning attacks by establishing regular oversight and integrity verification mechanisms.
Establish cross-organizational collaborations with industry partners to create trusted AI verification authorities that can audit and certify model integrity across the security ecosystem.
Document all automation workflows thoroughly by mapping decision points, data sources, and potential failure modes to maintain visibility into AI-driven security processes.
Prioritize retention of junior security positions to preserve talent development pipelines despite the temptation to replace entry-level roles with AI automation.
Conduct regular sampling and testing of AI system outputs to verify accuracy and detect potential manipulation or degradation of model performance over time.
Balance innovation with security controls by evaluating new AI technologies for both their benefits and their potential to create new attack surfaces before deployment.
Incorporate geopolitical and broader contextual awareness into threat intelligence practices to identify potential connections between world events and emerging cyber threats that AI alone might miss.

Tuesday May 20, 2025
In this special RSA 2025 episode of The Future of Threat Intelligence, David speaks with Jawahar Sivasankaran, President of Cyware, about their partnership with Team Cymru to democratize threat intelligence. Jawahar outlines how their "CTI program in a box" approach enables organizations to implement comprehensive threat intelligence capabilities in weeks rather than months.
Jawahar offers a unique perspective on industry progress and remaining challenges in collaborative defense. This conversation explores the practical realities of operationalizing threat intelligence for organizations beyond the most mature security teams, the current implementation of AI in security operations, and a thoughtful assessment of how automation will reshape security careers without eliminating the need for human expertise.
Topics discussed:
How Cyware's partnership with Team Cymru creates turnkey threat intelligence solutions with pre-configured use cases and clear outcomes for rapid implementation.
The critical gap in threat intelligence sharing between private and public sectors despite overall industry progress in security capabilities.
Cyware's work with ISACs to facilitate bi-directional threat intelligence sharing that benefits organizations at varying maturity levels.
Current implementation of AI through Cyware's Quarterback module, featuring knowledge bots and NLP capabilities that exist today rather than as future aspirations.
Multi-agent AI approach to threat-centric automation that focuses on enriching and correlating intelligence for actionable outcomes.
Historical perspective on industry disruption and how AI will transform security careers by automating basic tasks while creating new opportunities in design, architecture, and human-machine collaboration.
The evolution of security solutions over two decades of RSA conferences and whether the industry is making meaningful progress against adversaries.
Practical strategies for implementing comprehensive threat intelligence programs without months of planning and configuration.
Key Takeaways:
Implement a "CTI program in a box" approach to accelerate threat intelligence adoption, reducing deployment time from months to weeks through pre-configured use cases with clear, measurable outcomes.
Establish bi-directional threat intelligence sharing between private and public sectors to strengthen collective defense capabilities against emerging adversary tactics and behaviors.
Leverage partnerships with ISACs to gain access to curated threat intelligence that has been validated and contextualized for your specific industry vertical.
Deploy AI-powered knowledge bots with NLP capabilities to help your security team more efficiently process and act on threat intelligence data without requiring extensive expertise.
Adopt a multi-agent AI approach for security operations that enriches threat intelligence, correlates information across sources, and recommends specific defensive actions.
Evaluate your organization's cyber threat intelligence maturity honestly, recognizing that even large enterprises and government agencies often struggle with operationalizing intelligence effectively.
Streamline threat intelligence implementation through turnkey solutions that provide unified platforms rather than attempting to build capabilities from scratch.
Balance AI automation with human expertise in your security operations, recognizing that technology will transform job functions rather than eliminate the need for skilled professionals.
Transform basic security workflows into threat-centric processes that focus on actionable outcomes rather than just data collection and processing.
Prioritize collaborative defense mechanisms that benefit organizations with varying levels of security maturity, particularly those downstream that lack advanced threat identification capabilities.
Listen to more episodes:
Apple
Spotify

Thursday May 08, 2025
In a world obsessed with cutting-edge security technology, Lonnie Best, Senior Manager of Detection & Response Services at Rapid7, makes a compelling case for mastering the fundamentals. After transitioning from craft beer journalism through nuclear security to cybersecurity, Lonnie witnessed the evolution of ransomware attacks from "spray and pray" tactics to sophisticated credential theft and security tool disablement.
His insights reveal why 54% of incident response engagements still trace back to inadequate MFA implementation, and why understanding "how computers compute" creates better security professionals than certifications alone. Lonnie also shares practical wisdom on building effective security operations, avoiding analyst burnout, and measuring program success. As AI increasingly handles tier-one alert triage, he predicts the traditional junior analyst role will fundamentally change within 5-10 years — though human expertise will always remain essential for validating what machines uncover.
Topics discussed:
The evolution of attack sophistication from "spray and pray" ransomware to targeted credential theft and security tool disablement, requiring more comprehensive detection capabilities.
How managed detection and response (MDR) services have evolved to provide enterprise-grade security capabilities to organizations lacking internal resources or security maturity.
The critical components of building an effective internal SOC: centralized logging through SIEM implementation, specialized security expertise across multiple domains, and leadership strategies to combat analyst burnout.
Implementing AI and machine learning for tier-one alert triage to reduce analyst fatigue while maintaining human oversight for validation, with predictions that traditional junior analyst roles will transform within 5-10 years.
Why traditional metrics like alert closures fail to accurately measure SOC analyst performance, requiring more nuanced approaches focusing on contribution quality rather than quantity.
The hiring dilemma of attitude versus aptitude in security analysts, revealing why foundational system administration experience creates more effective investigators than certifications alone.
Strategies for preventing analyst burnout through appropriate tooling, staffing levels, and leadership practices that recognize security's 24/7 operational demands.
The persistent gap between security knowledge and implementation, as demonstrated by 54% of incident response engagements in 2024 resulting from inadequate MFA deployment or enforcement.
Practical fundamentals for effective security: comprehensive asset inventory, attack surface management, vulnerability remediation, and understanding where critical assets reside.
Key Takeaways:
Implement multi-factor authentication across all access points to address the root cause behind 54% of incident response engagements in 2024, according to Rapid7's metrics.
Build your security operations center with centralized logging through SIEM implementation as the core foundation before expanding detection capabilities.
Recruit security analysts with system administration experience rather than just certifications to ensure practical understanding of system behavior and anomaly detection.
Deploy AI and machine learning solutions specifically for tier-one alert triage to combat analyst fatigue while maintaining human oversight for validation.
Create comprehensive asset inventories that identify and map all crown jewels and their access paths before implementing advanced security controls.
Develop leadership strategies that address security's 24/7 operational demands, including appropriate time-off policies and workload management to prevent burnout.
Measure security operations performance through nuanced metrics beyond alert closures, focusing on the quality of investigations and genuine threat detection.
Structure your security team with specialized roles (threat hunting, cloud detection, malware analysis) to create effective career paths and deeper expertise.
Incorporate regular one-on-one meetings with security analysts to assess performance challenges and identify improvement areas beyond traditional metrics.
Prioritize attack surface management alongside vulnerability remediation to understand how attackers could gain entry and navigate toward critical assets.
Listen to more episodes:
Apple
Spotify
YouTube
Website