Partha Naidu, Cofounder • April 21, 2025
Introduction
From my time leading incident response and threat intelligence-driven hunting missions at the Air Force Security Operations Center in San Antonio, one thing became crystal clear: security operations (SecOps) is not just about closing alerts. It’s about reducing risk.
Reducing risk is a complex endeavor involving multiple moving parts beyond alert triage. SOC teams address it with dedicated personnel focused on threat intelligence, detection engineering, investigations, hunting, and more. These roles work together with one mission: find the adversary amongst the noise. Adversaries are leveraging AI to 10x their outcomes through speed and scale. Defenders need to do the same to reduce the mean-time-to-detection and mean-time-to-response for emerging threats from days to minutes. This blog explores how Agentic AI supports not just investigations, but the entire ecosystem of a modern SOC.
Emerging Threats End-to-End Use Case
A great example of multiple security personas working together in a SOC is the above emerging threats workflow that emulates one that we implemented at the Air Force. It starts with a threat intelligence analyst continuously monitoring open sources like Twitter, blogs, and news feeds to identify threat reports relevant to the organization. From there, a detection engineer crafts a detection rule, which is then validated by the red team and tested in real-world scenarios. Once alerts are generated, a security analyst investigates them and provides feedback if the rule is too noisy, prompting another cycle of refinement. When the detection is accurate and reliable, the focus shifts to response—where the analyst collaborates with a security engineer to build and automate a playbook for future alerts. This playbook goes through further testing and tuning, with back-and-forth feedback until it's production-ready. Each handoff adds time, making the entire process slow and resource-intensive.
This is a wonderful opportunity for agents to engage with this process and support the key personas: threat intelligence analysts, detection engineers, security analysts, and security engineers.
What Is Agentic AI and Why It Matters for SecOps
Agentic AI refers to autonomous AI systems that take action toward specific goals, adapting in real time with minimal human input. Unlike traditional rule-based automation, agentic AI is:
Autonomous: Operates independently to perform tasks and make decisions.
Goal-Oriented: Focuses on outcomes, not just rules.
Adaptive: Adjusts to new inputs, environments, unstructured data, and data patterns.
Multi-Step Capable: Handles complex workflows involving multiple tools and decisions.
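The four properties above can be summarized in a control loop: the agent repeatedly picks the next tool based on the current state, executes it, folds the result back in, and stops once the goal is reached. The sketch below is purely illustrative; `Tool`, `run_agent`, and the policy function are hypothetical names, not any real framework's API, and a real agent would choose each step with an LLM rather than a hard-coded policy.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Tool:
    name: str
    run: Callable[[dict], dict]  # reads shared state, returns updates

def run_agent(state: dict,
              choose: Callable[[dict], Optional[Tool]],
              max_steps: int = 10) -> dict:
    # Goal-oriented loop: pick the next tool from the current state,
    # execute it, merge the observation back into state, and stop when
    # the policy returns None (goal reached) or the step budget runs out.
    for _ in range(max_steps):
        tool = choose(state)
        if tool is None:
            break
        state.update(tool.run(state))
    return state

# Toy usage: "enrich" until three pieces of context are gathered.
enrich = Tool("enrich", lambda s: {"count": s["count"] + 1})
final = run_agent({"count": 0}, lambda s: enrich if s["count"] < 3 else None)
```

The key contrast with rule-based automation is that nothing above encodes a fixed sequence of steps; the policy decides at every iteration based on what has been learned so far.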
In the world of cybersecurity—where attackers are creative, patient, and increasingly AI-augmented—defenders need intelligent systems that can reason, adapt, and work alongside humans. Agentic AI offers exactly that. The four revolutionary capabilities of agentic AI can extend to multiple security personas and SOC workflows.
Threat Intel Operations
Threat intel isn’t just about parsing STIX/TAXII feeds or tracking IOCs. Mature teams focus on emerging TTPs (tactics, techniques, and procedures) and turning insights into proactive defenses.
Agentic AI supports threat intel by:
Analyzing unstructured threat intelligence (e.g., blog posts, tweets, CVEs) to recommend new detection rules.
Surfacing and digesting emerging threats from OSINT and structured feeds.
Identifying the relevance of new threat reporting to an organization based on its tools and context.
In the emerging threats example, AI agents can parse unstructured threat reporting such as blogs and Twitter feeds, build a semantic understanding of the reporting, and then apply it against internal security stacks and known context. For example, an attack vector for Entra is only relevant if a customer runs Entra, and an attack vector targeting the financial industry isn't relevant for organizations in healthcare. Instead of threat intel analysts spending hours reading every post, an AI agent can parse all the latest news to identify the reporting relevant to an organization.
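The relevance check described above can be sketched in a few lines. This toy version uses keyword overlap against a hypothetical organization profile (`ORG_CONTEXT` and its fields are invented for illustration); a production agent would use embeddings or an LLM for genuinely semantic matching.

```python
# Hypothetical organization profile: the products it runs and its industry.
ORG_CONTEXT = {
    "stack": {"entra", "splunk", "okta", "aws"},
    "industry": "healthcare",
}

def is_relevant(report: dict, org: dict = ORG_CONTEXT) -> bool:
    # Toy semantic check: a report matters if it mentions a product we
    # actually run, or explicitly targets our industry.
    words = set(report["text"].lower().split())
    if words & org["stack"]:
        return True
    return org["industry"] in words

reports = [
    {"title": "New Entra token theft technique",
     "text": "attackers abuse entra refresh tokens"},
    {"title": "Campaign hits financial firms",
     "text": "phishing wave targets financial sector"},
]
relevant = [r["title"] for r in reports if is_relevant(r)]
```

Note how the second report is filtered out: it targets a different industry and mentions nothing in the stack, which is exactly the triage an analyst would otherwise do by hand.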
Detection Engineering
Detection engineering is an ongoing battle of coverage vs. noise. If you undertune detections, you flood analysts with false positives. If you over-tune, you miss adversary behavior. The goal is trustworthy alerting—and trust comes from precision.
Agentic AI helps detection engineering by:
Identifying patterns in noisy alerts, based on entity frequency or seasonality, to provide tuning recommendations to a human reviewer.
Generating new detection rules from an input, whether a human-driven prompt or another agent (such as a threat intel agent).
Continuing the emerging threats example, a threat intel agent can pass relevant, processed threat intel to a detection agent. The detection agent can then recommend a detection rule tailored to the customer's security stack (e.g., a Splunk rule) and prior context (e.g., excluding noisy entities) to a human engineer for review. This significantly reduces the overhead of rule development and tuning, cutting the mean-time-to-detection for the new attack vector. But what about response?
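As a rough sketch of that handoff, the snippet below turns parsed intel into a draft Splunk-style SPL search, excluding entities a human previously flagged as noisy. The field names (`process_name`, `user`) and the template itself are assumptions for illustration; a real detection agent would generate and validate the rule against the customer's actual field schema.

```python
def recommend_splunk_rule(intel: dict, noisy_entities: list[str]) -> str:
    # Build an OR-clause over the intel's process indicators, then bolt
    # on exclusions for entities the analyst marked as noisy, so the
    # draft rule starts from prior tuning context instead of scratch.
    indicator_clause = " OR ".join(
        f'process_name="{p}"' for p in intel["processes"]
    )
    exclusions = " ".join(f'user!="{u}"' for u in noisy_entities)
    return (
        f'index={intel["index"]} ({indicator_clause}) {exclusions} '
        f'| stats count by host, user'
    )

rule = recommend_splunk_rule(
    {"index": "endpoint", "processes": ["rundll32.exe", "mshta.exe"]},
    noisy_entities=["svc_backup"],
)
```

The draft would then go to a human engineer for review, matching the human-in-the-loop flow described above.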
Investigations and Automation
As many of you already know, investigations are an exciting use case for agentic AI as it can:
Automate contextual enrichment across identity, devices, cloud, and third-party telemetry.
Pivot and explore hypotheses through multi-step reasoning.
Correlate across alerts, behavioral deviations, threat intelligence, and access levels.
So instead of manually gathering investigation information, finding historical context, checking threat intelligence, and then drawing their own inferences, analysts can now leverage agents to act autonomously on their behalf, stepping in as reviewers or focusing on escalations. You can see this in today's market with the rise of "AI SOC Analysts".
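The enrichment step above can be sketched as a single fan-out over context sources. Every source name and field below (`identity`, `auth_logs`, `intel_iocs`) is hypothetical; the point is the shape: the agent assembles a pre-built picture for the alerted entity so the analyst reviews a summary instead of querying each system by hand.

```python
def enrich_alert(alert: dict, sources: dict) -> dict:
    # Pull identity context, recent authentications, and threat intel
    # matches for the alerted user from each telemetry source.
    user = alert["user"]
    return {
        "alert": alert["name"],
        "identity": sources["identity"].get(user, {}),
        "recent_logins": [e for e in sources["auth_logs"] if e["user"] == user],
        "intel_hits": [i for i in alert.get("iocs", []) if i in sources["intel_iocs"]],
    }

context = enrich_alert(
    {"name": "Suspicious login", "user": "jdoe", "iocs": ["1.2.3.4"]},
    {
        "identity": {"jdoe": {"role": "finance", "mfa": True}},
        "auth_logs": [{"user": "jdoe", "src": "1.2.3.4"},
                      {"user": "asmith", "src": "5.6.7.8"}],
        "intel_iocs": {"1.2.3.4"},
    },
)
```

Multi-step reasoning would then pivot on whatever this first pass surfaces, for example following the matched IOC into network telemetry.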
But let’s be honest: most “AI SOC analysts” today are really just managed SOARs interacting with a SIEM and an LLM to summarize a playbook’s findings. They respond to alerts, run a few predefined queries, summarize, and stop. That’s helpful for basic triage, but it’s not enough for unknown attack vectors like emerging threats.
Here’s the reality:
Attackers test their methods in advance against detection tools like CrowdStrike.
If something trips a high-fidelity alert? They simply avoid using that vector.
Instead, they go low and slow over months—flying under the radar.
True investigations require looking at alerts in historical context and across multiple data sources, often spanning nine or more months.
Therefore, effective AI SOC Analysts must be capable of investigating every alert, including previously unseen ones, across any data volume or scale, with the ability to analyze up to a year's worth of historical data. Agents cannot achieve this depth of investigation if they live on a SIEM, due to rate limits.
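One common way to make a year of history tractable, sketched below as an assumption rather than any particular product's approach, is to sweep it in bounded time windows so no single query hits a volume or rate ceiling.

```python
from datetime import date, timedelta

def month_windows(end: date, months: int = 12):
    # Yield (start, end) windows walking backwards over roughly a year,
    # so an agent can analyze history in bounded chunks instead of one
    # giant query that a backend would throttle or reject.
    cursor = end
    for _ in range(months):
        start = cursor - timedelta(days=30)
        yield (start, cursor)
        cursor = start

# Roughly twelve 30-day windows ending at a chosen anchor date.
windows = list(month_windows(date(2025, 4, 1)))
```

An agent would run its low-and-slow correlation (rare logins, staged exfiltration, dormant accounts) per window and stitch the findings together across windows.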
Going back to our emerging threats example, AI agents can significantly aid security analysts and security engineers by accelerating the investigation and the playbook generation, validation, and tuning steps. This doesn't remove the human from the loop, since the playbook and the final decision still require human feedback to adjust and improve. But it is a substantial time saver.
The After
We can supercharge this powerful workflow with agents in key functions, significantly reducing the time it takes to build an end-to-end use case for emerging threats. And this is just the beginning. There are more opportunities to improve this workflow. For example, agents can also generate behavior to test rules and then provide tuning recommendations based on performance against those tests.
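That test-and-tune loop can be sketched as a small harness: replay labeled synthetic events through a rule (a simple predicate here, a detection query in practice) and report precision and recall so a tuning agent, or a human, can decide what to adjust. All names and the sample events are illustrative.

```python
def evaluate_rule(rule, events):
    # Score the rule against labeled synthetic events: precision tells
    # us how noisy it is, recall tells us what it misses.
    fired = [e for e in events if rule(e)]
    tp = sum(1 for e in fired if e["malicious"])
    fp = len(fired) - tp
    fn = sum(1 for e in events if e["malicious"]) - tp
    precision = tp / (tp + fp) if fired else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {"precision": precision, "recall": recall}

events = [
    {"proc": "mshta.exe", "malicious": True},   # simulated attack
    {"proc": "mshta.exe", "malicious": False},  # benign admin use
    {"proc": "chrome.exe", "malicious": False},
]
metrics = evaluate_rule(lambda e: e["proc"] == "mshta.exe", events)
```

Here perfect recall with 50% precision would prompt a recommendation to add an exclusion for the benign use, exactly the feedback cycle described in the workflow above.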
Humans cannot be replaced
But as you can see, it does not eliminate humans from being validators in the loop. Human verification and feedback remain critical for tuning and adjustments.
Adversaries are people employing AI. People still defeat people, regardless of the tools involved. Cyber defenders match them by being creative, intuitive, and contextual—traits AI struggles to replicate. The future isn’t about AI replacing humans. It’s about synergy—humans and AI working together in specialized roles to outpace increasingly sophisticated adversaries.
Summary
Security operations is more than just investigations. It’s a team effort involving detection engineers, threat intel analysts, hunters, red teams, automation engineers, and more. And at the heart of it all? A shared mission to reduce risk.
Agentic AI amplifies every role by making sense of data, adapting to change, and helping SOCs move from reactive alert handling to strategic risk management.
This is the philosophy behind Kenzo Security. Kenzo is the first Agentic Security Platform that takes a data-first approach to supporting all the personas of your security operations teams. We start by modeling a human-level understanding across your security telemetry in our proprietary Security Data Mesh. Specialized agents supporting every function of the SOC leverage this Security Data Mesh and work together to deliver powerful outcomes and defeat the adversary.
If you’re ready to level up your security operations, schedule a demo with us today.