What If We Applied the Self-Driving Car Levels to Security?
When people talk about self-driving cars, they often use the SAE levels of automation. It's a simple system that runs from Level 0 (no automation) to Level 5 (full autonomy). Everyone in that industry knows what the levels mean. They've given manufacturers, regulators, and drivers a shared way to talk about progress.
What if security had the same kind of roadmap?
A Quick Primer on the SAE Levels
Here's how the car industry thinks about autonomy:
- Level 0 (No Automation): The driver does everything. Think of an old manual car, no sensors, no driver assistance.
- Level 1 (Driver Assistance): The car can take over one task, like adaptive cruise control. Most modern cars have this today.
- Level 2 (Partial Automation): The car can steer and control speed at the same time, but the driver must stay engaged. Tesla's Autopilot and GM's Super Cruise are examples of Level 2 systems.
- Level 3 (Conditional Automation): The car can drive itself in specific conditions, such as on highways, but expects the driver to take over when alerted. Honda's "Traffic Jam Pilot" is one of the first commercially available Level 3 systems.
- Level 4 (High Automation): The car can handle most driving situations on its own. Some experimental robo-taxis, like Waymo's driverless service in Phoenix, are pushing into Level 4 territory.
- Level 5 (Full Automation): The dream scenario with no steering wheel, no pedals, and no human needed. A car that can handle every environment and condition on its own.
The progression is straightforward, and that clarity is what makes the SAE model so useful. Everyone understands the difference between Level 2 and Level 4, even without digging into the technical details.
Translating the Framework to Security
To apply this framework to security, we need to map the key elements of autonomous driving to cybersecurity operations. The parallels are more natural than you might expect.
In this model, the driver is the security analyst. They are constantly scanning the environment, making split-second decisions, and carrying the responsibility for safety.
The vehicle represents the collection of security systems: the SIEM, the endpoint agents, the compliance workflows. Some vehicles are clunky and old-fashioned, while others come packed with sensors and assistive features.
The road is the threat landscape, full of potholes, blind spots, and the occasional reckless driver.
Seen this way, the parallels become clear:
- Just as a driver can be overwhelmed by complex road conditions and heavy traffic, analysts are often buried under endless alerts and logs.
- Just as a car can take over tasks like braking or lane-keeping, security systems can automate containment or evidence collection.
- Just as autonomous driving requires trust in the car's decisions, autonomous security demands trust in the system's judgment.
Of course, the metaphor has limits. Threats are adaptive in ways that weather and traffic are not. Attackers actively look for ways to confuse or mislead defenses.
But that makes the model even more useful, because it forces us to ask hard questions: what should remain in human hands, and what can we confidently hand over to machines?
Levels of Autonomous Secure Systems
If we map the SAE levels onto security, a clear progression starts to emerge.
Level 0: No Automation
At Level 0, nothing is automated. Security is entirely manual. Analysts investigate every alert, build every correlation, and remediate incidents by hand. Compliance teams gather screenshots, spreadsheets, and tickets for audits. Think of organizations relying purely on manual log analysis, paper-based incident reports, and Excel spreadsheets for risk tracking. It is exhausting, slow, and error-prone, but still the reality for many organizations today.
Level 1: Driver Assistance
At Level 1, systems can detect and alert on specific threats but require human action for any response. An antivirus detects malware and alerts the user, but someone must decide whether to quarantine or allow it. A firewall identifies suspicious traffic and logs it, but an analyst must review the logs and manually create blocking rules. An intrusion detection system spots anomalous network behavior and generates an alert, but a human must investigate and determine if action is needed. The tools provide information and basic detection, but all decision-making and response actions remain entirely in human hands.
Level 2: Partial Automation
At Level 2, systems can execute single, predefined responses to specific triggers. When a clear rule is met, the system takes one automated action, but humans must validate and complete the response. A phishing email detection might automatically quarantine the message, but an analyst must investigate the sender, check for other victims, and close the incident. A malware detection might isolate an endpoint, but someone must analyze the threat, clean the system, and determine when it's safe to reconnect. The automation handles the immediate protective action, but humans drive the investigation and resolution.
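Sketched in code, the Level 2 pattern is one automated containment step followed by a human handoff. The threshold, field names, and queue name below are assumptions for illustration:

```python
# Level 2 sketch: one predefined action fires on a clear trigger
# (quarantine a suspected phishing mail), then the incident is handed
# to an analyst. The threshold of 0.9 is an invented example value.

PHISHING_THRESHOLD = 0.9

def handle_email(message_id: str, phishing_score: float) -> dict:
    incident = {
        "message_id": message_id,
        "status": "open",               # a human must close it
        "automated_action": None,
        "assigned_to": "analyst-queue",
    }
    if phishing_score >= PHISHING_THRESHOLD:
        # The single automated step: immediate containment.
        incident["automated_action"] = "quarantined"
    # Sender investigation, victim search, and closure stay human.
    return incident

ticket = handle_email("msg-123", phishing_score=0.97)
```

The protective action is instant, but the incident's `status` stays `open` until an analyst finishes the investigation, which is exactly the division of labor Level 2 describes.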
Level 3: Conditional Automation
Level 3 systems can execute complete workflows for well-defined incident types without human intervention. When a known malware signature is detected, the system can automatically isolate the endpoint, collect forensic evidence, run cleanup procedures, notify stakeholders, generate an incident report, and restore the endpoint to service. When an employee downloads a file from a known malicious domain, it can block the download, scan their system, update firewall rules, alert management, and document the event, all without human touch. The key difference: these systems handle entire incident lifecycles from detection to closure, but only for scenarios they've been specifically programmed to recognize. Anything outside these defined patterns gets escalated to humans. Analysts become supervisors, setting policies and handling exceptions.
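The escalation logic above can be sketched as a playbook lookup: recognized incident types run end to end, everything else goes to a human. The playbook step names are placeholders standing in for real SOAR actions:

```python
# Level 3 sketch: a complete playbook runs end to end for a recognized
# incident type; anything outside the defined patterns escalates.

PLAYBOOKS = {
    "known_malware": [
        "isolate_endpoint", "collect_forensics", "run_cleanup",
        "notify_stakeholders", "write_report", "restore_endpoint",
    ],
    "malicious_download": [
        "block_download", "scan_host", "update_firewall",
        "alert_management", "document_event",
    ],
}

def respond(incident_type: str) -> dict:
    playbook = PLAYBOOKS.get(incident_type)
    if playbook is None:
        # Unknown scenario: hand off to a human supervisor.
        return {"handled_by": "human", "steps": []}
    # Known scenario: full lifecycle, detection to closure.
    return {"handled_by": "automation", "steps": playbook, "status": "closed"}
```

The crucial line is the `None` check: Level 3's autonomy is bounded by its catalog of playbooks, and the human's job shifts to maintaining that catalog and absorbing the exceptions.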
Level 4: High Automation
Level 4 systems can adapt and make decisions about novel or complex scenarios using AI and machine learning. Rather than following pre-programmed playbooks, they analyze patterns, assess risk, and determine appropriate responses even for previously unseen threats. A system might detect unusual network traffic that doesn't match known attack patterns, correlate it with user behavior and business context, then decide whether to block, monitor, or allow the activity. It handles not just incident response but also continuous compliance monitoring, automatically generating audit reports and adjusting security controls based on regulatory changes. Humans set strategic direction and policies, but the system manages day-to-day operations and most decision-making autonomously.
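A toy version of that decision step might look like the following. The weights and thresholds are invented for illustration; a real Level 4 system would learn them from data rather than hard-code them:

```python
# Level 4 sketch: instead of matching a playbook, the system scores a
# novel event against behavioral and business context, then chooses a
# response itself. All numbers here are assumptions for illustration.

def decide(anomaly_score: float, user_risk: float,
           business_critical: bool) -> str:
    """Pick block / monitor / allow for traffic with no known signature."""
    risk = 0.6 * anomaly_score + 0.4 * user_risk
    if risk > 0.8 and not business_critical:
        return "block"    # high risk, safe to act autonomously
    if risk > 0.5:
        return "monitor"  # risky, or blocking could disrupt the business
    return "allow"

# Unusual traffic tied to a critical trading system: the system weighs
# business context and chooses to watch rather than block.
decision = decide(anomaly_score=0.9, user_risk=0.8, business_critical=True)
```

Even in this toy form, the business-context branch shows why Level 4 is qualitatively different: the system is not just executing a response, it is trading off security risk against operational impact.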
Level 5: Full Automation
Finally, Level 5 imagines full autonomy: a hypothetical system that continuously predicts, prevents, detects, and mitigates threats across the enterprise without a human in the loop. Think of AI-driven security meshes that adapt in real time, deploy countermeasures, and generate compliance reports, all while learning from every interaction. Compliance becomes a byproduct of secure operation. Analysts are no longer drivers or even supervisors; they set direction, and the system takes care of the rest. Whether this future is desirable or even achievable is still an open question, but it is useful as a north star.
Where Companies Really Are
The reality is that most organizations operate at different levels across different security functions. A typical company might have Level 1 capabilities for malware detection (antivirus alerts and basic blocking) but remain at Level 0 for incident response and compliance reporting. They might use firewalls that automatically block known bad IPs (Level 1) while manually investigating every security alert and creating compliance reports in spreadsheets (Level 0).
According to the 2024 SANS Detection and Response Survey, only 16% of organizations have fully automated cyber response processes. Most rely on SIEMs and security tools that generate alerts and perform basic blocking, but the complex work (correlation, investigation, remediation, and compliance) remains manual. It's a patchwork of automation: some Level 1 detection capabilities mixed with predominantly Level 0 processes. The gap between ambition and reality is stark. Everyone talks about automation, but in practice, it's still mostly humans doing the driving.
Contrast that with the space sector, where autonomy is not optional but a matter of survival. A rocket on its way to the moon cannot wait for a SOC analyst to approve a response. Spacecraft carry onboard fault detection and recovery systems that continuously monitor telemetry and autonomously isolate subsystems, switch to backups, or enter safe mode. That's Level 4 automation in action: the system takes corrective measures while humans on Earth watch from afar.
NASA's Mars rovers go even further. They must navigate unknown terrain, avoid hazards, and make operational decisions on their own, since waiting 20 minutes for instructions is not practical. Upcoming missions like the Lunar Gateway will push this further still, embedding autonomous monitoring and remediation of both technical and security risks. That starts to look like Level 5: systems that defend and sustain themselves without human intervention.
If space systems can reach Level 4 and beyond out of necessity, the question for enterprises is not "is this possible?" but "why are we still stuck at the first levels?"
The Benefits and Challenges of Moving Up
The attraction of moving up the levels is obvious. Faster detection and response can contain attacks before they spread. Automation can scale where human teams cannot, especially in environments with thousands of daily alerts. Recent research by Vectra AI found that SOC teams deal with an average of 3,832 alerts per day, with 62% of those alerts being ignored. Fatigue and burnout can be reduced, compliance can become more predictable, and organizations can build resilience rather than scramble from one incident to the next.
The financial case for automation is equally compelling. IBM's 2024 Cost of Data Breach Report found that organizations with extensive security automation deployment experienced average breach costs of $3.62 million compared to $5.52 million for organizations with no automation, a 34% cost reduction.
But the path upward is not without challenges. Trust in automated decisions remains low, and for good reason. Consider what happens when an autonomous system misidentifies a critical business application as malware and automatically quarantines it during peak trading hours. The financial impact can be catastrophic. Questions of accountability loom large when machines make mistakes: who is liable when an AI-driven system blocks legitimate customer access, causing revenue loss?
The complexity challenge is equally daunting. The Seemplicity 2024 Remediation Operations Report reveals that 85% of organizations find it challenging to manage the noise from their security tools, with organizations using an average of 38 different security vendors. Each tool has its own logic, thresholds, and decision-making criteria. When you chain these together for automation, the potential for cascading failures multiplies.
False positive scenarios become amplified at scale. A Level 3 system might automatically isolate hundreds of endpoints based on a signature that triggers on legitimate software updates. The business disruption from over-automation can be worse than the original threat. In regulated industries, this becomes even more complex. Imagine an automated system that blocks a critical medical device communication because it resembles a known attack pattern.
The explainability problem is particularly acute in highly regulated environments. Auditors and regulators demand to understand not just what happened, but why the system made specific decisions. "The AI told us to do it" isn't an acceptable explanation when facing PCI-DSS auditors or DORA compliance requirements. Organizations need systems that can provide clear, auditable decision trails.
Skills gaps widen as automation advances. Paradoxically, as systems become more autonomous, the humans overseeing them need deeper expertise to understand when and how to intervene. Junior analysts who might have learned through hands-on incident response now find themselves managing systems they don't fully understand. When automation fails, and it will, teams without sufficient expertise to take manual control face even longer recovery times.
There are also real ethical concerns about what it means to remove humans from the loop in decisions that affect risk and safety. Who makes the call on acceptable risk when an automated system wants to shut down a production database to contain a potential threat?
What Would Level 5 Look Like?
Imagine a security system that anticipates an attack before it happens, automatically closes off vulnerable paths, and generates audit-ready evidence while it works. No analyst drowning in alerts. No compliance team buried in spreadsheets. Just an environment that defends and proves itself continuously.
It sounds appealing, but is it realistic? More importantly, is it desirable? As in the car industry, Level 5 raises big questions about control, liability, and trust. For most organizations, the true sweet spot might be Level 4: high automation that carries the burden of day-to-day operations, while humans still guide strategy and policy.
The Road Ahead
The SAE framework doesn't just describe technology, it creates a shared language for security maturity that we desperately need. Applied to security, it gives us a clearer way to talk about progress. Instead of vague goals like "we need more automation", teams can have concrete conversations: What level are we at today? What level should we aim for? And what would it take to get there?
Whether your organization is stuck at Level 0 with manual processes or advancing toward Level 4 high automation, the key is having honest conversations about current state and realistic next steps. The future belongs to organizations that can balance the efficiency of automation with the judgment of human expertise. The question isn't whether to automate, but how to do it thoughtfully, safely, and in service of genuine security outcomes rather than just operational convenience.
That's how a simple idea from self-driving cars can reframe the future of secure systems.
So let me ask you: what level is your organization at today?