Testing Detection Rules: Unit Testing for Security Logic
If you write application code without tests, you are considered a bad developer. If you write detection rules without tests, you are considered... normal. This cultural gap between software engineering and security engineering is one reason detection false positive rates remain so high: untested rules running in production against millions of events per day.
The industry is rapidly converging on a more rigorous approach. According to Splunk, in 2025, 63% of security professionals want to adopt Detection-as-Code with systematic testing, but only 35% actually practice it. That gap is an opportunity: teams that implement unit testing for their detection rules get more precise rules, fewer false positives, and a more sustainable maintenance process.
This article builds a complete unit testing framework for Sigma detection rules: from generating synthetic logs, to automated testing with pytest, to coverage analysis to identify detection gaps, to CI/CD pipeline integration.
What You Will Learn
- Unit testing principles applied to detection rules
- sigma-test: the dedicated framework for Sigma rule testing
- Generating synthetic logs for true positive and false positive testing
- Custom pytest framework for detection rules
- Coverage analysis to identify ATT&CK detection gaps
- CI/CD integration: quality gates before deployment
Why Detection Rules Must Have Tests
A detection rule is code. It has inputs (log events), logic (matching conditions), and outputs (alerts). Like any code, it can have bugs: incorrect logic, wrong field names, conditions that are too broad or too narrow. But unlike application code, bugs in detection rules have consequences that manifest slowly: too many false positives cause alert fatigue, while false negatives let attackers go unnoticed.
The types of tests needed for a detection rule are:
- True Positive Test: the expected malicious event MUST trigger the rule
- False Positive Test: common legitimate events MUST NOT trigger
- Edge Case Test: variants of malicious behavior (different encodings, optional parameters)
- Regression Test: ensures changes do not break existing detections
- Performance Test: verifies the rule does not impact SIEM performance
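To make the first two test types concrete, here is a deliberately simplified, self-contained sketch. `detect_encoded_powershell` is a toy stand-in for real rule logic (not the Sigma rule used later in this article), used only to illustrate the TP/FP contract:

```python
# Toy detection function: a simplified stand-in for real rule logic,
# used only to illustrate the true positive / false positive contract.
def detect_encoded_powershell(event: dict) -> bool:
    """True when the event looks like encoded PowerShell execution."""
    image = event.get("Image", "").lower()
    cmdline = event.get("CommandLine", "").lower()
    is_powershell = image.endswith("\\powershell.exe") or image.endswith("\\pwsh.exe")
    has_encoding = any(f in cmdline for f in (" -enc ", " -encodedcommand ", " -ec "))
    return is_powershell and has_encoding

# True Positive: the expected malicious event MUST trigger
tp = {"Image": "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe",
      "CommandLine": "powershell.exe -EncodedCommand SQBFAFgA"}
assert detect_encoded_powershell(tp)

# False Positive: a common legitimate event MUST NOT trigger
fp = {"Image": "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe",
      "CommandLine": "powershell.exe -ExecutionPolicy Bypass -File deploy.ps1"}
assert not detect_encoded_powershell(fp)
```

The rest of the article builds this same contract in real formats: first as YAML annotations with sigma-test, then as pytest suites.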
sigma-test: The Dedicated Framework
sigma-test (github.com/bradleyjkemp/sigma-test) is a specialized tool for testing Sigma rules that allows specifying test events directly in the rule's YAML file, as YAML annotations. This approach keeps tests co-located with the rule, facilitating maintenance.
# Sigma Rule with integrated tests (sigma-test format)
title: PowerShell Encoded Command Execution
id: 5b4f6d89-1234-4321-ab12-fedcba987654
status: stable
description: Detects PowerShell execution with encoding parameters.
tags:
  - attack.execution
  - attack.t1059.001
logsource:
  category: process_creation
  product: windows
detection:
  selection:
    Image|endswith:
      - '\powershell.exe'
      - '\pwsh.exe'
    CommandLine|contains:
      - ' -enc '
      - ' -EncodedCommand '
      - ' -ec '
  condition: selection
level: medium

# Test cases (sigma-test format)
tests:
  - name: "TP: PowerShell with -EncodedCommand"
    should_match: true
    event:
      Image: 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe'
      CommandLine: 'powershell.exe -EncodedCommand SQBFAFgAKABOAGUAdAAgAC4AIAAuACkA'
      ParentImage: 'C:\Windows\System32\cmd.exe'
  - name: "TP: PowerShell with -enc (shorthand)"
    should_match: true
    event:
      Image: 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe'
      CommandLine: 'powershell.exe -enc SQBFAFgA'
  - name: "FP: Normal PowerShell without encoding"
    should_match: false
    event:
      Image: 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe'
      CommandLine: 'powershell.exe -ExecutionPolicy Bypass -File C:\scripts\deploy.ps1'
  - name: "FP: PowerShell with 'encrypted' in path (not encoding param)"
    should_match: false
    event:
      Image: 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe'
      CommandLine: 'powershell.exe Get-Content C:\backup\encrypted.zip'
# Running sigma-test
# Install: go install github.com/bradleyjkemp/sigma-test@latest
# Test a single rule
sigma-test rules/windows/t1059_001_powershell_encoded.yml
# Test all rules in a directory
sigma-test rules/windows/
# Example output:
# PASS rules/windows/t1059_001_powershell_encoded.yml
# TP: PowerShell with -EncodedCommand ... PASS
# TP: PowerShell with -enc (shorthand) ... PASS
# FP: Normal PowerShell without encoding ... PASS
pytest Framework for Advanced Testing
For more complex tests (multi-platform, testing on real SIEM, performance benchmarks), pytest offers superior flexibility. pySigma, the official Python library for Sigma, already uses pytest as its test framework for its backends.
# pytest Framework for Detection Rules
import pytest
import yaml
from pathlib import Path
from sigma.rule import SigmaRule
from sigma.exceptions import SigmaError

RULES_DIR = Path("rules")

def load_all_rules() -> list[tuple[str, str]]:
    """Collects every rule file so pytest can parametrize over them."""
    rules = []
    for rule_file in RULES_DIR.glob("**/*.yml"):
        content = rule_file.read_text()
        rules.append((str(rule_file), content))
    return rules

class TestSigmaRuleSyntax:
    @pytest.mark.parametrize("rule_path,rule_content", load_all_rules())
    def test_valid_yaml(self, rule_path: str, rule_content: str):
        try:
            rule_dict = yaml.safe_load(rule_content)
            assert rule_dict is not None
        except yaml.YAMLError as e:
            pytest.fail(f"Invalid YAML in {rule_path}: {e}")

    @pytest.mark.parametrize("rule_path,rule_content", load_all_rules())
    def test_required_fields(self, rule_path: str, rule_content: str):
        rule_dict = yaml.safe_load(rule_content)
        for field in ['title', 'description', 'logsource', 'detection']:
            assert field in rule_dict, f"Missing '{field}' in {rule_path}"

    @pytest.mark.parametrize("rule_path,rule_content", load_all_rules())
    def test_detection_has_condition(self, rule_path: str, rule_content: str):
        rule_dict = yaml.safe_load(rule_content)
        detection = rule_dict.get('detection', {})
        assert 'condition' in detection, f"Missing 'condition' in {rule_path}"

    @pytest.mark.parametrize("rule_path,rule_content", load_all_rules())
    def test_pysigma_parseable(self, rule_path: str, rule_content: str):
        try:
            SigmaRule.from_yaml(rule_content)
        except SigmaError as e:
            pytest.fail(f"pySigma cannot parse {rule_path}: {e}")
Synthetic Log Generator
True positive and false positive tests require realistic log events. Manually creating test data is tedious and incomplete. An automated generator produces systematic events covering all variants of the rule.
# Synthetic Log Generator
from dataclasses import dataclass, field
from datetime import datetime
import random
import string

@dataclass
class WindowsProcessEvent:
    Image: str = "C:\\Windows\\System32\\cmd.exe"
    CommandLine: str = "cmd.exe"
    ParentImage: str = "C:\\Windows\\Explorer.exe"
    ComputerName: str = "WORKSTATION01"
    User: str = "DOMAIN\\user"
    ProcessId: str = "1234"
    ParentProcessId: str = "5678"
    UtcTime: str = field(default_factory=lambda: datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S"))

    def to_dict(self) -> dict:
        return {k: v for k, v in self.__dict__.items()}

class SyntheticLogGenerator:
    TEMPLATES = {
        'T1059.001': [
            {
                'should_match': True,
                'name': 'PS encoded via cmd',
                'event': WindowsProcessEvent(
                    Image='C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe',
                    CommandLine='powershell.exe -EncodedCommand SQBFAFgA',
                    ParentImage='C:\\Windows\\System32\\cmd.exe'
                )
            },
            {
                'should_match': False,
                'name': 'Normal PS script',
                'event': WindowsProcessEvent(
                    Image='C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe',
                    CommandLine='powershell.exe -ExecutionPolicy Bypass -File deploy.ps1'
                )
            },
        ]
    }

    def get_test_cases(self, technique_id: str) -> list[dict]:
        cases = self.TEMPLATES.get(technique_id, [])
        return [{
            'should_match': c['should_match'],
            'name': c['name'],
            'event': c['event'].to_dict()
        } for c in cases]

    def generate_random_events(self, count: int = 100) -> list[dict]:
        """Generates random legitimate events for stress testing."""
        legitimate = [
            'C:\\Windows\\System32\\cmd.exe',
            'C:\\Windows\\System32\\svchost.exe',
            'C:\\Windows\\System32\\notepad.exe',
        ]
        return [WindowsProcessEvent(
            Image=random.choice(legitimate),
            CommandLine=f"process.exe {''.join(random.choices(string.ascii_lowercase, k=15))}"
        ).to_dict() for _ in range(count)]
Complete Tests with pytest
# Complete pytest Tests
class TestRuleLogicWithSyntheticLogs:
    @pytest.fixture
    def generator(self) -> SyntheticLogGenerator:
        return SyntheticLogGenerator()

    def test_powershell_encoded_true_positives(self, generator):
        rule_content = Path("rules/windows/t1059_001_powershell_encoded.yml").read_text()
        simulator = SigmaRuleSimulator(rule_content)
        for tc in [t for t in generator.get_test_cases('T1059.001') if t['should_match']]:
            assert simulator.matches(tc['event']), \
                f"FALSE NEGATIVE: '{tc['name']}' did not trigger the rule"

    def test_powershell_encoded_false_positives(self, generator):
        rule_content = Path("rules/windows/t1059_001_powershell_encoded.yml").read_text()
        simulator = SigmaRuleSimulator(rule_content)
        for tc in [t for t in generator.get_test_cases('T1059.001') if not t['should_match']]:
            assert not simulator.matches(tc['event']), \
                f"FALSE POSITIVE: '{tc['name']}' triggered unexpectedly"

    def test_stress_no_false_positives(self, generator):
        rule_content = Path("rules/windows/t1059_001_powershell_encoded.yml").read_text()
        simulator = SigmaRuleSimulator(rule_content)
        random_events = generator.generate_random_events(100)
        fp_count = sum(1 for ev in random_events if simulator.matches(ev))
        fp_rate = fp_count / len(random_events)
        assert fp_rate <= 0.02, \
            f"FP rate too high: {fp_rate:.1%} ({fp_count}/100)"

class TestRuleCoverage:
    CRITICAL_TECHNIQUES = [
        'T1059.001', 'T1003.001', 'T1055', 'T1053.005',
        'T1078', 'T1021.002', 'T1562.001', 'T1070.004',
    ]

    def test_critical_techniques_have_rules(self):
        covered = set()
        for rule_file in RULES_DIR.glob("**/*.yml"):
            content = yaml.safe_load(rule_file.read_text())
            for tag in content.get('tags', []):
                if tag.startswith('attack.t'):
                    covered.add(tag.replace('attack.', '').upper())
        uncovered = [t for t in self.CRITICAL_TECHNIQUES if t not in covered]
        assert not uncovered, f"Critical techniques without coverage: {uncovered}"
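The test classes above call a SigmaRuleSimulator helper that is not shown in this article. Below is a minimal sketch of one possible implementation, covering only what the example rule needs: a single `selection`, plain equality, and the `|contains` and `|endswith` modifiers, matched case-insensitively as Sigma's default string matching prescribes. It is not a full Sigma evaluator; for real condition logic, rely on pySigma or sigma-test.

```python
import yaml

class SigmaRuleSimulator:
    """Minimal Sigma matcher for tests: one 'selection', plain equality,
    and the |contains / |endswith modifiers. Not a full Sigma evaluator."""

    def __init__(self, rule_yaml: str):
        rule = yaml.safe_load(rule_yaml)
        self.selection = rule["detection"]["selection"]

    def matches(self, event: dict) -> bool:
        # Fields within a selection are ANDed; each field's value list is ORed.
        for key, patterns in self.selection.items():
            field, _, modifier = key.partition("|")
            value = str(event.get(field, "")).lower()
            pats = patterns if isinstance(patterns, list) else [patterns]
            pats = [str(p).lower() for p in pats]
            if modifier == "endswith":
                ok = any(value.endswith(p) for p in pats)
            elif modifier == "contains":
                ok = any(p in value for p in pats)
            else:  # no modifier: exact (case-insensitive) equality
                ok = value in pats
            if not ok:
                return False
        return True
```

It is instantiated exactly as in the tests above: `SigmaRuleSimulator(rule_content).matches(event)`.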
CI/CD Integration: Quality Gates
Tests must run automatically on every pull request, before a rule is merged into the main repository. A failed gate blocks the deployment of faulty rules to production.
# GitHub Actions CI/CD for Detection Rules
# .github/workflows/test-detection-rules.yml
name: Detection Rules CI

on:
  pull_request:
    paths: ['rules/**', 'tests/**']
  push:
    branches: [main]

jobs:
  syntax-validation:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: { python-version: '3.12' }
      - run: pip install pySigma pyyaml pytest
      - run: pytest tests/test_detection_rules.py::TestSigmaRuleSyntax -v

  logic-testing:
    runs-on: ubuntu-latest
    needs: syntax-validation
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: { python-version: '3.12' }
      - run: pip install pySigma pyyaml pytest pytest-cov
      - run: pytest tests/test_rule_logic.py -v --cov=rules

  sigma-test-runner:
    runs-on: ubuntu-latest
    needs: syntax-validation
    steps:
      - uses: actions/checkout@v4
      - run: go install github.com/bradleyjkemp/sigma-test@latest
      - run: sigma-test rules/ --exit-on-failure

  notify-failure:
    needs: [syntax-validation, logic-testing, sigma-test-runner]
    if: failure()
    runs-on: ubuntu-latest
    steps:
      - name: Notify on failure
        run: echo "CI failed for PR - notify team"
Coverage Strategy for Detection Rules
A good coverage strategy for detection rules measures tested behaviors, not lines of code. Recommended minimum targets:
- Every rule must have at least 2 TP tests and 2 FP tests
- "High" and "critical" rules must have at least 3 TP and 3 FP
- 100% of ATT&CK techniques classified as "critical" must have coverage
- Stress test (100 random events) on all rules with FP rate < 2%
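These targets can be enforced mechanically. The sketch below assumes rules carry sigma-test style `tests:` annotations (as in the example rule earlier) and live under `rules/`; the MIN_TESTS thresholds encode the bullets above:

```python
# Policy test: every annotated rule must carry the minimum number of
# TP and FP test cases for its severity level.
import yaml
from pathlib import Path

# (min TP tests, min FP tests) per rule level, per the targets above
MIN_TESTS = {"default": (2, 2), "high": (3, 3), "critical": (3, 3)}

def check_rule_test_policy(rule_path: Path) -> list[str]:
    """Returns a list of policy violations for one annotated rule file."""
    rule = yaml.safe_load(rule_path.read_text())
    tests = rule.get("tests", [])
    tp = sum(1 for t in tests if t.get("should_match"))
    fp = sum(1 for t in tests if not t.get("should_match"))
    min_tp, min_fp = MIN_TESTS.get(rule.get("level", "default"), MIN_TESTS["default"])
    violations = []
    if tp < min_tp:
        violations.append(f"{rule_path}: {tp} TP tests, minimum {min_tp}")
    if fp < min_fp:
        violations.append(f"{rule_path}: {fp} FP tests, minimum {min_fp}")
    return violations

def test_all_rules_meet_test_policy():
    violations = []
    for rule_file in Path("rules").glob("**/*.yml"):
        violations.extend(check_rule_test_policy(rule_file))
    assert not violations, "\n".join(violations)
```

Dropped into the pytest suite from the previous sections, this turns the coverage targets from a convention into a CI-enforced gate.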
Simulator Limitation: Not a Replacement for Real SIEM Testing
The Python simulator and sigma-test are excellent pre-validation tools, but they do not perfectly simulate the field normalization of the target SIEM. A rule that passes all local tests may fail on Splunk because the field is called process_path instead of Image. Always add a test on a staging environment with a real SIEM before production deployment.
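One cheap pre-check that catches this class of failure before staging is to compare the field names a rule uses against an inventory of the fields your target SIEM actually normalizes. The inventories below are illustrative assumptions; substitute the real schema of your deployment:

```python
# Sketch: flag rule fields that the target SIEM does not normalize.
# The per-SIEM field inventories are illustrative assumptions -- replace
# them with the actual normalized schema of your deployment.
import yaml

KNOWN_FIELDS = {
    "splunk_cim": {"process_path", "process", "parent_process_path", "user"},
    "sysmon_raw": {"Image", "CommandLine", "ParentImage", "User"},
}

def unmapped_fields(rule_yaml: str, target: str) -> set[str]:
    """Returns rule fields absent from the target SIEM's field inventory."""
    detection = yaml.safe_load(rule_yaml)["detection"]
    used = set()
    for name, block in detection.items():
        if name == "condition" or not isinstance(block, dict):
            continue
        for key in block:
            used.add(key.split("|")[0])  # strip Sigma modifiers like |endswith
    return used - KNOWN_FIELDS[target]
```

With these example inventories, the article's rule passes against sysmon_raw but reports Image and CommandLine as unmapped against splunk_cim: exactly the failure mode described above, caught before the rule ever reaches staging.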
Conclusions and Key Takeaways
Unit testing for detection rules is not overhead: it is the investment that allows maintaining a repository of hundreds of rules without quality degrading over time. With the framework described, every rule has an explicit contract of what it must detect and what it must not detect, automatically verified on every change.
Key Takeaways
- Detection rules are code: they need tests like any other code
- sigma-test allows co-located tests with the rule in native YAML format
- pytest offers flexibility for advanced tests: stress testing, coverage, parameterization
- Automatically generated synthetic logs cover more cases than manual events
- CI/CD gate prevents deployment of faulty rules to production
- ATT&CK coverage identifies detection gaps on critical techniques
- Simulators do not replace tests on real SIEM in staging
Related Articles
- Sigma Rules: Universal Detection Logic and SIEM Conversion
- Detection-as-Code Pipeline with Git and CI/CD
- AI-Assisted Detection: LLMs for Sigma Rule Generation
- MITRE ATT&CK Integration: Mapping Coverage Gaps