IR.L2-3.6.1: Incident Handling

Establish an operational incident-handling capability for organizational systems that includes preparation, detection, analysis, containment, recovery, and user response activities.

Incident response is the practice where the gap between “having a plan” and “being able to execute” is most visible to an assessor. Every organization can produce a document that says “Incident Response Plan” at the top. Very few can put someone in a chair who can walk the assessor through what actually happens when something goes wrong.

What the assessor is actually evaluating

The NIST language covers the full incident handling lifecycle: preparation, detection, analysis, containment, recovery, and user response activities. That sounds academic. In the assessment room, the assessor is looking at four things:

Do you have a plan, and is it real? The plan needs to exist, and it needs to describe your actual environment and your actual people. A template you downloaded and filled in with your company name is a starting point, not a finished product. The assessor will read it. If it references a “Security Operations Center” and you’re a 20-person company, they’ll notice. The plan should describe what your organization actually does, with real roles, real contact information, and real procedures that match how you’d actually respond. Your Incident Response Team should be defined in the plan, even if it’s just your IT director, your FSO, your network admin, and your MSSP’s SOC manager. Those are your IRT members. Name them.

Have you tested it? This is the question that separates organizations that take IR seriously from organizations that wrote a plan to check a box. The assessor will ask when the plan was last tested. If the answer is “never,” that’s a finding. Tabletop exercises count. Simulations count. After-action reviews from real incidents count. What doesn’t count is “we haven’t gotten around to it yet.”

Can the people in the defined roles describe what they’d do? The assessor may ask to speak with the person listed as the incident response lead, or the person responsible for containment, or whoever your plan names as the communications contact. They’ll ask those people to describe their role during an incident. Not read from the plan. Describe it. If the person listed as incident commander says “I’m not really sure, I think I’d call our MSP,” that’s a problem. The assessor is checking whether the roles are real or just names on paper.

Can you show how incidents would be documented? Not every organization has had a declared security incident, and that’s fine. It’s common, and assessors aren’t skeptical about it. But you need to show the capability exists. That means having an IR form template that shows how an incident would be documented if one occurred, with sections for detection details, classification, containment actions, timeline, personnel involved, resolution, and lessons learned. If you have had incidents, the assessor wants to see how they were tracked and documented through to closure.

Incidents vs. alerts: a distinction your plan needs to make

This is something that trips up a lot of organizations, and it deserves its own section.

Alerts happen constantly. Phishing emails get delivered. VPN brute force attempts hit your firewall. Someone downloads a PUP that gets quarantined before it executes. A blocked malware download triggers a SIEM alert. These are events. They’re real. They may even be true positives. But are they incidents?

Probably not. And your IR plan needs to make this distinction clear.

An incident, in most well-written IR plans, is unauthorized activity that results in or has the potential to result in damage, information disclosure, or disruption. Someone clicking a phishing link where the account gets automatically locked before anything is accessed is an alert that got handled. A compromised account where the attacker actually reached CUI is an incident. The line matters because it determines what process kicks in, who gets notified, and what gets reported.

Your plan should define what constitutes an incident for your organization. It should also be clear about who has the authority to formally declare one. That’s typically the incident response coordinator or the MSSP SOC manager, depending on your model. When an alert crosses the threshold into incident territory, someone needs to make that call, and everyone needs to know who that someone is.

The assessor may ask how you differentiate between alerts and incidents. They may ask what happens when an alert escalates. If someone in the room can clearly explain, “here’s how an alert gets triaged, here’s the threshold for declaring an incident, and here’s who makes that call,” you’re in good shape.
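
To make the triage threshold concrete, here’s a minimal sketch in Python of the decision the plan has to encode. The field names and criteria below are illustrative assumptions, not a standard; your IRP’s definitions govern, and the formal call always belongs to the named declaration authority, not a script.

```python
from dataclasses import dataclass

@dataclass
class SecurityEvent:
    """A triaged alert from monitoring. Field names are illustrative."""
    description: str
    unauthorized_activity: bool      # actual unauthorized activity, not just an attempt
    cui_potentially_affected: bool   # did or could the activity reach CUI?
    damage_or_disruption: bool       # actual or potential damage or disruption

def meets_incident_criteria(event: SecurityEvent) -> bool:
    """Illustrative threshold: unauthorized activity that results in, or has
    the potential to result in, damage, disclosure, or disruption. The
    declaration authority named in the IRP makes the formal call."""
    return event.unauthorized_activity and (
        event.cui_potentially_affected or event.damage_or_disruption
    )

# Phishing click, account auto-locked before anything was accessed: an alert.
phish = SecurityEvent("Phishing click, account auto-locked", False, False, False)
# Compromised account that actually reached CUI: crosses the threshold.
breach = SecurityEvent("Compromised account accessed CUI share", True, True, False)
assert not meets_incident_criteria(phish)
assert meets_incident_criteria(breach)
```

The point isn’t to automate the decision. It’s that the criteria should be specific enough to write down this plainly, so whoever holds the declaration authority applies the same test every time.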

Worth noting: ITIL uses the word “incident” for any service disruption, including a user submitting a trouble ticket. CMMC does not. Make sure your terminology is consistent with how your IRP defines it, not whatever framework someone copied from.

What a realistic SSP definition looks like

Example SSP Language: IR.L2-3.6.1

[Organization Name] maintains an incident response capability covering all systems within the CUI boundary. The Incident Response Plan (IRP) defines procedures for preparation, detection and analysis, containment, eradication, recovery, and post-incident activity. The IRP is reviewed and updated annually or after any significant incident.

Incident response roles are defined in the IRP, including the Incident Response Coordinator ([name/role]), technical responders ([internal IT / MSSP name]), and management notification contacts ([role]). These individuals collectively form the Incident Response Team (IRT). Detection and initial triage are performed by [MSSP name]. Containment decisions and technical response actions are executed by the MSSP SOC team per documented procedures, with notification to the Incident Response Coordinator [per escalation procedure]. Business-level decisions including external reporting and prime contractor notification are handled by [role/FSO].

The IRP is tested at least annually through tabletop exercises involving personnel in defined incident response roles and MSSP technical staff. Exercises are documented with date, participants, scenario, actions taken, and findings. Lessons learned are incorporated into the plan following each exercise or real incident.

All declared security incidents are tracked in [ticketing platform] using a dedicated incident ticket queue with statuses for each phase of the IR lifecycle. Each incident ticket includes the completed IR form, timeline, actions taken, personnel involved, resolution, and lessons learned documentation. The lessons learned review is completed and documented in the ticket before it is closed.

The organization distinguishes between security alerts (events handled through standard monitoring and triage) and declared security incidents (unauthorized activity resulting in or potentially resulting in damage, disclosure, or disruption). The criteria for incident declaration and the authority to declare are defined in the IRP.

A few things to notice:

It names real roles and real people. “The Incident Response Coordinator is the IT Director” tells the assessor who to talk to. “Incident response is managed by the security team” in a company with no formal security team tells them nothing. Your IRT members are defined by name and role.

It commits to testing on a schedule. “At least annually through tabletop exercises.” This is a commitment the assessor will verify. They’ll ask when the last one was. If your SSP says annual and the last exercise was two years ago, that’s a gap.

It describes the incident documentation lifecycle. Incidents live in a dedicated ticket queue, not scattered across email threads or buried in a shared drive. The IR form, timeline, and lessons learned all go in the ticket. The ticket doesn’t get closed until the lessons learned review is done (a sketch of that close-out gate follows this list).

It defines what an incident actually is. This is the part most SSPs skip entirely, and it matters. The assessor shouldn’t have to guess what your organization considers an incident versus an alert.
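
Here’s a minimal sketch of the close-out gate described above, assuming a generic ticketing model. The Phase names and the IncidentTicket structure are hypothetical; map them to whatever statuses your actual platform supports. The one real requirement is that closing is impossible until the lessons learned review is documented.

```python
from dataclasses import dataclass, field
from enum import Enum

class Phase(Enum):
    """Illustrative ticket statuses, one per IR lifecycle phase."""
    DETECTION = "detection"
    ANALYSIS = "analysis"
    CONTAINMENT = "containment"
    ERADICATION = "eradication"
    RECOVERY = "recovery"
    LESSONS_LEARNED = "lessons_learned"
    CLOSED = "closed"

@dataclass
class IncidentTicket:
    incident_id: str
    phase: Phase = Phase.DETECTION
    timeline: list[str] = field(default_factory=list)  # timestamped actions taken
    lessons_learned: str = ""                          # completed review text

    def close(self) -> None:
        """Refuse to close until the lessons learned review is documented."""
        if not self.lessons_learned.strip():
            raise ValueError("Document the lessons learned review before closing.")
        self.phase = Phase.CLOSED
```

Most ticketing platforms can enforce the same rule with a required field or a workflow transition condition, which is the more realistic implementation.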

How to present your evidence

When the assessor gets to IR.L2-3.6.1, have these ready:

Your incident response plan. Current, reviewed within the last year, with real names and real procedures. Make sure the person presenting it has actually read it recently and can speak to it without flipping through pages looking for answers. The plan should include your IRT roster, your incident declaration criteria, your escalation procedures, your communication plan, and your reporting requirements.

Tabletop exercise documentation. At minimum: the date, who participated, the scenario, what decisions were made, and what was learned. One well-documented exercise is worth more than three that were run and forgotten. If you ran a scenario where the team struggled with a particular decision, document that and show what you changed in the plan as a result. Assessors love seeing improvement over time.

Your IR form template. Even if you haven’t had a declared incident, having a structured form ready to go shows the assessor that you’ve thought about how documentation would happen. It should have sections for incident details, timeline, classification, containment actions, personnel involved, resolution, and lessons learned (a sketch follows this list). This is a small thing that makes a big impression.

Incident tickets, if you have them. If you’ve had declared incidents, show the full ticket lifecycle. Detection through triage through containment through resolution through lessons learned, all in one place in your ticketing system. Redact sensitive details if needed, but show the process and show that the ticket wasn’t closed until everything was documented.

People who can talk. This is the most important evidence for IR and it’s not a document. The people listed in your plan need to be available and prepared. They need to answer from experience or preparation, not from reading the plan in real time.
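
As a sketch of the form structure named above: the section list mirrors the sections described earlier, and the generator itself is a hypothetical convenience. A Word document or a ticketing-system template with the same sections works just as well.

```python
# Section names mirror the IR form described above; adjust to your IRP.
IR_FORM_SECTIONS = [
    "Incident Details",
    "Detection (source, date/time, detected by)",
    "Classification (per IRP criteria)",
    "Containment Actions",
    "Timeline of Events",
    "Personnel Involved",
    "Resolution",
    "Lessons Learned",
]

def blank_ir_form(incident_id: str) -> str:
    """Render an empty, consistently structured IR form for a new incident."""
    header = f"# Incident Report: {incident_id}\n"
    body = "\n".join(f"## {section}\n\n_TBD_\n" for section in IR_FORM_SECTIONS)
    return header + "\n" + body

print(blank_ir_form("IR-2025-001"))  # hypothetical ID scheme
```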

Assessment room tips

Keep answers short. Show the evidence; don't describe it. Let the assessor drive. For more on how to present in the assessment room, see How to Present Evidence in the Assessment Room.

Assessor: "Walk me through your incident response process."
"Yes. [Pull up the IRP] Here's our plan. Our MSSP handles monitoring and triage. When something crosses the line from alert to incident, their SOC manager declares it and notifies me. They're empowered to contain immediately, and I own the business decisions from there. [Pull up the incident ticket board] Here's where everything gets tracked and documented."
Assessor: "When was the last time you tested this plan?"
"[Date]. Our MSSP designed and ran it. [Pull up the exercise documentation] Here's the scenario, who participated, what we found, and what we changed in the plan afterward."
Assessor: "Have you had any actual security incidents?"
If yes: "Yes, [number] in the last [period]. [Pull up the ticket board] Here's one, you can see the full lifecycle from detection through lessons learned."
If no: "No declared incidents. Here's what we'd do if one occurred. [Walk through the process briefly, pull up the IR form template and tabletop records]"

Not having had a declared incident is common and assessors are fine with it. What they won’t accept is no incidents AND no exercises AND no documentation framework. That combination means you have a plan nobody has ever tested and no mechanism for using it. That’s not a capability.

Common failures

What gets flagged

A plan that's never been tested. This is the single most common IR failure. The plan exists, it's dated, it has names in it. But nobody has ever run a tabletop, simulated an incident, or walked through the procedures. The assessor will ask when it was last tested, and "we plan to do that soon" is a finding.

People in defined roles who can't describe their responsibilities. The plan says Sarah is the Incident Response Coordinator. The assessor asks Sarah what she'd do if a ransomware event was detected at 3 PM on a Tuesday. Sarah looks at the plan. That's a bad sign. The people in your IR roles need to be able to describe the process from memory. Not perfectly, but confidently. If they can't, the assessor concludes that the roles are names on paper, not real responsibilities.

No distinction between alerts and incidents. If your plan treats every SIEM alert as an incident, or if nobody can explain the difference, the assessor will question whether you actually understand your own process. Your plan should define what constitutes an incident, who declares it, and how alerts get escalated when they cross that line.

No incident documentation system. Incidents that were handled informally, fixed, and never written down. Or incidents documented in scattered emails and shared drive files with no consistent structure. You should have a dedicated area in your ticketing system for incident tickets, with statuses for each phase of the IR lifecycle. The IR form, timeline, and lessons learned should all live in the ticket.

No post-incident improvement. You have incident records. Good. The assessor asks what changed as a result. Nothing? That undermines the entire incident handling capability. Every incident should produce at least one lesson. A procedure update, a configuration change, a training gap identified. The lessons learned review should be documented in the ticket before it gets closed. If your incidents never result in improvements, the assessor questions whether the review process is real.

What makes assessors move on satisfied

A plan that reads like it was written for your organization, with named IRT members and real procedures. A clear definition of what constitutes an incident versus an alert. People in defined roles who can explain what they'd do without looking at the document. At least one documented tabletop exercise with real findings and real changes to the plan. A dedicated incident ticket system with an IR form template ready to go. And when all of those things tell the same consistent story, the assessor checks the box and moves on.

If you use an MSP/MSSP

Incident response is probably the practice where the MSP/MSSP relationship matters most. It’s also where a good MSSP really earns their keep.

Your MSSP should be in the assessment room for this practice. They’re the ones running detection, triaging alerts, making containment calls, and executing the technical response. They should be able to explain all of that directly to the assessor.

Your MSSP should also have their own IR plan. Yours will have a lot in common with it, and that’s fine. What matters is that both plans exist, both reflect reality, and the handoffs between them are clear.

Here’s what the split usually looks like:

What’s typically on you (the contractor):

  • Owning your incident response plan (it’s your plan, but your MSSP likely helped build it)
  • Business-level decisions during an incident (notify the prime? report to regulators? shut down operations?)
  • Reporting requirements, including DFARS, prime contract obligations, and regulatory notifications. These are on you. Your MSSP will gather and provide whatever information you need, but they can’t advise you on what to report or when. Only your prime or the federal customer can tell you that. This typically falls on your FSO.
  • Participating in tabletop exercises
  • Participating in post-incident reviews

What’s typically on the MSSP:

  • Monitoring and detection (SIEM, EDR, alert triage)
  • Initial triage and the alert-vs-incident determination
  • Technical containment and eradication. A good MSSP is empowered to make containment calls quickly: isolating a compromised machine, disabling a breached account, blocking malicious traffic. These decisions happen fast, and they should. A quick SOC analyst decision can save thousands of dollars. The MSSP should have documented procedures (SOPs or playbooks) for how and when they make these calls, how you get notified, and how the decision gets documented (see the sketch after this list)
  • Forensic analysis if needed
  • Providing incident documentation and reports
  • Designing and facilitating tabletop exercises with the contractor (at least annually)
  • A SOC manager who can explain the MSSP’s internal processes, including how the SOC team runs its own exercises and training
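
For illustration, here’s a minimal sketch of containment playbook entries expressed as structured data. Every trigger, action, and notification target below is a hypothetical example; real playbooks live in the MSSP’s SOPs and should match the escalation procedure in your IRP.

```python
# Hypothetical playbook entries: what triggers a containment action, what the
# SOC is pre-authorized to do, who gets told, and where it gets documented.
CONTAINMENT_PLAYBOOKS = [
    {
        "trigger": "EDR confirms active malware execution on an endpoint",
        "action": "Isolate the host from the network via EDR",
        "pre_authorized": True,  # SOC analyst may act before notifying
        "notify": "Incident Response Coordinator, per the IRP's escalation timeline",
        "document_in": "Incident ticket timeline",
    },
    {
        "trigger": "Account confirmed compromised (e.g., impossible travel plus MFA bypass)",
        "action": "Disable the account and revoke active sessions",
        "pre_authorized": True,
        "notify": "Incident Response Coordinator and the account owner's manager",
        "document_in": "Incident ticket timeline",
    },
]
```

The structure matters more than the format: each entry pairs a fast technical action with a notification and a documentation requirement, which is exactly the handoff the assessor will probe.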

If your MSSP also provides compliance program management, this gets even smoother. They’ll already know what the assessor is going to ask, how to demonstrate the capability, and what evidence to bring. You shouldn’t have to spend hours prepping your MSSP on what to say. If you do, that’s a sign the relationship isn’t working the way it should.

A note on tabletop exercises with your MSSP

A good MSSP designs and runs tabletop exercises with their clients, not for them. Both sides at the table, working through a scenario together, testing the handoffs and communication channels that would matter during a real incident. This should happen at least annually as part of the compliance program. Internally, a mature MSSP SOC team also runs their own technical exercises on a more frequent basis, quarterly or more, with real tool walkthroughs and scenario simulations. If your MSSP does both, that's strong evidence. If your MSSP offers tabletop facilitation, take them up on it. If they don't offer it, ask. A joint tabletop that includes both the contractor's decision-makers and the MSSP's responders is the strongest evidence an assessor can see for IR.


This page covers IR.L2-3.6.1 from NIST SP 800-171 Rev 2 (3.6.1). The guidance here is based on experience in real CMMC assessments and is intended to help you prepare. It is not legal or compliance advice. Your organization’s situation is unique, and you should work with qualified professionals for formal assessment preparation.

New practice breakdowns and assessment tips every week. Follow on Substack to stay current as the November 2026 deadline gets closer.