Multi-Store Incident Response: Run a 60-Minute Tabletop That Actually Prepares Your Rooftops

If a DMS outage slammed your group at 10 a.m., could your stores keep selling and funding? If a Controller got a fake wire request at 2:37 p.m., would dual control stop it? If ransomware locked parts receipts on a Saturday, who declares the incident and who talks to the insurer?

This post gives you a dealership-native tabletop you can run in one hour with your real team. No cyber theater. Just a clear script that hits DMS outage, business email compromise, and ransomware across multiple rooftops.

Note: Practical guidance, not legal advice. Confirm breach notification and legal obligations with counsel and your insurer.

Who should be in the room

Required: Dealer Principal or designee, Controller, QI (Qualified Individual)/Compliance lead, IT or MDR partner, GM (one per region if you have many rooftops), HR, and Communications/Marketing if you have it. Optional: Service Director and F&I Director for realism.

Time commitment: 60 minutes. Laptops stay closed and phones face down unless they're part of the exercise.

What you’ll walk away with

A tested call tree and decision log, three refined runbooks, a gaps list with owners and due dates, and evidence you can show an OEM, insurer, or auditor.
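
If your IT/MDR partner keeps the call tree in a file under version control instead of only a binder, it stays testable between tabletops. Here's a minimal sketch, assuming a simple role-to-backup chain; every name and number below is a placeholder, not a real contact.

```python
# Minimal call-tree sketch: every name and number is a placeholder.
# Keeping this in version control lets IT diff changes and test escalation order.
CALL_TREE = {
    "IT/MDR": {"name": "MDR Hotline", "cell": "555-0103", "backup": "QI"},
    "QI": {"name": "Jane Doe", "cell": "555-0101", "backup": "Controller"},
    "Controller": {"name": "John Roe", "cell": "555-0102", "backup": "Dealer Principal"},
    "Dealer Principal": {"name": "Pat Smith", "cell": "555-0104", "backup": None},
}

def escalation_path(start_role: str) -> list[str]:
    """Walk the backup chain from a starting role until it dead-ends."""
    path, role = [], start_role
    while role is not None and role not in path:  # guard against circular chains
        path.append(role)
        role = CALL_TREE[role]["backup"]
    return path

print(escalation_path("IT/MDR"))
# ['IT/MDR', 'QI', 'Controller', 'Dealer Principal']
```

A five-line check like this, run quarterly or in CI, catches the circular escalation and the orphaned role before a real incident does.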

How to run the 60-minute tabletop

Minute 0–5: Framing
Set the ground rules. No blame. Speak in plain English. The QI facilitates. Designate a scribe to capture decisions and timestamps.
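
If your scribe would rather append to a file than a legal pad, a minimal sketch like the one below captures the same decisions and timestamps. The field names and example values are illustrative, not a standard.

```python
import csv
from datetime import datetime

# Minimal decision-log sketch for the scribe: one row per decision, timestamped on entry.
FIELDS = ["timestamp", "scenario", "decision", "owner", "due_date"]

def log_decision(path, scenario, decision, owner, due_date):
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # new file: write the header once
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now().isoformat(timespec="seconds"),
            "scenario": scenario,
            "decision": decision,
            "owner": owner,
            "due_date": due_date,
        })

log_decision("tabletop_log.csv", "DMS outage",
             "Approved paper delivery checklist", "Controller", "2025-08-01")
```

The resulting CSV doubles as the decision log you'll show an auditor, so file it with the incident ticket.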

Minute 5–20: Scenario 1 — DMS outage
Prompt: “It’s Tuesday, 10:05 a.m. Your DMS is unreachable across all rooftops. Desking can’t pull rates, F&I can’t book deals, Accounting can’t post. Customer delivery queue is backing up.”

Decisions to practice:
Can we desk and deliver on paper? What lender documents and after-the-fact posting steps are approved? Who communicates to rooftops, and how often? How do we protect against duplicate data entry errors when systems return?

Evidence to produce:
Paper deal checklist, lender guidance reference, posting-after-outage steps, contact template to customers waiting.

Minute 20–35: Scenario 2 — Business email compromise (wire fraud attempt)
Prompt: “At 2:37 p.m., the Controller receives an email from the ‘Dealer Principal’ requesting an urgent $187,420 wire for a facility upgrade. The reply-to address is spoofed. The AP clerk is ready to push it.”

Decisions to practice:
Who stops the transaction? What is the exact callback script and which number is used? How is dual control enforced? What do we do if the wire already left? What evidence do we save?

Evidence to produce:
Callback log, dual-control signoff, bank contact list with known-good numbers, incident ticket with screenshots of headers and rules.

Minute 35–55: Scenario 3 — Ransomware in Fixed Ops
Prompt: “Saturday 8:12 a.m. Advisors report all parts invoices opening as .locked files. One store’s file server is impacted; EDR shows lateral movement attempts.”

Decisions to practice:
Who declares an incident? Who isolates affected devices, and how? Do we cut network to the store or segment? What’s our restore point and time goal? Who informs customers if ROs are delayed? When do we call the insurer and outside counsel?

Evidence to produce:
Containment checklist, backup restore test plan, customer communication template, insurer policy numbers and hotlines.

Minute 55–60: Debrief
List three wins, three gaps, owners, and due dates. Book a 30-minute follow-up in two weeks to review fixes.

Materials you need before you start

Printed call tree with personal cells for QI, IT/MDR, Controller, Dealer Principal, GMs.
Bank and insurer hotlines from the policy binder.
Paper deal packet and “DMS down” SOP.
Wire callback script and vendor-master phone numbers.
Backup/restore cheat sheet with RTO/RPO targets (a quick freshness check is sketched just after this list).
A simple scorecard sheet for the scribe.
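
On the backup cheat sheet, the number that bites groups most often is the age of the last successful test restore, which is exactly what the ransomware inject below pokes at. Here's a minimal freshness check, assuming you can read that date off your backup console; the 14-day threshold is a placeholder, so set it to your own policy.

```python
from datetime import date

# Placeholder policy: how stale a successful test restore is allowed to be.
RESTORE_TEST_MAX_DAYS = 14

def restore_test_overdue(last_test: date, today: date) -> bool:
    """True if the last successful test restore is older than policy allows."""
    return (today - last_test).days > RESTORE_TEST_MAX_DAYS

# Mirrors the ransomware inject below: a 19-day-old test restore fails the check.
print(restore_test_overdue(date(2025, 6, 1), date(2025, 6, 20)))  # True (19 > 14)
```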

Quick, dealership-native injects to keep it real

Use one inject per scenario if discussion stalls.

DMS outage injects
A lender rep texts: “Don’t deliver without the system. Funding risk.”
Sales Manager asks: “Can we print from the CRM and sign wet?”
Accounting asks: “How do we batch back-posting when DMS returns?”

BEC injects
AP says: “The vendor changed bank info last week; email says use this new routing number.”
Outlook shows the Controller’s mailbox has an auto-forward rule to a personal Gmail address, created months ago.
The “Dealer Principal” replies, “This is time-sensitive, do it now.”

Ransomware injects
Immutable backups pass a health check, but the last successful test restore is 19 days old.
EDR wants to isolate 17 machines, including two F&I PCs about to deliver cars.
A local news stringer calls the GM asking for comment.

Roles and what “good” sounds like

QI/Facilitator
“Declare severity two. IT isolates affected endpoints. Communications sends holding statement at :30, updates hourly. Evidence lives in the incident ticket.”

Controller/AP
“Pause all outgoing wires. Dual control on any exceptions. I’m calling the vendor using the number in the master, not the email thread. Logging time, person, and confirmation.”

IT/MDR
“Blocking command-and-control, isolating hosts. Snapshots taken. Beginning log preservation. Next update in 10 minutes. No restores until forensics says clean.”

GM
“BDC is rescheduling deliveries with the provided script. We’re using the paper delivery checklist. I’ll report customer impact every 30 minutes.”

Service Director
“We’ll hand-write ROs and take photos of parts bins for reconciliation later. No USBs, no workarounds outside the SOP.”

Copy-ready scripts you can paste into your runbooks

Customer update (DMS outage)
“Hi [Customer Name], our system is temporarily down across multiple stores. Your delivery is still on for today. We’ll complete paperwork by hand now and upload to our system once it’s back. Your pricing and protections are locked. Thanks for your patience.”

Wire callback script (AP to vendor)
“This is [Name] with [Dealership Group]. I’m calling using the phone number in our vendor master to verify bank details for invoice #[###]. Please confirm the last four digits of the account and routing number. We cannot accept changes sent by email.”

Internal holding statement (ransomware)
“We are investigating a systems issue at [Store]. Sales and service remain open using contingency procedures. There is no evidence of unauthorized access to customer information at this time. Updates will be provided hourly by the QI.”

Media response (if contacted)
“We’re addressing a technical issue and continuing to serve customers. We’ll share more when we have confirmed facts. Please direct inquiries to [Contact], [Title], [Phone/Email].”

Score your tabletop in five categories

Use 0–2 points each, maximum 10. Keep it simple and visible.

Declaration speed
Did we identify the incident type and severity within five minutes?

Containment discipline
Did IT/MDR isolate systems with minimal blast radius and no risky shortcuts?

Communication clarity
Were stores, customers, and lenders updated with the right script and cadence?

Evidence and escalation
Did we capture screenshots, logs, and artifacts, and contact insurer/counsel per policy?

Business continuity
Could Sales, F&I, and Fixed Ops continue with paper processes and clear handoffs?

9–10: Operationally ready. 7–8: Ready with minor gaps. 5–6: Needs targeted fixes. ≤4: Redo in 30 days after remediation.
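
The arithmetic is simple enough to do on paper, but if the scribe is already keeping the log in a file, a minimal sketch like this turns the five scores into the readiness band above:

```python
# Scorecard sketch: five categories, 0-2 points each, 10 maximum.
CATEGORIES = [
    "Declaration speed",
    "Containment discipline",
    "Communication clarity",
    "Evidence and escalation",
    "Business continuity",
]

def readiness(scores: dict) -> str:
    assert set(scores) == set(CATEGORIES), "score every category exactly once"
    assert all(0 <= s <= 2 for s in scores.values()), "each category is 0-2 points"
    total = sum(scores.values())
    if total >= 9:
        band = "Operationally ready"
    elif total >= 7:
        band = "Ready with minor gaps"
    elif total >= 5:
        band = "Needs targeted fixes"
    else:
        band = "Redo in 30 days after remediation"
    return f"{total}/10: {band}"

example = dict.fromkeys(CATEGORIES, 2)
example["Evidence and escalation"] = 1
example["Business continuity"] = 1
print(readiness(example))  # 8/10: Ready with minor gaps
```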

What to document for auditors, OEMs, and insurers

Agenda with attendees and timestamps.
Decision log with owners and deadlines.
Copies of scripts used and any edits made.
Screenshots of backup console health and EDR isolation events (staged is fine for the exercise).
A one-page after-action summary signed by the QI and Controller.

Common multi-rooftop pitfalls and how to avoid them

Over-centralized decisions
Fix: Pre-authorize GMs to use paper delivery and RO procedures under defined conditions.

Shadow access
Fix: Audit the last six months of offboarding and kill stale accounts before game day.

Vendor bottlenecks
Fix: Keep contract escalation contacts handy. Test their response during the tabletop.

Weekend blind spot
Fix: Add a Saturday inject by default. Validate after-hours contact paths.

Your 2-week follow-up plan

Within 48 hours: Distribute minutes, decisions, and the gaps list.
Within 7 days: Close quick wins like call tree updates and revised scripts.
Within 14 days: Validate fixes with a 20-minute micro-drill at one store.
In 30–60 days: Re-run the full tabletop, rotating in different rooftops.

The business case to share with the Dealer Principal

A one-hour tabletop reduces recon delays, prevents redraws, protects cash from fraudulent wires, and shortens downtime in the lane. It also produces evidence your insurer and OEMs respect. That’s operational ROI, not just IT hygiene.

Need Additional Help?

Want a Word pack with the facilitator script, decision log template, and printable scorecards? Insert your group name and I’ll send a clean, branded version you can run this week.