48 Hours of Production Downtime, One Lost Client, a Damaged Reputation.

It Was All Preventable.
13 April 2026
Louis Collard

Tuesday morning, 6:43 AM

The packaging line operator notices first. The supervision screens are black. He restarts the workstation, but nothing happens. He calls the colleague next to him: same thing.
Within minutes, the entire line is down. Then the internal network. Then the office workstations. On every screen that still turns on, the same message: your files have been encrypted. An email address and a 72-hour deadline.

The CEO gets the call at 7:15 AM while he's still in his car.
What he doesn't know yet is that the next 72 hours won't be the hardest part. The 30 days that follow will be.

What Happened in the Hours That Followed

The first decision is instinctive: cut everything that can still be cut. Network, remote access, servers. The production environment — PLCs, sensors, supervision systems — was connected to the office network without proper segmentation. Once inside, the attackers had access to both.

Complete shutdown. Zero production.

By 10:00 AM, the IT Manager understands that notification is required. What many manufacturing SMEs don't know: under NIS2, companies operating in sectors deemed critical — including parts of the manufacturing industry — are legally required to send the relevant authority an early warning within 24 hours of becoming aware of a significant incident, followed by a full incident notification within 72 hours. Failing to do so triggers additional penalties at the worst possible moment.

The call to the main client comes next, since the delivery scheduled for Thursday won't be going out. The client — who represents 30% of annual revenue — asks the direct question: "How long?" No one can answer with any certainty.

Mid-afternoon brings the next piece of bad news: the last complete, usable backups are three days old. Three days of production data, orders, and quality traceability lost or compromised.

An external specialist firm is contacted on an emergency basis. Their availability comes at a price: three times the standard rate.

The Damage Report, 30 Days Later

The numbers settle. They're not catastrophic in any dramatic sense — the company didn't go under. But the damage is real, and it lingers.

The direct cost of the incident — external responders, system restoration, internal hours, replaced equipment — lands around €90,000. According to the Hiscox Cyber Readiness Report 2024, the median cost of a cyber incident for an SME sits between €40,000 and €150,000, excluding commercial impact. For manufacturing companies, where production environments are directly exposed, that figure tends toward the upper end of the range.

The main client puts the contract "under review." Two months later, they bring in a second supplier. The relationship isn't severed, but it will never be exclusive again.

In the industry, word gets around. Buyers talk, especially in tight-knit sectors like metalworking or food processing. Another long-standing client — a six-year relationship — uses the moment to send a revised supplier qualification questionnaire: information security policy, tested business continuity plan, up-to-date risk register. Initial response deadline: three weeks. Without structured documentation in place, the response took six weeks and required an on-site audit.

The IT Manager is left with an action list. Most of it he had already drawn up eighteen months earlier.

What Was Missing — And What No One Had Ever Formalised

This isn't a story about negligence, a missing budget or a lack of expertise.

It's a story about visibility.

Former contractor access rights had never been revoked — not through deliberate oversight, but because there was no automatic process to flag and remove them. The backup policy was "in place" in the sense that someone had set it up two years earlier, but no one was regularly verifying it actually worked. The IT asset register existed in a shared spreadsheet that three different people had edited at different times, with no single version holding authority.
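To make that first gap concrete: flagging stale accounts is the kind of check that can run automatically on a schedule. A minimal sketch of such a check, where the account fields, names, and the 90-day idle threshold are illustrative assumptions rather than anything described in this story:

```python
from datetime import date

def flag_stale_accounts(accounts, today, max_idle_days=90):
    """Return account names that need review: the underlying contract
    has ended, or the account has been idle longer than max_idle_days."""
    stale = []
    for acc in accounts:
        contract_over = acc["contract_end"] is not None and acc["contract_end"] < today
        idle_too_long = (today - acc["last_login"]).days > max_idle_days
        if contract_over or idle_too_long:
            stale.append(acc["name"])
    return stale

# Illustrative data: a former contractor whose access was never revoked.
accounts = [
    {"name": "contractor.old", "contract_end": date(2023, 6, 30),
     "last_login": date(2023, 6, 28)},
    {"name": "operator.line1", "contract_end": None,
     "last_login": date(2024, 3, 1)},
]
print(flag_stale_accounts(accounts, today=date(2024, 3, 4)))
# → ['contractor.old']
```

The point is not the ten lines of code; it is that someone owns the check, it runs on a schedule, and its output lands somewhere actionable.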

And the incident response plan? It was there as well, in a Word document on the server. Never tested. Never updated after the last organisational restructure. And critically, no one had been formally designated to keep it alive.

That last point matters: NIS2 explicitly places the responsibility for cyber governance on the shoulders of company leadership, not just the IT Manager. Management bodies are required to supervise, approve, and be trained on risk management measures. We cover this in detail in our article on director liability under NIS2 (see linked article).

These blind spots weren't hidden; they were simply invisible, because no system existed to surface them, prioritise them, and track them over time. That is precisely the function of an embedded GRC — Governance, Risk & Compliance — approach. Not an annual audit. A living system.

The Difference Between Reacting and Having Prepared

A few hundred kilometres away, an SME in the same sector faced a similar attack the same year. The breach was detected in under two hours. Contained in six. Production systems were untouched — the OT network was properly segmented. The client was notified proactively with a clear message and a confirmed recovery timeline. The regulatory notification was filed within the deadline.

The difference had nothing to do with team size or security budget. It came down to three concrete things this company had put in place before any incident occurred:

Continuous visibility over governance

A centralised dashboard gave both the IT Manager and the executive team a real-time view of their security posture:

  • which assets were exposed
  • which controls were active
  • which gaps remained open

Not once a year at an audit. Continuously.

Real preparation for operational disruption

The business continuity plan wasn't just a document; it was a process, tested twice a year, with clear roles, documented failover procedures, and regularly verified backups. When production stopped, everyone knew exactly what to do within the first hour.

A structured response capability for attacks 

Escalation steps, emergency contacts, the decisions to make in the first 24 hours. All of it had been defined in advance, before any crisis. What took the neighbouring company hours of improvisation was executed here in under two hours.

What made all of this possible was a GRC solution integrated into their Odoo ERP, which made these three dimensions visible, actionable, and traceable without requiring dedicated full-time resources to maintain them.

When the incident happened, the response didn't need to be invented. It was executed.

What This Means for Your Own Situation

This scenario is not uncommon. It's just rarely talked about, because the companies it happens to don't publicise it.

What's more common is the belief that "it only happens to others" or that "we're too small to be targeted." The data says otherwise: industrial SMEs are an increasingly attractive target precisely because they're perceived as less protected than large corporations, yet connected enough to critical supply chains to be worth compromising.

The question is not whether your company will face an incident one day. It's whether, when that day comes, you'll know exactly what to do and whether you'll be able to prove it.

At Prism Technology, we've built a native GRC solution on Odoo, designed for manufacturing SMEs who want to make their risks visible and their response capabilities operational without adding another layer of complexity on top of their existing ERP. Available as a SaaS solution, it plugs directly into your existing Odoo environment and gives you the governance visibility you've been missing — within weeks, not months.

Want to see what it looks like in your environment? 

Book a free demo at prismtech.be.
