In February 2026, Dutch telecom provider Odido disclosed a cyber incident affecting personal data tied to more than six million accounts. (Link to Reuters). Odido is one of the country's major telecom providers, serving millions. The incident caught my attention both for its scale and because of the moment it represents: once data is already stolen, you're no longer preventing impact, you're managing consequences…

This website version expands the original LinkedIn newsletter with additional series context, internal navigation and source direction. The core lesson remains the same: access is not the same as legitimacy. When identity trust becomes uncertain, resilience depends on whether the organisation can reduce exposure before uncertainty turns into wider consequence.

The Odido story isn't interesting because the attackers were "clever". It's interesting because it shows how quickly a familiar pattern becomes large when warning signals exist but decision pathways lag. (Link to IO+)

There's a moment in many incidents where nothing looks "down"… and yet control is already slipping. Not because systems failed. Because trust did.

What makes the reporting uncomfortable is precisely that it isn't exotic. Not a movie-plot zero-day. More like a chain of small, plausible weaknesses that combined into something large: social manipulation of an employee, access into a customer contact system, and permissions that enabled access to a broad set of records.

Why this series starts with trust

The Identity & containment series starts here because many modern incidents do not begin with a dramatic shutdown. They begin with ambiguity. An account appears legitimate. A system remains available. A workflow continues. But the organisation can no longer be fully certain that the identity behind the access should still be trusted.

That is why the first resilience question is not only whether access can be authenticated. It is whether access can be narrowed, challenged, isolated or withdrawn quickly enough when trust changes.

What happened (the facts)

Odido stated that the incident involved personal data from a customer contact system, and that passwords, call details and billing data were not involved. Public reporting describes attackers using social engineering, posing as internal IT/helpdesk to persuade staff to approve access, rather than exploiting a novel software vulnerability.

Several write-ups also highlight an uncomfortable detail: this general method was not hypothetical. Warnings about this kind of approach had been circulating, including in relation to widely used SaaS environments and identity/access patterns.

That last point matters, because it changes the leadership question from "How could anyone predict this?" to something more practical: when warnings exist, the gap is rarely "more tools". It's whether the organisation can reduce the time between knowing and acting: turning a credible signal into fast, governed containment that limits further compromise and restores control.
A practical point: this is also where capability can genuinely change outcomes, not by creating more alerts, but by creating control. When you can reliably see lateral movement and detect data-exfiltration behaviour early enough, you can trigger containment actions that limit further spread and additional exposure. The difference isn't "visibility for reporting". It's visibility that enables a governed move: reduce privileged pathways, isolate risky segments, and keep essential operations running while trust is rebuilt.

The decision boundary that mattered

Most organisations will read this and think: "So… train people better." Training matters. But notice what that framing does. It quietly places the centre of gravity on the individual who took the call.

A more useful lesson is structural: a secure environment shouldn't rely on a single person being impossible to manipulate, and the way access is designed and governed matters.

This is the boundary leaders tend to underestimate: we treat identity controls as "security". But in practice, identity controls are governance, because they define who can act, what they can reach, and how quickly we can reduce risk without freezing operations.

In other words: the incident doesn't start when data leaves. It starts when leaders realise they no longer know what "legitimate access" means. And once legitimacy is uncertain, every response step becomes governance:

How broadly can we restrict access without breaking the business?
Who has the authority to do it fast?
What do we keep running while we regain confidence?

This is the part that doesn't show up in many incident decks: detection buys awareness. Only pre-decided containment buys time.
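To make "pre-decided containment" a little more concrete, here is a minimal sketch of the idea in code. It is not a reference to any specific product or to Odido's environment: the signal names, action names and the ContainmentPlaybook structure are assumptions invented for illustration. The property it tries to show is that the mapping from a credible signal to a small, reversible, role-owned first move is agreed before the incident, not negotiated during it.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional


@dataclass
class ContainmentAction:
    """A pre-approved first move: small, reversible, and owned by a named role."""
    name: str
    reversible: bool
    authorised_role: str
    execute: Callable[[], None]


# Hypothetical playbook: each credible signal maps to a first move agreed in
# advance, so the first hour is governed rather than negotiated.
PLAYBOOK: Dict[str, ContainmentAction] = {
    "suspicious_helpdesk_approval": ContainmentAction(
        name="step_up_authentication_for_affected_accounts",
        reversible=True,
        authorised_role="identity_ops_lead",
        execute=lambda: print("Require re-authentication / MFA for flagged accounts"),
    ),
    "possible_data_exfiltration": ContainmentAction(
        name="restrict_bulk_export_from_customer_contact_system",
        reversible=True,
        authorised_role="incident_commander",
        execute=lambda: print("Disable bulk export; keep single-record lookups working"),
    ),
}


def first_move(signal: str) -> Optional[ContainmentAction]:
    """Return the pre-decided first move for a credible signal, if one exists."""
    action = PLAYBOOK.get(signal)
    if action and not action.reversible:
        raise ValueError("First moves should stay reversible while evidence is incomplete")
    return action


if __name__ == "__main__":
    action = first_move("suspicious_helpdesk_approval")
    if action:
        print(f"{action.authorised_role} may trigger: {action.name}")
        action.execute()
```

The detail matters less than the design choice it illustrates: the scope, the authority and the reversibility of the first move are decided before the signal arrives.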
One practical pressure-test suggestion

Don't start by writing a new policy. Run a 60-minute identity-trust exercise with three timed phases:

T+0 to T+10: credible signal, incomplete evidence. What do we do immediately that is reversible? Who authorises it?
T+10 to T+30: suspicion of persistence. How do we invalidate active trust (sessions/tokens), and what breaks operationally? (A minimal sketch of what this invalidation step could look like follows after this list.)
T+30 to T+60: stabilisation. How do we progressively restore controlled access without re-opening the same pathways?

The value isn't the tabletop discussion. The value is discovering whether your organisation can execute the first move without debate, confusion, or operational shock.
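As a companion to the exercise, here is a minimal sketch of what the second phase, invalidating active trust, could look like in code. The HypotheticalIdpClient below is an assumption invented for illustration, not a real identity-provider API; real environments would use their own IdP or session-store calls. The operational question the exercise surfaces is which of these calls your organisation can actually make quickly, and what breaks when it does.

```python
class HypotheticalIdpClient:
    """Stand-in for a real identity-provider API; the method names are
    illustrative assumptions, not a real library."""

    def revoke_sessions(self, account_id: str) -> int:
        print(f"[idp] revoking active sessions for {account_id}")
        return 3  # pretend three sessions were cut

    def revoke_refresh_tokens(self, account_id: str) -> int:
        print(f"[idp] revoking refresh tokens for {account_id}")
        return 1

    def require_mfa_reset(self, account_id: str) -> None:
        print(f"[idp] forcing MFA re-enrolment for {account_id}")


def invalidate_active_trust(idp, suspect_accounts):
    """Phase T+10 to T+30 of the exercise: cut active sessions and long-lived
    tokens for suspect accounts before deciding what to restore."""
    summary = {"accounts": len(suspect_accounts), "sessions": 0, "tokens": 0}
    for account in suspect_accounts:
        summary["sessions"] += idp.revoke_sessions(account)
        summary["tokens"] += idp.revoke_refresh_tokens(account)
        idp.require_mfa_reset(account)
    return summary


if __name__ == "__main__":
    result = invalidate_active_trust(HypotheticalIdpClient(), ["helpdesk-042", "crm-svc-01"])
    print(result)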
A closing thought

So, I'll leave you with a question I find more useful than "How do we prevent every breach?": if identity trust broke tomorrow, would your first hour be governed… or negotiated?

How this connects to newsletter issue #2

This first issue focused on identity trust inside the organisation: the point where access may still work, but legitimacy becomes uncertain. The next issue moves the same question outward. What happens when the uncertain trust path is not an internal user or account, but a critical vendor, supplier route or platform dependency the organisation cannot quickly exit? That is where access control becomes dependency control, and where containment has to include the systems and relationships the organisation relies on to keep operating.

Resources for CIOs/CISOs to make this practical

Identity under pressure — 7 board questions (Trust vs Authority)
Three pressure-test briefings: Pressure test for leadership control / First-hour containment decisions / Why pressure testing matters
I can also share how I pressure-test this in practice through a free resilience assessment in your own sandbox environment, with your existing security stack enabled.

If any of these would be useful, feel free to contact me: sgemert@s10group.com

Where to go next

Continue the series: When the vendor is non-negotiable, what can you still control?
Move to the live incident moment: The First Hour: who is allowed to act?
SOURCES AND FURTHER READING

This newsletter draws on public reporting and company communication about the Odido cyber incident, including the scale of exposed customer data, the reported attack method, earlier warnings about the technique used, and Odido's own statement on what data was and was not involved.

1. Reuters, "Dutch telecom Odido hacked, 6 million accounts affected". Reporting on the scale of the incident, the categories of exposed personal data, Odido's response, and the affected customer contact system.
2. IO+, "Lessons from the Odido hack: Why devious hackers are no excuse". Analysis of the reported social-engineering method and why the incident raises questions about access design, permissions, warning signals and governed containment.
3. Odido, "Update about cyberattack". Odido's public statement on the cyberattack, the system involved, the affected data categories, the data not affected, and the response taken.
4. NOS, "Toeleverancier Odido waarschuwde voor gebruikte hackmethode" ("Odido supplier warned about the hack method used"). Reporting that an Odido supplier had previously warned about the attack method used, reinforcing the newsletter's focus on the gap between warning and action.
Why identity trust is the real fault line in modern incidents.
The move from system failure to trust failure

A system failure is visible: outage, latency, broken process. A trust failure is quieter: the system is still running, but you can't prove who is driving.

That is why "we detected it" is not the same as "we controlled it". And why "we restored services" is not the same as "we regained governability".

Odido customers could still use services, according to the company's own communication. But the incident still carried weight because identity and access pathways touched sensitive data at scale. (Link to Reuters)

Three board questions to ask next time

These aren't technical questions. They force clarity before the next uncomfortable hour arrives.

1. When identity trust is uncertain, what is our "minimum operational mode"? What must keep running, and what can be intentionally degraded — by design — to protect control?
2. Who can trigger rapid access restriction, and what evidence is "enough" to act? Not "who should be consulted", but who has authority to pull the first lever when the picture is incomplete.
3. Where do we have concentration risk in access — and do we know it? If one compromised pathway can touch "too much", is that an accepted architectural choice… or an accidental one? (A small sketch of how this could be measured follows below.)

If you want to make this practical, the next step isn't more policy, it's a simple pressure-test of the first-hour decisions.
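For board question 3, concentration risk in access is something you can approximate from data most organisations already hold. Below is a minimal sketch, assuming you can export a mapping of identities or roles to the systems and record sets they can reach. The identity names, resources and the 40% threshold are illustrative assumptions, not derived from the Odido reporting.

```python
# Minimal sketch: flag identities whose reach exceeds an agreed threshold.
# The access map and the 40% threshold are illustrative assumptions; the point
# is that "one pathway can touch too much" becomes a measurable, reviewable fact.

access_map = {
    "helpdesk-tool-service-account": {"crm", "contact-history", "id-documents", "billing"},
    "store-advisor-role": {"crm", "contact-history"},
    "network-ops-role": {"provisioning"},
}

all_resources = set().union(*access_map.values())
THRESHOLD = 0.4  # flag anything that can reach more than 40% of in-scope resources

for identity, resources in sorted(access_map.items()):
    reach = len(resources) / len(all_resources)
    flag = "CONCENTRATION RISK" if reach > THRESHOLD else "ok"
    print(f"{identity:35s} reach={reach:.0%}  {flag}")
```

Whether a flagged pathway is an accepted architectural choice or an accidental one then becomes a governance decision made in advance, rather than a discovery made mid-incident.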
A cyber incident can unfold while access still works. The real question is whether identity trust can be contained before uncertainty becomes wider consequence.
Newsletter #1 - 17 Mar 2026 | By Stan van Gemert | S10 Group | Updated 14 May 2026