Employability · 8 min read · 5 February 2026

IRM-Aligned Risk Management Education: Closing the Competency Gap

IRM competency standards demand applied risk judgement, but most university programmes still teach frameworks without the practice environments needed to develop it.

The Institute of Risk Management publishes one of the most detailed competency frameworks in any business discipline. IRM's Professional Standards distinguish between knowledge, skills, and behaviours — and they are unambiguous that all three are required for professional competence. The problem for universities is that most risk management programmes are very good at developing the first, adequate at developing the second, and almost entirely silent on the third. Behaviours under uncertainty cannot be assessed in an examination. They have to be observed in practice.

What the IRM Competency Framework Actually Requires

The IRM framework groups professional competencies into four domains: risk leadership and culture, risk identification and assessment, risk treatment and control, and risk monitoring and review. Each domain includes behavioural indicators — the observable ways in which a competent risk professional demonstrates their capability. Risk leadership and culture, for instance, requires the ability to influence organisational behaviour and challenge assumptions in high-stakes environments. That is not a knowledge outcome. It is a behavioural one, and it requires repeated practice in conditions that carry meaningful consequences.

The gap between what IRM requires and what most programmes deliver is not a secret. The IRM itself has noted that many graduates arrive at entry-level roles able to populate a risk register but unable to facilitate a risk workshop or challenge a flawed risk assessment made by a senior colleague. The competency gap is behavioural, not technical.

Only 34% of risk management hiring managers reported that recent graduates demonstrated the applied judgement and stakeholder communication skills needed for day-one effectiveness in a risk role.

IRM Employer Engagement Survey, 2023

Why Traditional Assessment Cannot Close a Behavioural Gap

Essay-based assessment of risk management knowledge serves an important function — it develops analytical rigour and the ability to synthesise complex frameworks. But it cannot assess whether a student will make a defensible risk escalation decision when a senior manager is pushing back, or whether they can maintain analytical clarity when a crisis is unfolding in real time. Those are behavioural competencies, and they require assessment instruments that can observe behaviour rather than read its description.

Simulation-based assessment addresses this directly. When students make risk decisions in a live simulation — committing resources, setting risk appetite parameters, responding to injected crises — they produce an observable behavioural record. Tutors can see whether risk treatment decisions are consistent with the stated appetite, whether teams escalate appropriately when thresholds are breached, and whether risk monitoring improves over successive turns as teams learn from earlier errors.
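One of those behavioural signals, escalation against a stated appetite, can be made concrete with a small sketch. The names here (`RiskEvent`, `appetite_threshold`) are purely illustrative and are not SPPIN Sim's actual data model; the point is only that a simulation log yields a checkable record.

```python
# Hypothetical sketch: did a team escalate when the risk appetite
# threshold was breached? RiskEvent and appetite_threshold are
# illustrative names, not SPPIN Sim's actual API.
from dataclasses import dataclass

@dataclass
class RiskEvent:
    severity: float   # assessed severity, 0.0-1.0
    escalated: bool   # did the team escalate this event?

def escalation_consistency(events, appetite_threshold=0.7):
    """Fraction of above-threshold events the team actually escalated."""
    breaches = [e for e in events if e.severity > appetite_threshold]
    if not breaches:
        return 1.0  # nothing required escalation
    return sum(e.escalated for e in breaches) / len(breaches)

events = [RiskEvent(0.9, True), RiskEvent(0.8, False), RiskEvent(0.3, False)]
print(escalation_consistency(events))  # 0.5: one of two breaches escalated
```

A metric like this is what makes a behavioural record assessable: the tutor is not reading a student's description of escalation, but a log of whether it happened.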

The Case for Simulation in Risk Programmes

The evidence base for simulation in professional education is well-established. A meta-analysis of 65 studies found that simulation-based learning produced significantly higher behavioural transfer rates than case-based instruction — not because simulation replaces conceptual learning but because it provides the repeated practice environment needed to convert knowledge into confident action. Risk management education needs both: the framework to understand why a decision matters, and the practice environment to develop the confidence to make it.

Aligning Simulation Sessions to IRM Learning Outcomes

SPPIN Sim's risk management module is designed with direct reference to the IRM Professional Standards. Each simulation turn is structured around a risk cycle that mirrors the IRM's four domains: teams identify emerging risks from live news-derived events, assess and prioritise them, select treatment options, and monitor outcomes across subsequent turns. The decisions students make in the simulation produce quantitative KPIs — diversification index, resilience score, continuity buffer — that correspond to the risk treatment and monitoring domains of the IRM framework.
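To illustrate what a KPI of this kind can look like, here is a minimal sketch of a diversification index as one minus the Herfindahl-Hirschman concentration of resource allocations. This is one plausible formulation chosen for illustration, not SPPIN Sim's actual KPI definition.

```python
# Hypothetical sketch: diversification index as 1 minus the
# Herfindahl-Hirschman concentration of resource allocations.
# One plausible formulation, not SPPIN Sim's actual KPI definition.
def diversification_index(allocations):
    """0.0 = everything in one basket; approaches 1.0 as spread widens."""
    total = sum(allocations)
    if total == 0:
        return 0.0
    shares = [a / total for a in allocations]
    return 1.0 - sum(s * s for s in shares)

print(diversification_index([100, 0, 0]))       # 0.0: fully concentrated
print(diversification_index([25, 25, 25, 25]))  # 0.75: evenly spread
```

Because the index is computed from decisions students actually made, it supports the same assessment logic as the behavioural record: the score is evidence of practice, not a description of it.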

For programme leaders seeking to strengthen the evidence base for IRM alignment in their module specifications, SPPIN Sim provides a session-level mapping document that links each simulation decision type to the relevant IRM competency indicator. That documentation supports both internal quality assurance processes and external review by professional body accreditors.

Building a Coherent Risk Programme Around Simulation

The most effective approach is not to treat simulation as a standalone activity but to embed it as the applied layer of a programme that moves between framework introduction, case analysis, and live practice. A typical 12-week risk module might introduce a framework conceptually in weeks one and two, examine it through case analysis in weeks three and four, and then run a simulation session in week five that requires students to apply the framework in real time. The debrief in week six then becomes a rich analytical exercise grounded in evidence students generated themselves.

See SPPIN Sim in action

Book a free 30-minute demo tailored to your discipline. We'll run a live turn — AI world event, countdown, leaderboard reveal — so you see exactly what your students experience.

Book a free demo →