The Wrong Emergency: How Higher Education Is Misreading the AI Moment
There is a particular kind of institutional panic that looks, from the outside, like decisive action. Committees are formed. Policies are updated. Software is procured. Statements are issued. Everyone is very busy, very serious, and almost entirely focused on the wrong thing.
This is where much of higher education finds itself right now with artificial intelligence.
What Institutions Are Actually Doing
Walk into most universities and ask what they’ve done in response to AI, and you’ll get a familiar answer. They’ve updated their academic integrity policies. They’ve invested in detection tools (many of which are unreliable and most of which are already being outpaced). They’ve debated endlessly about what constitutes AI-assisted plagiarism and how to prove it. Some have gone further, banning the use of AI in assessments entirely, as if the goal were to preserve the examination hall as a kind of heritage site.
At the other end of the spectrum, institutions have made significant investments in AI-integrated management systems, streamlining admissions, student services, timetabling, and reporting. This is, in its own way, rational. Administrative efficiency has genuine value.
But here is what is striking about both responses: they are almost entirely about the institution’s needs rather than the learner’s future.
The plagiarism panic is fundamentally about protecting the integrity of existing assessment systems. The administrative AI deployment is about operational efficiency. Neither asks the harder question: What does a person need to know, and be able to do, in a world reshaped by AI, and is our curriculum preparing them for it?
The Misalignment at the Heart of the Response
There is a useful distinction between first-order and second-order problems.
The first-order problem is the one that lands on your desk and demands immediate attention: a student submits work that may have been AI-generated. What do you do? This is urgent, concrete, and politically uncomfortable. Of course, institutions respond to it.
The second-order problem is slower, quieter, and far more consequential: are we designing learning experiences that equip graduates for a world in which AI is a permanent, pervasive feature of professional and civic life? This question doesn’t send an email. It doesn’t trigger a disciplinary hearing. It simply accumulates, silently, as cohort after cohort graduates underprepared.
Higher education has historically been good at responding to first-order problems and poor at anticipating second-order ones. The AI moment is exposing that weakness with unusual severity.
The risk is not that students will use AI to cheat on an essay. The risk is that we will spend five years perfecting our response to that problem while failing to redesign curricula that are fit for the world those students are entering.
The Faculty Development Gap Nobody Is Talking About
There is a second failure running alongside the curriculum one, and it may be even more foundational: most faculty are being asked to navigate a profound pedagogical shift without any structured support. There are exceptions, of course, and if you are one of them these claims may seem unfair. But walk the corridors of your own campus and ask how many of your colleagues have had that kind of support.
Consider how other professions handle this. Lawyers, doctors, accountants, engineers, nurses: virtually every regulated profession requires practitioners to complete a minimum number of hours of continuing professional development (CPD) each year to maintain their licence to practise. CPD is not optional. It is not left to individual motivation. It is a structural guarantee that professional knowledge does not stagnate.
Higher education has no equivalent. A professor appointed in 2005 is under no institutional obligation to update their understanding of how learning, technology, or their discipline has changed since then. This was always a quiet weakness in the system. In the current moment, it is an acute one.
You cannot ask faculty to redesign the curriculum for an AI-shaped world if they have not had the opportunity, with support, to develop a sophisticated, up-to-date understanding of what that world looks like: how AI is actually being used in their disciplines, what it can and cannot do, where it is changing the nature of work and knowledge production, and what that means for what students should learn and how.
Good intentions are not enough. Individual enthusiasm is not enough. What is missing is a structural commitment: mandatory, resourced, ongoing professional development for all faculty, not a one-off workshop on spotting ChatGPT, but sustained engagement with the questions that AI is forcing onto the table.
This is not a radical idea. It is standard practice in almost every other profession. The question is why higher education has for so long believed itself exempt from it.
What “Missing the Point” Actually Costs
It is worth being clear about the stakes, because the consequences of curriculum inertia are not abstract.
Graduates are entering labour markets in which AI fluency (not mere familiarity, but genuine, critical, adaptive fluency) is increasingly a baseline expectation. They are entering professions that are being restructured around these tools. They are becoming citizens who will need to navigate AI-mediated information environments, automated decision-making, and the ethical and political questions that accompany them.
If the curriculum they studied was designed around concerns that predate this moment, or worse, designed defensively ‘against’ AI rather than thoughtfully ‘for’ a world that contains it, they are being sent out underprepared. And they will know it.
The institutions that will serve their students best in the next decade are not the ones that locked down their assessment policies most effectively in 2025. They are the ones that asked the harder, slower, more important question: What does learning need to become?
A Provocation, Not a Conclusion
This piece is deliberately diagnostic. The question of what the curriculum should contain in response to AI, the specific competencies, dispositions, and design principles that should shape what we teach, is a conversation for next week.
But the diagnosis matters because you cannot design the right solution if you have misidentified the problem.
Higher education is not facing an assessment crisis. It is facing a curriculum crisis, compounded by a professional development crisis. The institutions that recognise this and act on it will look very different from those that spent the decade perfecting their plagiarism detection.
The question is which kind of institution you want to be.
Next week: what AI actually demands of the curriculum, and the competencies, principles, and design choices that should shape what we teach next.