By Digital Education Council
April 14, 2025
“The risk for regulators is that everything looks like a regulatory risk that we need to stamp out, rather than an opportunity,” said Joshua Fleming, Director of Strategy and Delivery at the UK Office for Students (OfS).
AI adoption in higher education requires both regulatory clarity and institutional encouragement to unlock its potential. While some regulators signal caution, others actively promote experimentation and innovation — particularly when sector-specific guardrails are in place.
This was the focus of the recent DEC Executive Briefing #015: A Global Map of Policy and Regulation for AI in Higher Education, featuring Agne Makauskaite, Head of Regulated Industries and Regulatory Strategy, APAC at Amazon Web Services (AWS), and Joshua Fleming from the OfS.
Moderated by Danny Bielik, President of the Digital Education Council, the panel explored how institutions and regulators can better navigate the shifting AI policy landscape in higher education.
Regional Approaches to AI Regulation in Education
From Singapore’s light-touch approach to Australia’s proposed mandatory guardrails, Asia-Pacific regulators are adopting diverse models to manage AI’s risks while encouraging innovation.
“We really support sector-specific guidance,” said Makauskaite. “We don’t see one-size-fits-all regulatory frameworks working effectively. It remains most effective when looking at specific use cases.”
She pointed to examples where education-specific approaches are essential, from cloud-powered transcript processing to Harvard Business School’s federated data platform, which integrates faculty, student, and alumni data for strategic decision-making.
In the UK, Fleming observed an initial wave of “knee-jerk” reactions to tools like ChatGPT, but said that attitudes have since matured.
“The trends I observe are much more pro-innovation and pro-use now,” he explained. “We’ve been reticent to block institutions from experimenting. One thing that we’ve been exploring and may likely do is to put out signals to make it clear that we are interested in innovation, and we want people to experiment.”
High-Risk Use Cases Require More Than Blanket Regulation
Both panellists agreed that targeted regulation — especially in high-stakes areas — is more effective than broad restrictions.
Under the EU’s AI Act, AI systems used to evaluate student learning outcomes are designated “high-risk”. But Makauskaite noted that definitions and their interpretations vary across regions.
“These all turn around when AI is used to make a decision that has a material legal impact — in areas like housing, employment, or education,” she said. “And whether it’s being used for decision making or it’s just helping humans to make those decisions.”
Fleming highlighted concerns around AI’s impact on academic integrity, including the ubiquity of tools that could facilitate cheating or lead to “learning atrophy”, a degradation of deep learning when tasks are outsourced to AI.
“But I’ve also spoken to students who are using it in really fantastic ways,” he added. “One, last week, was telling me how he uses it in kind of an inverted Socratic method. So he tells the LLM that he is the teacher and it is the student, and then he teaches the LLM whatever particular module he’s on. That really helps him test and revise his thinking.”
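The pattern the student described maps naturally onto a system prompt. Below is a minimal, illustrative sketch of what such a role-reversal prompt might look like in practice; the client library, model name, and prompt wording are assumptions for illustration, not details from the briefing.

```python
# Illustrative sketch only: model name and prompt wording are assumptions,
# not from the briefing. Requires the `openai` package and an API key.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Role reversal: the student instructs the model to act as the learner,
# so explaining the material exposes gaps in the student's own understanding.
messages = [
    {
        "role": "system",
        "content": (
            "You are my student and I am your teacher. I will explain this "
            "week's module to you. Ask clarifying questions where my "
            "explanation is vague or inconsistent, but do not lecture me."
        ),
    },
    {
        "role": "user",
        "content": (
            "Today's topic is price elasticity. In my own words: elasticity "
            "measures how much demand shifts when the price changes."
        ),
    },
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```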
He cautioned that regulators risk viewing every development as something to stamp out, rather than recognising the broader opportunities to improve learning.
Can Regulation Keep Pace with AI?
One of the questions raised during the session: Can regulation remain relevant in a field evolving faster than the policies that govern it?
While the challenge extends well beyond education, Fleming noted that regulators themselves are also experimenting with AI. “I’m sure there will be solutions where regulation can iterate, develop and find new forms of regulations which enable these technologies to flourish — and for citizens, for students, whoever the end user might be, to be protected.”
Makauskaite echoed this, stating that regulation must evolve to support innovation. “Creating frameworks that are future-proof, that look around the corner and encourage AI use — not just address the risks — is extremely important,” she said.
Building the Right Mindsets for Long-Term AI Integration
Both speakers emphasised a mindset shift: from controlling risk to enabling responsible innovation.
Fleming summarised the regulator’s challenge: “The risk for regulators is that everything looks like a regulatory risk that we need to stamp out, rather than an opportunity.”
For Makauskaite, the issue is not just about adapting regulation; it’s about acknowledging that it rarely disappears. “In the past, we said that once the risks were addressed and the markets were competitive, maybe we wouldn’t need regulators anymore,” she said. “But that didn’t happen. If anything, we have more regulators and more regulation.”