As the Faculty Lead for the Principles for Responsible Management Education (PRME) at Greenwich Business School, I often find myself at the crossroads of technology, ethics, and sustainability. Artificial Intelligence, in particular, has become the defining technology of our era – and the amount of airtime it gets is honestly exhausting; it’s hailed as a solution to problems of productivity, efficiency, and even climate change. But as someone who has spent over a decade guiding students through the process of learning, my biggest concern is what we’re sacrificing.
The Environmental Shadow of AI
Before addressing my main concern, let’s acknowledge the environmental costs of AI. The large language models and image generators that now permeate education, business, and daily life don’t run in some abstract void. They’re powered by massive data centres that consume extraordinary amounts of energy, and vast quantities of water for cooling. Recent estimates suggest that training and operating AI models can use millions of litres of freshwater and emit tonnes of CO₂ equivalents.
This environmental footprint challenges the very notion of technological sustainability. But as pressing as this issue is, it’s not my primary concern – though perhaps it should be. The environmental effects, significant as they are, pale in comparison to the more insidious social consequences of letting AI replace, rather than augment, human thought. The kids are not alright, and too many are struggling to think for themselves.
The Slow Rot of Critical Thought
The more profound danger of AI lies in its impact on how we think, learn, and make meaning. Increasingly, I see students turning to AI systems like ChatGPT as their first – and sometimes only – point of reference in tackling complex questions. What was once an opportunity for dialogue, exploration, and even failure has become a transaction: input a question, receive an answer. Quick, clean, and efficient. But it can feel strangely sterile.
Education, at its best, isn’t about the efficient and speedy acquisition of ‘correct’ answers – most of the time, I don’t even ask questions with one right answer. Education is about the struggle to make sense of uncertainty, to question assumptions, and to learn from failure. Cultivating criticality is the key. When students outsource this process to LLMs, they deprive themselves of the discomfort and growth that come from trying to make sense of what they don’t understand.
What we’re seeing is a kind of intellectual atrophy – a slow rotting of the brain that dulls curiosity and undermines the development of analytical and reflective capability. By deferring to AI, we risk raising a generation of managers, leaders, and citizens who can consume information but can’t interrogate it.
Beyond the Environment
Sustainability is often framed in environmental terms: reducing emissions, conserving resources, and protecting biodiversity. But true sustainability is holistic. It includes economic viability and social responsibility as well as environmental integrity.
Social sustainability requires a collective ability to engage in thoughtful dialogue, challenge prevailing systems, and envision alternative futures. Economic sustainability requires innovators who can think creatively, identify overlooked problems, and persevere through failure. None of this is possible without critical thinking – and critical thinking doesn’t thrive in a culture of intellectual convenience.
When we let AI mediate what we think, we lose the very skills that sustainability requires: the ability to question, to fail, and to rebuild. These are the foundations of responsible management and ethical leadership. They can’t be automated.
Learn through Failure
In my ten years of teaching, I’ve seen over and over again that real learning usually starts with failure. A student stumbles through a presentation, misinterprets a concept, or struggles to apply a theory to practice. But with reflection, persistence, and guidance, that same student often emerges with a deeper and clearer understanding. This process is neither quick nor easy, but it is transformative. And learning to fail isn’t the point in itself – who gives a shit? What matters is what happens next.
AI, on the other hand, offers the illusion of mastery without substance. It provides answers without the struggle that makes learning meaningful. When students use AI to avoid the discomfort of not knowing, they’re not saving time; they’re short-circuiting the educational process.
Reclaiming AI as a Tool
I’m not saying AI has no place in education. Like any tool, its value depends on how we use it. A hammer can build or it can break; a scalpel can heal or kill; AI is no different. It’s neutral – it becomes dangerous only when we forget that it’s a tool, not a deity, and certainly not a substitute for human judgment.
I’ve heard some nonsense about banning AI in universities to reduce cheating, which is ridiculous. As educators, our task isn’t to ban AI but to help students learn how to use it responsibly. This means teaching them to approach AI outputs critically, to question their sources and assumptions, and to use the technology as a springboard for deeper inquiry – not a shortcut to superficial answers.
The Responsible Path Forward
The future of responsible management education has to grapple with the dual challenge of technological and intellectual sustainability. We have to teach students to manage the environmental costs of AI and to resist the temptation to let it think for them. Being sustainable – truly sustainable, socially, economically, and intellectually – means our relationship with technology has to be grounded in human agency.
AI isn’t going anywhere and will continue to evolve, so we need to deal with it. But what we can’t do is let it think for us: that’s how we end up with a generation of illiterate university graduates. What’s the purpose of a university education if the only skill we develop is letting ChatGPT answer things for us?
