AI vs MBA Students: What Happens When Artificial Intelligence Sits at the Strategy Table?

When AI Sits at the Strategy Table: Inside ESCP's Groundbreaking Simulation Experiment

The Experiment That Changes the Conversation

In February 2026, inside the halls of ESCP Business School — one of the oldest and most prestigious business schools in the world — something unprecedented quietly unfolded. Forty-eight MBA students from two of the school's flagship European campuses, London and Berlin, sat down for a three-day intensive business simulation. They ran virtual companies, made hard strategic calls, competed against each other across multiple rounds, and did exactly what MBA programs have trained students to do for decades.

Except this time, one team in each universe wasn't composed of MBA students at all.

It was composed entirely of AI.

Not AI as a research assistant. Not AI as a spreadsheet helper or draft editor. AI as the decision-maker — autonomous, unaided, and in direct competition with some of the most strategically trained human minds in European business education.

This experiment, designed and led by Professor Markus Bick and doctoral researcher Ulrich Mohme, represents a genuinely new frontier in both business education and artificial intelligence research. And the implications reach far beyond the classroom.

Understanding the Arena: The Customer Value Challenge Simulation

To appreciate just how significant this experiment is, you need to understand the environment in which it took place: the Customer Value Challenge Simulation, developed by MEGA Learning Business Simulations.

Business simulations are not new. They've been a staple of MBA programs for decades, prized for their ability to create a compressed, risk-free environment where students can test strategic instincts, make decisions under uncertainty, and experience the downstream consequences of their choices — all in the span of a few days rather than a few years.

But the Customer Value Challenge is not your average simulation. It's a sophisticated, multi-round strategic environment where student teams manage virtual companies across a dynamic competitive landscape. The decisions they face aren't simple. Teams must allocate resources, set pricing strategies, manage workforce morale, navigate sustainability trade-offs, and respond in real time to the moves of competitors. Each round reveals new information, shifts in market conditions, and the ripple effects of prior decisions — demanding not just analytical intelligence but adaptive strategic thinking.

This is, in other words, exactly the kind of environment where human intuition, creative problem-solving, and organizational judgment are traditionally assumed to shine. It's the environment where future CEOs, consultants, and business leaders are tested and sharpened.

And it's exactly the environment MEGA Learning chose for this experiment. The choice was deliberate. If you want to know whether AI can compete strategically, you don't test it on a simple optimization puzzle. You put it inside a living, breathing competitive system and watch what happens.

The experiment ran across two "universes" — the term used to describe separate simulation cohorts operating independently of one another. The London universe comprised 29 MBA students; the Berlin universe, 19. In each, a single team was replaced by a fully AI-driven entity. The humans didn't know exactly how the AI team would perform. The AI didn't have the benefit of lived experience, business school socialization, or the competitive drive that comes with having tens of thousands of euros of tuition fees on the line.

They all played the same game. The results are coming.

What Was Actually Being Tested?

The research questions at the heart of this experiment are deceptively simple — and profoundly important.

Can AI define a coherent strategy?

This is not trivial. Strategy isn't just about picking the option with the highest projected return. It's about making a series of interconnected choices that reinforce each other over time. It's about deciding what you are not going to do. It's about identifying a sustainable competitive position and committing to it even when short-term pressures pull in other directions. Human strategists spend entire careers learning to do this well. Can a large language model, given access to simulation data and decision inputs, synthesize that kind of coherent directional thinking?

Can AI react to competitors?

Markets are not static. They're ecosystems of actors, each responding to the others. One of the most critical skills in business — and in life — is the ability to read competitive signals, anticipate responses, and adapt before you're forced to. Can AI detect when a competitor is pursuing an aggressive pricing strategy? Can it recognize a market share threat and recalibrate before the damage compounds? Can it bluff, hedge, or out-maneuver — or does it simply optimize in isolation?

Can AI balance profitability, morale, and sustainability?

Here's where things get genuinely interesting. Business strategy in the real world — and in a good simulation — is never a single-variable optimization problem. Companies don't just maximize profit. They manage tensions: between short-term returns and long-term investment, between financial performance and employee engagement, between shareholder demands and environmental responsibility. These tensions don't have clean algorithmic solutions. They require judgment — the kind of judgment that emerges from values, from experience, from an understanding of what matters and why. Can AI navigate these trade-offs, or does it collapse in the face of genuine complexity?

Can AI adapt across multiple decision rounds?

Perhaps most critically: can it learn? A single good decision might be luck or brute-force pattern matching. The real test is whether performance improves, degrades, or plateaus over multiple rounds as the competitive environment evolves. Does the AI get better as it accumulates context? Does it make the same mistakes twice? Does it recognize when its initial strategy is failing and course-correct — or does it double down on a failing approach with mechanical consistency?

These are questions that matter enormously not just in the context of an MBA simulation, but in the context of every boardroom, every strategy function, and every decision-making process in the global economy.

ESCP Business School: The Right Stage for This Question

It is no coincidence that this experiment took place at ESCP Business School.

Founded in 1819 in Paris, ESCP is the world's oldest business school — a fact that gives it both historical gravitas and, perhaps more importantly, a long institutional memory of what business education is actually for. With campuses in Paris, London, Berlin, Madrid, Turin, and Warsaw, ESCP has spent two centuries preparing students for leadership across diverse national and organizational contexts. It is a school that takes seriously the question of what it means to think strategically in a complex world.

Its London and Berlin campuses — the two sites of this experiment — were not chosen arbitrarily. London is one of the world's most important financial and commercial hubs, a city where the intersection of global capital, regulatory complexity, and technological disruption is felt daily. Berlin, meanwhile, has emerged as one of Europe's leading startup and technology hubs, a city that has become synonymous with innovation, disruption, and the willingness to question established ways of doing things.

Running an AI-versus-human strategic competition in these two cities, in these two contexts, in this particular school, is itself a statement. ESCP is not treating AI as a curiosity or a threat to be managed. It is treating it as a subject worthy of serious scholarly inquiry — and, implicitly, as a potential future participant in the very strategic processes it has spent centuries teaching humans to master.

The research is led by Professor Markus Bick, a scholar whose work sits at the intersection of digital transformation and organizational learning. Alongside doctoral researcher Ulrich Mohme, Bick is not simply running an experiment for novelty's sake. This is rigorous academic research aimed at generating empirical insight into a question that has enormous practical implications: what is the relationship between human strategic intelligence and artificial intelligence in competitive business environments?

Their findings — which will be shared through interviews, research publications, and ongoing engagement with the business education community — promise to add a new and important chapter to this conversation.

Why This Matters Now

The timing of this experiment is not accidental. It comes at a specific inflection point in the development and deployment of AI systems.

For the past decade or so, the dominant narrative around AI and business has been one of augmentation. AI helps humans do their jobs better. AI surfaces insights humans might miss. AI handles the repetitive, the computational, the pattern-matching — and humans handle the creative, the interpersonal, the judgmental. The two are complementary, not competitive.

That narrative is becoming harder to sustain.

The latest generation of large language models can write persuasive strategy memos, synthesize market research, generate scenario analyses, and produce decision recommendations that are, in many cases, indistinguishable in quality from those produced by junior analysts or consultants. More than that: they can do it in seconds, at scale, without fatigue, without ego, and without the political considerations that inevitably color human strategic judgment.

This doesn't mean AI is better than humans at strategy. It means the question is now live in a way it wasn't before. And live questions deserve empirical answers, not theoretical arguments.

That is what MEGA Learning Business Simulations and ESCP Business School are attempting to provide. By putting AI inside a real, complex, multi-round competitive simulation — not a toy problem, not a benchmark dataset, but a genuine strategic environment with real competitive dynamics — they are generating data that goes beyond speculation.

The results of this experiment will not settle the question once and for all. No single study can do that. But they will add important empirical texture to a conversation that has, until now, been dominated largely by assertion.

MEGA Learning Business Simulations: Building the Future of Business Education

The role of MEGA Learning Business Simulations in this story deserves its own moment of attention.

MEGA Learning is not a passive backdrop to this experiment. It is the architect of the environment that made it possible. The Customer Value Challenge Simulation represents years of careful design work aimed at creating a learning environment that genuinely captures the complexity of real business decision-making.

That means building a system with enough variables to be interesting, enough structure to be learnable, and enough dynamism to reward genuine strategic thinking rather than just memorized playbooks. It means calibrating the difficulty so that teams can distinguish good decisions from poor ones — and so that the outcomes of the simulation mean something, both to the student teams competing within it and to the researchers observing from outside.

The decision to allow an AI team to participate in this simulation is, in itself, a significant act of intellectual courage. It opens the simulation to a new kind of scrutiny. If the AI performs poorly, it tells us something about the limits of current AI systems in complex strategic environments. If the AI performs competitively, it raises important questions about what — exactly — we are training MBA students to do, and whether the current model of business education is preparing them adequately for a world in which AI will be a genuine competitor in strategic processes.

Neither outcome is comfortable. Both are valuable.

MEGA Learning's willingness to embrace this uncertainty — to build the arena and let the experiment run — reflects a philosophy that takes seriously the responsibility of business education to engage honestly with the technologies that are reshaping the world their students will enter.

The Bigger Question: What Are We Actually Teaching?

Every time a technology reshapes what humans are capable of, it forces a reckoning with education. The calculator didn't eliminate the need to understand mathematics — but it did change what mathematical education needed to emphasize. The internet didn't eliminate the need for research skills — but it radically changed what "research" means in practice.

AI is doing the same thing to business education — but at a larger scale, with higher stakes, and with much less certainty about where the lines will settle.

If AI can produce competitive strategic analysis, what does that mean for the case-study method? If AI can model complex competitive dynamics and generate adaptive decision frameworks, what is the unique value proposition of an MBA? If AI can manage a virtual company through a three-day business simulation with results comparable to those of trained human teams — what exactly is the MBA preparing its graduates to do that AI cannot?

These are not rhetorical questions designed to undermine business education. They are genuine, urgent questions that business schools need to engage with honestly, empirically, and with intellectual rigor.

The experiment at ESCP, in the Customer Value Challenge Simulation, is a start. Not because it will answer these questions definitively, but because it is asking them in the right way: empirically, in a controlled environment, with real data, under conditions designed to illuminate rather than to confirm.

What Comes Next

Professor Markus Bick and Ulrich Mohme are preparing to share their early insights. An upcoming interview will shed light on what the data showed — how the AI team performed relative to its human counterparts, where it excelled, where it struggled, and what the implications are for the future of business education and AI deployment in strategic contexts.

This is a conversation worth watching closely. Not because the results will tell us everything, but because they will tell us something real — and right now, in a debate that has been dominated by hype on one side and skepticism on the other, something real is exactly what we need.

AI is no longer a tool sitting outside the strategic process, waiting to be consulted. In at least one MBA classroom in London and Berlin, in February of this year, it sat at the table. It made the decisions. It competed.

The question of what that means — for business, for education, for strategy, and for the uniquely human elements of judgment that we have always assumed were beyond algorithmic reach — is now, officially, an empirical question.

And the data is in.

Interested in exploring the Customer Value Challenge Simulation for your own institution or organization? Discover more about what MEGA Learning Business Simulations offers and how their programs are shaping the next generation of strategic thinkers — and the researchers asking what happens when AI joins them.