MeridianPath has completed UCL’s inaugural AI EdTech Labs cohort, a five-month programme in which research and entrepreneurship genuinely intersect in the edtech landscape. The programme brought together researchers, founders, and practitioners around a shared commitment to evidence-informed innovation and a learner-first approach, with sustained attention to ethics, impact, and educational value.
Challenging assumptions through research dialogue
The cohort enabled sustained dialogue between theory and practice, allowing startups to refine their models through research perspectives while researchers engaged directly with the realities of product development and scaling.
One session with Wayne Holmes (UNESCO Chair in Ethics of AI in Education) posed a fundamental challenge: when learners improve after using an intervention, how do you know the tool caused the improvement rather than natural development, other learning experiences, or workplace practice? The honest answer: without control groups, you cannot make causal claims.
MeridianPath’s 70% completion rate, compared with a commonly cited industry average of around 20%, tells part of the story. But correlation is not causation. The programme introduced Design-Based Research methodology, which treats iterative cycles of building, testing, analysing, and redesigning as rigorous inquiry rather than simple product iteration.
Precision in language, precision in thinking
Small shifts in language reflect deeper shifts in thinking. “AI-enabled” rather than “AI-powered” because electricity powers things while AI enables capabilities. Avoiding claims to “prove” anything because nothing is proven in education research without extensive validation. These distinctions matter when making claims to customers, investors, and the field workers MeridianPath serves.
Beyond completion rates: building a chain of evidence
The programme pushed MeridianPath beyond simple completion metrics toward comprehensive evaluation frameworks that connect learning to organisational outcomes.
The New World Kirkpatrick Model (Kirkpatrick & Kirkpatrick, 2016) inverts traditional evaluation by beginning with Level 4 (Results) and working backwards: what organisational outcome are we trying to influence? This ensures every design decision aligns with real-world performance, not just learner satisfaction.
The Phillips ROI Model adds a fifth level: calculating actual return on investment by isolating training effects from other variables and converting results to monetary value.
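The Phillips calculation itself is simple arithmetic: ROI (%) = (net programme benefits ÷ programme costs) × 100, with benefits first adjusted to isolate the training effect. A minimal sketch with entirely made-up figures (the function name, the isolation factor, and the confidence adjustment are illustrative assumptions, not MeridianPath data):

```python
def phillips_roi(monetary_benefit: float,
                 programme_cost: float,
                 isolation_factor: float,
                 confidence: float) -> float:
    """ROI (%) = (net programme benefits / programme costs) x 100.

    isolation_factor: estimated share of the benefit attributable to
    the training rather than other variables (0..1).
    confidence: the estimator's confidence in that share (0..1),
    applied to keep the claim conservative.
    """
    adjusted_benefit = monetary_benefit * isolation_factor * confidence
    net_benefit = adjusted_benefit - programme_cost
    return net_benefit / programme_cost * 100

# Illustrative only: £50,000 gross benefit, 40% attributed to
# training at 75% confidence, against a £10,000 programme cost.
roi = phillips_roi(50_000, 10_000, 0.40, 0.75)
# 50,000 * 0.40 * 0.75 = 15,000 adjusted benefit; net 5,000 -> ROI 50.0%
```

The conservative adjustment matters: without isolating the training effect, the same figures would claim a 400% return.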
Brinkerhoff’s Success Case Method complements these quantitative approaches with qualitative rigour, identifying the highest and lowest performers after training to investigate what contextual factors determined whether learning transferred to practice.
For MeridianPath’s pharmaceutical and microfinance clients, this means tracing a chain of evidence from learner engagement through demonstrated knowledge and observable behaviour change in client interactions to measurable business outcomes. Completion rates become one data point among many, not the endpoint.
Measuring what matters and what might break

Perhaps the most valuable sessions explored “effective but harmful” interventions. A tool can achieve its target metrics while eroding things that matter: work-life balance, worker autonomy, and intrinsic motivation.
For MeridianPath, this means asking not just whether completion rates improve, but whether the platform inadvertently extends the workday into personal time, whether managers’ visibility of completion data creates surveillance pressure, and whether convenience undermines the intrinsic value of learning.
Research and practice in dialogue
The programme culminated in a live presentation at Canary Wharf to an audience of researchers, peers, and international delegations. This moment symbolised the cohort’s broader ambition: to position AI in education not merely as innovation for innovation’s sake, but as a research-grounded, ethically informed endeavour with global relevance where learners remain at the centre.
Final sessions included roundtable discussions at IdeaLondon and connections with UCL Knowledge Lab, creating space for collective reflection on the future of AI in education. These conversations reinforced that responsible EdTech development depends on shared epistemic and ethical frameworks across research communities, policymakers, and entrepreneurs.
What comes next
MeridianPath is now implementing:
- New World Kirkpatrick evaluation across current pilots, beginning with the desired organisational outcomes
- Phillips ROI methodology to demonstrate financial return for clients investing in field team development
- Success Case Method interviews to understand why some learners successfully apply knowledge while others do not
- Longitudinal retention tracking at 30, 60, and 90 days using validated instruments
- Systematic measurement of both effectiveness and unintended consequences
The field workers MeridianPath serves across Latin America deserve learning technology built on evidence, not intuition. UCL EdTech Labs provided the frameworks, the challenge, and the community to pursue that standard.
Thank you to Houtan Froushan, Wayne Holmes, the UCL EdTech Labs team, and the cohort peers who made this programme a genuine intersection of evidence, ethics, and entrepreneurship.