Welcome to Global Good’s Impact Interview series. This series is designed to tell the stories of the people and companies working to drive impact in society.
In this edition, we speak with Galvin Lee Kuan Sian, founder of the Galvin Lee Innovation Lab and a higher education educator based in Malaysia, about designing AI-enabled learning that strengthens rather than replaces human judgement, the quiet crisis of educator burnout, and why the integrity of assessment may be one of the defining challenges of education in the AI era.
Can you introduce yourself and tell us about your role?
I’m Inv. Galvin Lee Kuan Sian, a Malaysian higher education educator, programme leader, and EdTech innovator working at the intersection of learning design, applied AI, and scalable teaching practice.
In my institutional role, I serve as a Lecturer I and Programme Coordinator (Diploma in Business), where I teach business and marketing-related modules and oversee programme delivery, assessment quality, and student learning outcomes. Day-to-day, that means designing learning experiences that are rigorous yet accessible, coordinating teams and processes that keep teaching consistent across classes, and ensuring assessments measure real competence rather than memorisation or polished writing.
Alongside my teaching work, I’m a PhD candidate in Marketing, with research interests spanning consumer behaviour, e-commerce, dynamic pricing, trust, perceived fairness, transparency, and data privacy. That research lens shapes how I design learning: I’m particularly focused on how people make decisions under uncertainty, how they interpret cues and incentives, and how trust is built or lost. I apply those ideas in the classroom by creating experiences where students must make and defend decisions, not just describe theories.
I also lead the Galvin Lee Innovation Lab, where I develop practical “tech for good” education solutions that improve learning quality while reducing educator workload. My work includes AI-enabled simulations and persona-based learning experiences that help students practise real-world judgement in a safe environment, and educator-facing tools that streamline constructive alignment, assessment design, and feedback systems.
My role is end-to-end: identifying painful learning problems, designing pedagogical workflows, building prototypes, testing them in live classes, and refining them based on evidence and classroom constraints. Across all of this, my guiding principle is simple — technology should increase human capability and equity, not add complexity or replace thinking.
How did your company come about, and what was the motivation behind it?
Galvin Lee Innovation Lab started from a simple frustration: education often swings between two extremes, either staying analogue because change feels risky, or adopting technology that looks impressive but doesn’t improve learning outcomes. At the same time, educator workload is a silent constraint. Many good ideas fail not because they’re wrong, but because they aren’t sustainable.
The lab emerged as a response to that gap. I wanted to build practical, lightweight solutions that address real instructional pain points: making learning more experiential, making assessments more valid in an AI era, and reducing repetitive administrative effort so educators can focus on feedback and facilitation.
The motivation is “tech for good” in a very literal sense — technology should widen access to high-quality learning and reduce burnout, not add complexity or create new inequities.
Can you describe your company’s mission and values?
Mission: to make high-quality learning more scalable and more humane by designing technology that strengthens teaching practice, supports learner agency, and improves the integrity of assessment.
Values:
- Learning-first design: start with what learners must be able to do, not what a tool can do.
- Human agency: AI should augment judgement, not replace it.
- Equity and access: solutions must work for real constraints, including diverse learners and limited time.
- Integrity-by-design: assessments should reward reasoning, evidence, and decision-making, not just fluent output.
- Sustainability for educators: innovation must reduce friction, not increase workload.
- Responsible technology: minimise data collection, be transparent about use, and treat privacy and fairness as design requirements, not afterthoughts.
What are some of the most pressing social issues that your company is working to address through its technology?
Education is a social mobility engine, but it is increasingly strained by three issues.
First is access to quality learning experiences. Many learners can pass exams without developing applied judgement, problem-solving, and real-world decision-making skills.
Second is educator workload and burnout, which quietly limits innovation and reduces the consistency of feedback and learning support.
Third is the integrity crisis in the AI era, where institutions risk either over-policing students or accepting assessments that no longer measure genuine competence.
Our work targets these issues through tools and learning designs that increase practice, reflection, and defensible reasoning. We build experiences where learners must make choices under constraints and justify decisions using concepts, evidence, and trade-offs. We also build educator-facing systems that reduce repetitive planning and assessment work so teaching quality can scale without exhausting the people delivering it.
How does your company measure the impact of its work in creating positive change?
We treat impact as a combination of learning quality, educator sustainability, and integrity.
On the learner side, we look for evidence that students are moving beyond generic answers into more specific, theory-linked reasoning. This includes structured reflections, rubric-based performance indicators (decision quality, trade-off analysis, evidence anchoring), and patterns in student work over time. We also collect qualitative feedback on whether learners feel more confident applying concepts in realistic contexts.
On the educator side, we monitor whether tools reduce time spent on repetitive tasks such as structuring assessments, aligning outcomes, and producing consistent feedback resources. The aim is not automation for its own sake, but time reclaimed for higher-value teaching work.
We also review process integrity — including the clarity of permitted AI use, transparency in assessment expectations, and whether task design makes it harder for students to outsource thinking.
In your opinion, what impact will technology have in creating a better future?
Technology can lower barriers to knowledge, personalisation, and access, but its real value depends on whether it improves human capability rather than replacing it. In education, the best outcomes come when technology enables more practice, better feedback, and more equitable learning support.
AI in particular can expand what’s possible — for example, by generating scenarios, simulating interactions, and supporting educators with planning and assessment design. But it also carries risks: widening inequity if access is uneven, eroding trust if assessment is not redesigned, and creating privacy harms if data collection is careless.
A better future requires shifting from tool adoption to responsible system design. That means building guardrails into assessment, ensuring transparency, minimising data exposure, and keeping human agency central. The goal is not to produce “AI-first education,” but learning-first education that uses technology as a lever for equity, dignity, and competence.
What advice do you have for other companies looking to use tech for good and positively impact the world?
Start with the real problem, not the most exciting technology. Spend time understanding the lived constraints of the people you claim to serve, especially frontline users like educators and learners.
Design for measurable capability, not engagement theatre. If your product cannot clearly explain what skill it builds and how success is measured, its impact will be unclear. Build with integrity: be transparent about limitations, avoid inflated claims, and treat privacy and fairness as first-order requirements.
Don’t underestimate implementation. Many “tech for good” projects fail because they increase workload or require unrealistic behaviour change. Make adoption easy, reduce friction, and support users with clear workflows, templates, and guidance.
Finally, iterate publicly and responsibly. Share what didn’t work, refine based on evidence, and focus on long-term trust. Impact is not a launch moment — it’s a sustained relationship with the communities you aim to serve.
Galvin’s perspective sits at one of the most contested intersections in education today: the place where AI capability meets pedagogical judgement. While much of the conversation around AI in education has focused on either techno-optimism or institutional anxiety, his framing reorients it around something more fundamental — whether the technology actually strengthens what learning is supposed to do.
The argument that assessment must reward reasoning rather than fluent output, and that educator sustainability is a precondition for educational quality, may seem modest in scale, but it cuts to the heart of what is at stake. AI does not break education by being too capable; it breaks education when it is layered onto systems whose foundations have not been rethought. The Galvin Lee Innovation Lab’s work is a quiet argument that the future of learning will not be decided by which tools are adopted, but by who is doing the thinking — and how we know they are.
In a sector where the temptation to chase novelty is high, that discipline is its own form of innovation.