Thailand Bets on AI to Fix Education: Will Tech Widen Inequality?
Can AI truly level Thailand’s education playing field or further entrench inequalities for its most vulnerable students?
Thailand’s gambit — an AI-powered effort to equalize educational outcomes — is less a story about algorithms and more a referendum on whether technology can truly disrupt the ingrained power structures of a society. It isn’t just about smarter software; it’s a symptom of, and potentially a partial solution to, the persistent, deeply entrenched inequalities that plague access to opportunity, globally and particularly acutely in developing nations. More than that, it raises the question: in a world increasingly shaped by algorithmic decision-making, who gets to code the future, and whose interests will it serve?
The country’s National Science and Technology Development Agency (NSTDA), in partnership with the Ministry of Education, hopes its “LEAD Education” platform will use AI to personalize learning and bridge the education gap. It’s a compelling vision, but one that raises critical questions: Can technology truly overcome the structural forces that create educational disparities in the first place? What does it mean when “Building the Nation with Science and Technology” becomes the overriding paradigm, potentially eclipsing other vital investments in social welfare or democratic institutions?
The Bangkok Post reports that NSTDA’s initiative is part of a broader agenda, which also includes “Traffy Fondue,” a digital platform for city management, and a digital healthcare platform connecting millions to local clinics. The agency is also developing pest-resistant rice and helping factories prepare for the EU’s Carbon Border Adjustment Mechanism. It is a diverse portfolio, hinting at a state grappling with a multitude of complex challenges, from climate resilience to public health, all while navigating a rapidly changing global economic order.
But LEAD Education, the AI-powered learning platform, stands out. Its success hinges on the quality of the data used to train the AI and on the fairness of the algorithms themselves. Research such as Cathy O’Neil’s Weapons of Math Destruction has demonstrated that algorithms can perpetuate and even amplify existing biases if they are not carefully designed and monitored. These aren’t just abstract concerns: imagine an AI trained on data reflecting pre-existing educational inequalities, inadvertently steering students from disadvantaged backgrounds away from STEM fields and effectively codifying existing social stratifications.
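To make that mechanism concrete, here is a deliberately minimal sketch of how a system trained on historical placements simply reproduces the inequality baked into them. The data and numbers are entirely hypothetical and illustrative; nothing here is drawn from the actual LEAD Education platform:

```python
# Toy illustration with HYPOTHETICAL data: a frequency-based "track
# recommender" trained on historical placements that reflect past inequality.
from collections import Counter

# Historical records of (region, track). In this invented history, rural
# students were rarely placed in STEM, regardless of aptitude.
history = (
    [("urban", "STEM")] * 70 + [("urban", "arts")] * 30
    + [("rural", "STEM")] * 20 + [("rural", "arts")] * 80
)

def recommend(region):
    """Recommend the track most often assigned in the past to students
    from `region` — i.e., treat the historical pattern as the rule."""
    tracks = Counter(track for r, track in history if r == region)
    return tracks.most_common(1)[0][0]

print(recommend("urban"))  # STEM
print(recommend("rural"))  # arts — yesterday's inequality, today's advice
```

Nothing in the code is malicious; the bias enters entirely through the training data, which is why auditing the data and the outcomes matters as much as auditing the model itself.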
NSTDA’s aim, its president Prof Sukit Limpijumnong is quoted as saying, is “to deliver a sustainable research ecosystem that supports real solutions for the country.” The drive for sustainable solutions resonates deeply in an era of climate change, global pandemics, and widening social divides. But sustainability, by definition, means tackling root causes, not just symptoms. As the late systems theorist Donella Meadows wrote, “There is profound leverage in understanding the system.” Intervening at the level of symptoms will always be less effective than addressing the underlying structures.
This is where we must zoom out. Thailand’s economic history, like that of many developing nations, has been shaped by colonialism, globalization, and internal power dynamics that have concentrated wealth and opportunity in specific regions and social classes. The disparity between urban and rural areas is stark, and access to quality education, healthcare, and infrastructure remains unequal. The 1997 Asian Financial Crisis, for example, disproportionately impacted rural communities, exacerbating existing inequalities and creating lasting setbacks in educational attainment. These historical forces continue to shape the landscape in which LEAD Education is being deployed.
The promise of AI to address these issues rests on the assumption that it can be deployed equitably and effectively. However, a 2023 UNESCO report on education technology highlighted the risk of exacerbating inequalities if digital infrastructure and access to technology are not universally available. The report warned that technological solutions can only be effective when combined with investments in teacher training, curriculum development, and addressing systemic barriers to education. A laptop and an AI tutor are of little use to a student who lacks consistent electricity, internet access, or adequate nutrition.
The ambition is clear: to leapfrog traditional development pathways through technological innovation. Yet, we can’t ignore the history of “technology as savior,” a narrative that often obscures underlying power dynamics and the need for fundamental social reforms. Think back to the enthusiasm surrounding mobile banking in Africa. While it expanded financial inclusion, it also opened new avenues for predatory lending practices and data exploitation, disproportionately impacting vulnerable populations.
Thailand’s experiment with AI in education is a microcosm of a larger global dilemma. Can technology truly democratize opportunity, or will it merely reinforce existing structures of privilege, perhaps even obscuring them behind a veneer of algorithmic neutrality? The answer, as always, lies not in the technology itself, but in the choices we make about how to design, deploy, and regulate it. We must proceed with cautious optimism, mindful of the potential for both progress and unintended consequences, ensuring these tools empower rather than further marginalize. And crucially, we must remember that technology is never a substitute for justice.