Dr Adam Edmett is Head of EdTech Innovation at the British Council, where he works on the design and delivery of technology-enabled education programmes reaching hundreds of thousands of teachers worldwide through partnerships with government education ministries. This work has been recognised with multiple awards at the UK’s Learning Technology Awards, including gold in 2025 for Best Blended Learning in China and Indonesia, silver in 2024 for Best Learning Technology Project in Brazil and Jordan, and bronze in 2023 for Best Use of Mobile Learning in Rwanda and Nigeria. With nearly three decades of experience across classroom teaching, language education, academic research, and digital product development, Adam brings both a practitioner’s instinct and a researcher’s scepticism to the question of what technology can and cannot do for education. He holds a Doctorate in Education from the University of Bath, where his research focused on cognitive engagement in online teacher development, and his current work examines the impact of generative AI on English language teaching, with a particular focus on equity, responsible use, and the risks of widening existing digital divides.
In an exclusive conversation with K12 Digest, Dr Adam talks about the evolving relationship between education and technology and the lessons learned from decades of experimentation in digital learning. Drawing on nearly thirty years of experience across teaching, research, and global education programmes, he reflects on how thoughtful design and evidence-based decision-making can help schools use technology more effectively. He also shares insights from large-scale teacher development initiatives and explains why successful innovation often depends on understanding local realities rather than chasing technological hype.
You have built your career at the crossroads of education and technology. What pivotal moments or experiences shaped your journey into EdTech leadership?
In terms of technology… many! The fundamentals probably go back to the Sinclair ZX81, then the ZX Spectrum, then the Acorn Electron, before what was, at the time, my ultimate dream machine: the BBC Micro. And even earlier than that, there was Atari, Intellivision, and the first-ever gaming tech: do you remember the bat-and-ball game you could play on the TV? That’s actually a very good illustration of how far we’ve come in 40 years: literally a line going up and down and a dot bouncing around the screen, just a few pixels, and look where we are today. It’s staggering.
I think my early obsession with computing was the foundation for my interest in EdTech and teaching. I was a teacher, but with this hobbyist background in computers, I could see the potential. I’ve also been very fortunate to work for an organisation that has always innovated in the EdTech space. In the early 2000s, the British Council had a student community website called Global Village – essentially a Facebook before Facebook. The concept was brilliant: connecting all of our students globally through forums, chat rooms, and private messaging. But at the time, the technology wasn’t stable enough, and digital literacy levels weren’t where they needed to be, so the community never reached critical mass. It taught me a great deal about why brilliant tech ideas can fail within an educational ecosystem: even when the concept is sound, reality has a way of intervening.
Then there’s luck. EdTech hasn’t always been as fashionable as it is now. Take the field of international development, where distance learning via broadcast and print was supposed to bring education to everyone on the planet and transform our future. It didn’t. It failed on most measures. For many years afterwards, distance education and EdTech became almost dirty words. Funders weren’t interested, having invested heavily in promises that never materialised. But now EdTech is the flavour of the month again. Whether the results will follow this time remains to be seen.

In your current role at the British Council, what has been the most impactful innovation initiative you’ve led, and what made it successful?
I can refer to external validation for this question: the UK’s Learning Technology Awards, where we’ve won medals or been a finalist for four of the last five years. In 2021, we were shortlisted for our Digital Task Force initiative: our response to the pandemic, where we switched all our face-to-face activity to online over a very short period. Aside from the change in mode of delivery, we also adapted the way we worked together as a global team, reconfiguring hierarchies and areas of work, galvanised by the need to respond to the changing needs of English teachers around the world.
In 2023, we won bronze for the Best Use of Mobile Learning for projects in Rwanda and Nigeria, where connectivity and device access are a challenge. In 2024, we won silver for Best Learning Technology Project for our teacher development work at scale in Brazil and Jordan. Finally, in 2025, we won gold for Best Blended Learning for innovative teacher development programmes in China and Indonesia. I know you asked for one innovation, but they all share a common thread in what makes them successful! The British Council is an endorser of the Principles for Digital Development, a framework that helps organisations design and deliver technology-enabled programmes that are inclusive, sustainable, and evidence-driven. These principles guide our work globally and shape how we adapt our projects to local contexts.
We can take the Brazil initiative as an example of how we do this. The programme provided tailored professional development for teachers focused on up-to-date and practical classroom methodology. We supported 502 English teachers – 374 of whom graduated – and indirectly benefitted more than 106,000 students across the country.
Its success was built on a deliberate application of the Digital Development principles, starting with understanding the existing ecosystem. From the outset, the project was designed around the realities of teachers’ access to technology and connectivity. We selected a combination of online learning via a Learning Management System (LMS) and low-bandwidth tech, such as WhatsApp, to ensure participation even in areas with limited internet stability. Modules were translated into Portuguese, regional cohorts were formed to align with local cultural variations, time zones, and school calendars, and Brazilian e-moderators were engaged to lead delivery: all active choices reflecting the infrastructure and working patterns in the Brazilian education system.
Equally important was designing with people, not for people – it’s a crucial distinction. We asked teachers nationwide to use our Self-Assessment Tool (SAT) to understand their professional development priorities. Translated into Portuguese, it generated 1,670 responses, which directly informed the course content, pacing, and delivery methods. We also ascertained each teacher’s English language proficiency, along with their digital constraints and the time they had available for training outside school hours. After building the course based on this input, teachers could select the most relevant modules, participate in live sessions or self-paced learning, and work within supportive regional peer groups.
Designing for inclusion meant reducing barriers to participation at every turn: translating module landing pages into Portuguese, allowing flexible module selection, providing continuous support via WhatsApp, organising regional cohorts for localised networking, and ensuring first-language support from Brazilian e-moderators (i.e., translanguaging). Local e-moderators also brought deep knowledge of the Brazilian education system, helping participants adapt the methodology to their classrooms. This built capacity and increased the likelihood that new approaches would be sustained beyond the programme.
On the data side, the British Council’s corporate approach – underpinned by GDPR compliance and other relevant regulations – ensures that all legal requirements for data handling are met in every country in which we operate. Our LMS supplier, eCom Learning Solutions, based in Scotland, applies its own rigorous security protocols, with the LMS consistently delivering high standards of data protection and privacy for all users.
And critically, we used evidence throughout to improve outcomes. It’s not enough to understand how teachers have engaged with content only after a programme has finished, so participant feedback was collected during the course, allowing for real-time refinements to activities and guidance. E-moderators were shown how to monitor participants’ performance and progress, offering individual contact and support in teachers’ first language. The final results were clear: 98 per cent of participants said the course met their expectations and had a positive impact on their teaching. The programme achieved a 75 per cent graduation rate, meaning three out of four participants completed all requirements and were awarded a certificate. This is well above industry norms for online and remote learning.
There are other principles we adhere to, and if interested, you can find them here: https://digitalprinciples.org/. Personally, I feel this framework is relevant for a whole range of contexts and at different levels: from national CPD programmes right through to school-level digital innovation and transformation.

School education systems worldwide are navigating rapid change. From your perspective, what are the most pressing challenges schools face today in integrating technology effectively?
The main one for me is the lack of clear, evidence-based information on technology. The challenge is that tech is evolving so rapidly that it’s incredibly difficult to gather the evidence, act on it, and embed good practice before the technology has already taken root in classrooms and is affecting learners. The digital literacy gap for teachers is only going to be made worse by AI. The ‘silver bullet’ mentality, the sense that this shiny new thing will change everything, is back, and it’s only going to intensify as major players look to monetise the enormous investment that’s poured into AI. It’s encouraging to see schools and education systems beginning to push back, and one hopes this is the beginning of a more critical, sustained engagement rather than a one-off reaction.
That said, never underestimate the power of these technologies once they are woven into the fabric of daily life. Look at Nepal’s attempt to ban social media: whatever its political origins, that overnight ban cut thousands of small businesses off from their customers. We’ve reached a point where something that started out as a fun way to reconnect with old school friends has become so embedded in economic and social life that a sovereign government struggles to enforce its own policy within its own borders. That should give us pause.
AI is reshaping conversations in education. How do you see artificial intelligence influencing teaching, assessment, and student learning over the next five years?
As with previous technological shifts, I think the experience will depend heavily on where you live. I’m generally optimistic, but there is a real danger that AI will widen rather than close existing digital divides. For example, we have already seen an explosion of AI tools that promise to aid teacher productivity, freeing up time so teachers can focus more on what matters – teaching. If we examine one popular GenAI benefit, the creation of materials (e.g., worksheets, reading texts, assessment tasks), there seems to be a real positive gain: it can save a teacher’s time, could improve the quality of resources, and, where there is scarcity, could even increase the availability of materials. However, when you examine this more closely, it is still skewed towards benefiting those who have resources rather than those without. What if your school does not have a printer or photocopier, as is often the case in very low-resource contexts? Furthermore, best practice is that a teacher needs to be part of the creation process, identifying when AI-generated material or content is displaying bias or inaccuracies. An issue with GenAI is that you need to be competent in the area of the output: if you are not, you will be unable to see where it has made mistakes. With the global teacher shortage crisis (South Asia and Sub-Saharan Africa in particular), there is an increased chance of AI use by less qualified teachers who are unable to critique AI-generated content. So something like material creation, which appears an obvious win, might prove problematic in a low-income country.
I’ve painted a somewhat negative picture, but there are clear positives. My specific area of interest is language education, and I can see significant change on the horizon — not just in terms of how technology might assist learners, but in deeper questions about linguistic justice and what it means to be a language user when computers can perform many language tasks more efficiently than humans. Language was supposed to be one of the things that made us distinctively human. That assumption is now worth examining carefully.
But returning to the classroom and the change I can see happening. Firstly, the development of a learner’s ability to speak English. A typical obstacle to developing this skill has been that a learner may want to practise speaking, but her teacher does not speak English, nor can her peers, nor perhaps anyone in her village or town. The potential change here in terms of an entirely new opportunity for practice is, I feel, highly significant. There may even be additional benefits of a non-human conversational partner; for example, research has described reduced levels of anxiety when learners talk to a chatbot. There are also the usual digital advantages over a human in that AI is available whenever it is needed, and the cost could be very low in an open-source context. And when you add the possibility of personalisation for carefully tailored input and individual feedback on oral performance, then you can see how powerful a tool this will be for language learning. Perhaps in terms of English language teaching methodology, more communicative approaches will be able to flourish where previously they were hindered by a lack of opportunity to genuinely communicate. This new capability will have implications for assessment as well; for example, are human oral examiners soon to become a thing of the past?
With AI, there’s a lot of talk about personalisation. The promise is that precisely tailored instruction will allow anyone to learn what they need, when they need it, delivered in a manner that best suits how they learn. At long last, we can break free from the outdated one-size-fits-all classroom model where the gifted are held back and those who need more assistance do not receive it. The potential for education is transformative, but we should bear in mind that, like AI, the quest for digital personalisation has a history. Going back to 2008, who remembers the EdTech startup Knewton? They, like others, were going to change the world, and they raised millions of dollars on the promise of using what was then termed Big Data to personalise education. Predictably, the outcome did not match the hype, and Knewton was eventually sold off to a publisher for a fraction of the money that was invested. So, are we merely in another hype cycle, or is this a genuinely transformational moment in education?
My view is that GenAI will allow for more personalisation, for example, when assessing and grading writing and providing ongoing formative feedback, or with a GenAI conversational agent that adapts the level of language and topic area as per learner needs. However, serious questions remain. What’s the actual impact of personalisation on learning, and at what point does overreliance on AI begin to impair the development of critical thinking? Given that GenAI can provide feedback at a level comparable to, or even surpassing, a teacher, is there a risk that busy teachers entirely forgo their role in that process? And where do we draw the line on privacy – are we prepared to allow data on our emotional states to be analysed to enhance specifically tailored instruction? The EU has already banned this. The US has not yet, and China is already leveraging the surveillance potential of AI. Currently, most educational technology is deployed outside the classroom in both high- and low-income countries. If that trend persists, and personalisation increasingly means isolated learning, then we need to think carefully about how collaborative learning can be preserved even as instruction becomes more individualised.
Lastly, a few words on accessibility. The next few years will transform various aspects of accessibility, initially in terms of the improved ability to interact with a computer without the use of a keyboard. And not so far into the future, AI agents will be able to perform tasks that may have been either very difficult or, in some cases, impossible for those with a disability. There will be other new capabilities that transform accessibility that are not yet in view. This is mainly because advances in accessibility are often an incidental outcome of broader commercial technology progress rather than the target of direct funding. Again, in an ideal world where governments were willing to intervene, equity of access could be accelerated to make a real impact on inclusion.
As for the raging debate on whether AI will replace teachers, I don’t see this at all. Very particular tasks will change, and some roles will look quite different, but the idea that you can remove the human relationship from education misunderstands what education is.

As a leader driving digital transformation, how do you balance innovation with safeguarding pedagogical values and ethical considerations?
Safeguarding is absolutely central to the British Council’s work — we have a dedicated team located around the world who provide guidance and assurance across our programmes and projects. Ethical considerations are equally paramount. Our AI in ELT report, for example, was one of the earlier pieces of work in this space to raise issues such as algorithmic bias, as well as the more overlooked question of language standardisation and what it means when AI systems effectively privilege certain varieties of English over others.
Digital transformation can and should improve pedagogy — though how we use computers is always shaped by the educational thinking of the time. Take Computer-Assisted Language Learning as an example. Pedagogy is sometimes driven less by principle than by what’s convenient or what practical constraints make possible. I remember students asking me why they had to practise speaking with the person sitting next to them, who also couldn’t speak the language, and might reinforce mistakes. I’d give the approved answer, i.e., that they wouldn’t actually learn errors that way, and that peer practice was very valuable. But the honest answer was often simpler: it was the best available option within the constraints of a classroom. That tension between the ideal and the feasible is something technology keeps reframing.
Beyond your professional responsibilities, are there personal interests or values that influence the way you approach leadership and decision-making?
On leadership: let people get on with it. Command-and-control management styles only create the illusion of control while demotivating the people who are being micromanaged. Something that has stayed with me from Tolstoy’s War and Peace is the contrast between Napoleon on the hill at Borodino surveying the battle below, issuing orders that barely reach the front lines, by which time conditions on the ground have already shifted — and Kutuzov, old and apparently half-asleep, who offers a completely different model of leadership: someone who understands the limits of control and works with the flow of events rather than against them. Tolstoy uses this contrast to make one of his central philosophical arguments: that grand commanders who believe they’re directing the course of history are largely deluding themselves. The chaos of battle is shaped by so many individual decisions on the ground that no single person on a hill, however brilliant, can truly control it.
I think the same applies to teams. A team performs best when everyone knows their role but has the freedom and independence to determine how they fulfil it. Everyone needs to be a leader within their own sphere of influence, and giving people the support and confidence to do that is, I think, the most effective form of leadership. It can look like passivity from the outside, and you may occasionally have to defend it. But it works — particularly, I should say, when you’re fortunate enough to work with people who genuinely believe in the mission, are motivated by the work itself, and take pride in doing it well.
On decision-making: information and analysis matter enormously, but so does instinct — the kind of intuition that only comes from experience. Whether AI can ever replicate that is an open question. To me, it feels like the most human and creative part of strategy: the moment when you move from processing information to actually committing to a direction.

What advice would you offer to students and young professionals who aspire to build careers in education, innovation, or global development?
This is, honestly, an incredibly difficult question to answer at this particular moment. So, I’d step back and say this: things don’t have to be as they are. Challenge the system. Read thinkers like Foucault to understand historical contingency — how power, usually not our own, shapes what we think is possible, what we can say, what we can’t say, and perhaps even what we’re able to think. We don’t have to accept the parameters we’ve been handed.
We can use this new wave of technology to benefit humanity, but we should never mistake good intentions for good outcomes. Unforeseen consequences are the norm in technology, not the exception. I’d want every major tech leader to have that reminder every single morning: the road to hell is paved with good intentions. I’ve been involved in technology for most of my professional life, and I remain enthusiastic about its potential, yet I still find myself wishing, sometimes, that the smartphone had never been invented.
