AI is reshaping learning at breakneck speed. Cutting through the myths is essential to unlock its real benefits—and protect what matters most in education.

GUEST COLUMN | by Josh Nesbitt


Artificial intelligence is reshaping education faster than most students and educators can learn how to use it effectively, leaving governments, learning bodies, and students to grapple with the question: what role should AI play in learning?

‘…what role should AI play in learning?’

I asked AI, and it told me, citing an analysis of media coverage of AI in education, that 44.1% of global headlines are negative, 25.3% neutral, and 30.6% positive. ChatGPT would have me believe that within education, AI is a troubling topic—unsurprising in such a traditional, slow-moving sector, where digital transformation is still contending with complex systems and mixed investment.

Recent surveys show the unease among educators and learners is real—nearly 70% of educators worry about how AI can be misapplied, while 62% of learners fear AI is eroding critical thinking, creativity, and problem-solving skills.

But how much of this anxiety is grounded in reality, and how much is fueled by myth?

Is AI truly all-knowing?

Not since the emergence of search engines has the power of technology to consolidate and surface knowledge been so impactful.

It’s easy to assume that tools like ChatGPT possess near-infinite knowledge, producing authoritative answers instantly. Certainly, KPMG’s recent global study found that 53% of students have high confidence in AI answers. Yet every response carries a disclaimer: ‘AI can make mistakes, check important information.’

Recall the media coverage statistics I cited earlier, presented as seemingly definitive data. To the untrained eye, numbers that precise can be convincing. But the query was broad: it specified no timeframe and asked for a general summary of sentiment, which is subjective. ChatGPT didn’t scan every headline; it formed a “most likely” answer based on patterns in the data it has seen.

AI’s allure often leads to what psychologists call the Oracle illusion: projecting human-like certainty and wisdom onto machines. Users may take output as fact, skipping verification. But AI is not a sage; it is a pattern-recognizer, impressive but inherently limited.

‘But AI is not a sage; it is a pattern-recognizer, impressive but inherently limited.’

AI can empower learners with exponential access to information, but turning that access into true knowledge requires AI literacy: framing queries, interpreting outputs, and questioning biases.

Is AI replacing teachers?

With each technological evolution, there is the fear that something intrinsic to humanity will be lost. Certainly, Pew Research’s recent survey shows that 43% of the US public believe AI will lead to fewer teaching jobs in the next 20 years.

Yet Arizona State University’s partnership with OpenAI is a clear example of how traditional universities can leverage AI to augment course delivery, rather than replace the role of the teacher completely. Their Freshman Composition course—ASU’s largest offering—uses AI to provide writing assistance to accelerate students’ development. Experienced faculty shape the content, then introduce AI for real-time tutoring otherwise unattainable in a course of its size. Here, the teacher’s role is honed, not replaced.

AI can act as a co-instructor: providing personalized tutoring based on a teacher’s resources, differentiating a curriculum to make courses more accessible, running simulations and virtual scenarios, and marking assignments that have definitive answers. These are all tasks that remove unproductive friction from the learning process and free the teacher to provide expertise.

If anything, AI highlights the true value of the teacher. Education isn’t about transferring knowledge—it’s relational, not transactional. Human experience, personal perspective, and emotional intelligence cannot be replicated by an algorithm.

‘Human experience, personal perspective, and emotional intelligence cannot be replicated by an algorithm.’

Rather than replacing teachers, AI has powerful potential to enhance education: personalizing instruction, automating repetitive tasks, and supporting curriculum development.

Is using AI amplifying cheating?

The emergence of AI also shines a light on an uncomfortable truth: any unchecked technology has the potential for misuse.

Students can and do ask AI to generate work for assessments, the modern equivalent of paying a peer to do the work for them. The challenge for colleges is to safeguard assessment methods as the nature of cheating changes and AI-related misconduct takes hold. It’s an age-old academic problem in a fresh setting. If a student is motivated to cheat, this is the new method of doing so.

For the discerning learner—equipped with AI literacy and an expert teacher—AI can unlock skills rather than shortchange the academic experience. Using AI as a research and development tool is no more cheating than asking Google in lieu of scouring physical books; the student needs to know what to ask, how to analyze the results, and how to apply the information. The uninvested student may merely copy and paste.

‘Using AI as a research and development tool is no more cheating than asking Google in lieu of scouring physical books; the student needs to know what to ask, how to analyze the results, and how to apply the information.’

Indeed, AI tools can supercharge the learning process so a student can study more effectively, faster. Testing knowledge against an AI tutor, getting real-time feedback, and improving access to content are all within the modern student’s arsenal, and they empower deeper critical and creative skillsets. For a student who wants to engage in class, AI tools can only make the learning experience more rewarding and enjoyable.

The distinction is clear: the problem lies in misuse, not in the tool itself. Colleges need to keep asking why students cheat, rather than scapegoat their means. AI tools that equip, empower, and encourage students could produce more motivated learners, less inclined to take a shortcut to success.

Treating AI as a tool, not a solution

Much of the dialogue around AI and learning centers on fears that it will remove our agency: inhibiting students’ critical thinking, replacing the teacher, and undermining the integrity of our academic institutions.

That’s why edtech developers need to design with agency in mind, leveraging AI to remove unproductive friction in the learning process—inefficient tasks that are necessary but not valuable—and to scaffold skills development.

‘…edtech developers need to design with agency in mind, leveraging AI to remove unproductive friction in the learning process.’

The question is not whether AI will transform education—it already has—but how we actively guide that transformation. Only by focusing on the human behind the machine can we embrace it as a tool rather than a threat.

Josh Nesbitt is Chief Technology Officer at Genio, a company creating beautifully simple learning tools that boost knowledge, skills, and confidence in learners everywhere. Josh is a software engineer and technical leader based in the UK. He’s been working on the web for the last 19 years, and during that time, he’s worked with a wide range of clients, from indie start-ups to some of the largest organizations in the world. His work spans from hands-on projects building large platforms to leading some of the best-performing teams in our industry. Alongside his work as a consultant, he also runs an internationally recognized conference called All Day Hey!, which brings people from all over the world to the heart of Leeds to learn, inspire and share stories. He can be reached on LinkedIn.

The post Knowledge is Power: Debunking Myths Around AI to Unlock Better Learning appeared first on EdTech Digest.