Generative AI is reshaping higher education, influencing all six key areas highlighted in the 2025 EDUCAUSE Horizon Report. The report offers expert insights and actionable steps to proactively shape GenAI’s role in teaching and learning.
Kids Need Soft Skills in the Age of AI, but What Does This Mean for Schools?
Generative AI is forcing K-12 schools to reconsider what key skills to teach students.
How to Use Your Phone to Make Life Better
We’ll unlock a few new tips for using our phones in surprising and useful ways. And, because sometimes putting your phone down entirely is a good idea, we’ll cover a few tips for using it less, too. Let’s get into …
Canada needs a national strategy for generative AI in higher education
Without co-ordination between Ottawa and the provinces, universities risk deepening inequities in how students and faculty use generative AI across campuses.
Nothing launches AI tool for building mini apps using prompts
AI-powered app development is really taking off, and smartphone maker Nothing seems intent on capitalizing on the trend: the company on Tuesday revealed Playground, an AI tool that lets users create apps with simple text prompts and deploy them to …
What Games Have to Teach Us About AI-Enhanced Learning
The article explores how principles from educational games and simulations can guide the design of AI-enhanced learning, emphasizing agency, interactivity, challenge, and social engagement. It proposes a framework of principles and elements to ensure AI augments human learning while preserving …
AI for Student Engagement: A Global Review of Emerging Strategies
The article surveys 106 case studies (not listed) and identifies 24 emerging AI methodologies to boost student engagement, grouped into six categories. Most categories are useful, though “instructional delivery” feels less convincing, and access requires bypassing a spamwall.
The AI Tsunami Is Here: Reinventing Education for the Age of AI
In the age of artificial intelligence, higher education must move beyond content delivery toward interactionalism—a human-centered approach to learning that fosters collaboration, creativity, adaptability, feedback, and well-being. This article series will explore this system redesign in-depth, beginning with teaching and …
Enacting assessment reform in a time of artificial intelligence
Enacting assessment reform in a time of artificial intelligence builds on the principles and propositions outlined in Assessment reform for the age of artificial intelligence.
Perplexity Comet and online quizzes
The author argues for redesigning assessments to authentically measure learning rather than banning AI, noting tools like Perplexity’s new Comet browser make “cheating” trivial. Comet’s sidebar AI can access secure course content (e.g., Brightspace quizzes) directly in the browser, correctly …
OpenAI, Google, and Anthropic all offer AI tutors for students. Do they work?
AI is rapidly becoming embedded in education, with OpenAI, Google, and Microsoft rolling out free AI tutor tools and forging partnerships with schools.
A tech reporter tested these new AI tutors firsthand, curious whether they truly help students learn effectively.…
Learn Your Way: Reimagining textbooks with generative AI
Textbooks are limited as one-size-fits-all resources, but Google’s Learn Your Way experiment uses generative AI to create personalized, multi-format learning materials. Early studies show promising results: students using it scored 11 points higher on retention tests compared to standard digital …
AI in the Classroom Is Often Harmful. Why Are Educators Falling Prey to the Hype?
Teachers who rush to embrace chatbots are ignoring the essence of education.
19 AI-infused apps that prove NPUs are already changing how we work
With Intel’s Core Ultra and AMD’s Ryzen AI chips integrating NPUs, more apps now run AI tasks locally, making them faster and more energy-efficient. This shift delivers quicker responses, offline functionality, and new automation options, improving both everyday and professional …
Why Language Models Hallucinate
Large language models “hallucinate” because training and evaluation reward confident guesses over admitting uncertainty, making errors statistically inevitable. To curb this, benchmarks must be redesigned so models aren’t penalized for expressing uncertainty, fostering more trustworthy AI.