What does it mean to build AI literacy and use it responsibly across classrooms, continents, and generations?
That question grounded our second AI and Education Summit, held last week on the Massachusetts Institute of Technology campus. Organized by the MIT RAISE Initiative with the App Inventor Foundation, the 2025 Summit brought together educators, students, researchers, policy leaders, and technologists to examine how artificial intelligence is shaping and being shaped by the future of learning.
Over three days, participants examined both the promise and pitfalls of AI through workshops, youth panels, research presentations, and a Global AI Hackathon. This year’s event featured more than 50 papers, live demos, and an expanded youth track highlighting student-led work. Participants came from over 80 countries, ranging in age from five to seventy-seven.
From App Inventor prototypes countering climate misinformation to calls for multilingual models grounded in equity, the Summit emphasized the urgency of preparing learners to shape AI. As Ana Lucía Pérez, a 10th grader from Puerto Rico, put it, “Change doesn’t start with perfection. It starts with curiosity and courage.”
Addressing Entrenched Pedagogies
Creativity “is part of what pushes humanity forward,” said Stanford’s Victor Lee during the opening-day plenary on AI and learning. That theme came through in sessions that explored how AI intersects with pedagogy, equity, and student agency.
In a conversation moderated by RAISE co-principal investigator Eric Klopfer, Lee joined Karen Brennan of Harvard and Mitch Resnick of MIT to ask what education should preserve and what it might need to reinvent. Brennan noted the inertia in school systems: “Do not underestimate the power of the education system to not change.” Resnick pointed out that AI could just as easily reinforce traditional teaching models as disrupt them. “Because AI could add some incremental advantages to what I see as outmoded approaches to pedagogy, it could reinforce them, re-entrench them, and not allow for the type of changes that we need for a changing society.” The panel emphasized that complexity does not drive engagement. “Rather than easy or hard, I’d focus more on how to make things meaningful and connect to people’s interests and passions,” Resnick said. Lee added that AI can feel like fast food: “cheap, quick things on demand.” “But the creative work,” he said, “is actually the interesting compositions…the ingredients of this other task. It’s like facilitating nourishment.”
In other sessions, educators shared strategies for building AI literacy that centered voice, reflection, and cultural context. Jie Chao of the Concord Consortium described a language arts module that uses AI-generated text to support both technical and critical thinking. Ulia Zaman presented findings from a classroom feedback chatbot that helped students reflect more deeply while letting instructors quickly grasp emerging themes. “It encouraged elaboration,” she said.
Teacher development also had its share of the spotlight. Karen Kirsch Page of Teachers College at Columbia University emphasized that successful integration hinges on educator preparation. “Teachers need to design with AI for their own subjects, students, classroom environments, and learning goals.” Others pointed to the widening gap between tool availability and teacher readiness.
Workshops led by the MIT RAISE team gave attendees hands-on time with AI-infused pedagogy, multilingual maker tools, and strategies for teaching AI metacognition. The shared goal was to help students build and question AI systems with intention.
Equity, Policy, and Global Readiness
“Who defines what responsible AI in education looks like? Whose values are embedded in the algorithms?” asked Salima Bah, Sierra Leone’s Minister of Communication, Technology, and Innovation. In a keynote address, she warned that AI trained on limited or biased data can misrepresent or erase entire communities.
“When our languages, identities, and histories are missing from the data, we are either misrepresented or made invisible,” she said. She called for open-source, multilingual learning tools and ethical standards that are universal in principle and flexible in practice.
Her keynote set the tone for sessions that challenged assumptions about AI’s global reach. In a policy panel moderated by Harvard’s Max Lu, speakers examined the gap between AI’s promise and the infrastructure required to support it. Felipe Neves of Google LATAM noted that even when AI tools are free, bandwidth and language barriers persist. Taylor Reynolds of MIT added that many users outside the United States rely on feature phones, not high-end devices. Policy needs to reflect that reality.
In the plenary with Brennan and Resnick, Victor Lee raised concerns about English-dominated tools. “People are expected to prompt in English, left to right, with American cultural assumptions baked in.” Inclusive access, he said, means redesigning systems from the ground up.
The Global AI Hackathon underscored these issues. More than 1,300 participants from 86 countries tackled real-world problems in education, equity, and climate.
UNESCO’s AI Competency Frameworks also featured in a panel discussion. Kelly Shiohira outlined the framework’s emphasis on critical thinking, sustainability, and lifelong learning. Cătălina Rață shared how her team in Romania developed an AI curriculum for students in digital poverty. Rishi Mazumdar described how they begin by training teachers before expanding to students.
“We’re going beyond Bloom’s taxonomy,” Shiohira said. “We’re asking students not just to apply knowledge, but to critically examine and co-create with AI.”
Speakers emphasized that access is not just about devices. It is about language, trust, and cultural fit. As one panelist said, “We can’t build for communities unless we build with them.”
Innovation in Tools and Practice
What happens when students stop using AI and start designing with it?
At Emerson College, Daniel Pillis conducted a year-long study on the Jibo robot, used as a “robotic actor” in film classes. Jibo improved participation and reduced anxiety, especially during solo projects. In one class, the robot even performed as a drag persona.
The Global AI Hackathon showcased apps created using MIT App Inventor. Winning entries addressed accessibility, misinformation, and community wellness.
Demo lounges featured work from the Personal Robots Group, MIT Scheller Teacher Education Program, and the RAISE curriculum team. These tools were designed to be adaptable, remixable, and usable across diverse contexts.
Youth Voices…and Uncertainty
A youth panel brought together high school students from across the Americas to share their experiences with AI. One panelist described asking ChatGPT for a recipe. Another had built a climate-focused app. All had used AI in school and expressed frustration with unclear rules. “Right now there’s a lot of confusion about if it should be added, if there’s proper usage,” one said. “If you fully know how to use it, it’s really useful.”
One student compared AI to driving: we don’t ban driving because people don’t know how to drive. We teach driver’s ed.
Student-led papers and posters added to the discussion. Winners were recognized alongside adult honorees during Friday’s closing ceremony.
Pérez, the 10th grader from Puerto Rico, stood out as a youth speaker. After participating in the 2024 hackathon, she launched a climate-themed AI hackathon for teens in her community. The event filled up in under a week.
What’s Next
The Summit ended with awards and a challenge. Best papers, posters, and hackathon projects were honored. Professor Cynthia Breazeal, Director of MIT RAISE, closed the event by calling for AI that centers equity, creativity, and care. “Whether you’re a learner, a parent, a policymaker, AI and education now go hand in hand,” she said. “To build with AI, to use it responsibly, that can happen only in learning environments that encourage both creative exploration and being mindful of the ways that AI directly impacts you, your family, your community, and beyond.”
Attendees left with new partnerships, sharper questions, and a sense of responsibility. The next generation is not simply inheriting AI. They are helping define it.