Daniella DiPaola is a Ph.D. student in the Personal Robots Group at the MIT Media Lab. Her research interests include understanding the social-emotional, ethical, and legal implications of AI and robots, particularly for those who are growing up with AI. She has developed various curricula to inspire K-12 students to think about the societal implications of artificial intelligence. Most recently, Daniella helped create the RAISE Day of AI curriculum, which weaves ethical, social, and policy considerations throughout its technical explanations. One of its main goals is to foster discussion of the “Blueprint for an AI Bill of Rights” released by the White House’s Office of Science and Technology Policy (OSTP) in late 2022.
Daniella received her B.S. in Engineering Psychology from Tufts University in 2016. Before beginning graduate school, she worked as a researcher in the consumer robotics industry.
How did you initially become interested in AI and Robotics?
I’ve always been fascinated with the intersection of people and technology. As an undergraduate, I studied human-computer interaction and design principles for creating user-friendly technology. I decided to work at a consumer robotics company after graduation because I knew the design guidelines for robots were not well defined, and I was excited about the opportunity to contribute to the field.
I quickly learned that the design process for social robots was very different from other technologies because it asked explicitly humanistic questions. For example, on any given day, we might be discussing: How should the robot make someone laugh? How should the robot respond to an insult? Should the robot act the same towards every member of the family? These prompts facilitated new types of collaborative conversations among my colleagues because we spoke about our own values as we designed these very familiar, human-like interactions. I became really interested in emulating these experiences for K-12 students and leveraging AI literacy to bring other important topics to the classroom.
Obviously, educators are nervous about the impacts AI will have in the classroom. What are some ways to alleviate those fears for all stakeholders when dealing with new technology?
I tend to think that becoming more informed about AI technologies, and having a clear picture of their capabilities as well as their shortcomings, will help us make decisions about how best to integrate them into schools. I recently heard about an educator who created a workshop for other educators in her school to learn more about ChatGPT. She said that her peers were not only thankful for the information, but that it opened up a dialogue among all of them about how to use the tool appropriately.
What has been the most rewarding aspect of your work to date, and what impact do you hope your work will have on the future of education?
In January, New York City Public Schools decided to ban ChatGPT for fear of how students might use it to cheat on their work. Soon after, our team created a curriculum on ChatGPT that defined creativity, explained how ChatGPT works technically, and facilitated a collaborative lesson that leveraged ChatGPT to create a classroom policy for its appropriate use. The curriculum was released as part of Day of AI and embodies everything we strive for in our work: hands-on activities, technical rigor, ethical framing, and co-creation with AI.
On the Day of AI, the chancellor of NYC Public Schools released a statement reversing the decision to ban ChatGPT and encouraging students and educators to learn about it and work with it responsibly. He cited our work as an example of how to embrace it appropriately. It was incredibly rewarding to realize that our team was able to swiftly provide resources for students and school districts to make sense of this novel technology, and that it had a direct impact on decisions within the largest school system in the United States.
Where do you see AI literacy making the biggest impact in the classroom, and where might educators be best off without AI for certain lessons?
I’m really excited about the opportunity for AI literacy to bring in topics that are less common in the classroom but especially important for students to engage with. For example, when we teach students how to create their own machine learning classifiers, we discuss the importance of representative datasets. This lends itself to rich discussions about fairness and equity, and it is a tangible way for students to reflect their values in what they build. Similarly, we have conversations about credit and ownership when working with generative AI, and about communicating our emotions when working with affective computing.
What is your favorite thing about being at MIT?
MIT, especially the MIT Media Lab, is a really exciting place for those who want to engage in interdisciplinary work. I’m really inspired by the way that my peers bring together the fields of engineering, science, art, psychology, design (just to name a few) to tackle some of the world’s most pressing problems.