Artificial Intelligence & Education
UC Irvine School of Education researchers are developing smart uses of AI and investigating ways it can advance the student and teacher experience.
By Christine Byrd
Huge leaps in artificial intelligence are already disrupting many aspects of education – from the way students complete writing assignments to how teachers evaluate mastery of skills. Yet with the new technology comes new concerns.
Worries about data privacy, biases in the algorithms and ethics prompted a series of executive orders from the White House earlier this year, including a proposal for an AI Bill of Rights that would protect users. Meanwhile, the National Science Foundation (NSF) announced $140 million in funding for AI research in topics including AI-augmented learning.
In UCI’s School of Education, researchers have spent years evaluating early uses of AI in education, such as automated essay scoring and intelligent tutoring systems. But the increased funding opportunities and recent release of ChatGPT have supercharged their efforts.
“We can’t prepare students to live in the society that we grew up in. We have to prepare them to live in the society that they’re going into,” says Mark Warschauer, professor of education and informatics, and founder of the Digital Learning Lab at UCI.
Roots in Science of Learning
The conversation about AI in education often forgets the two fields’ intersecting histories: artificial intelligence was born partly out of a desire to understand how humans learn.
“Many of the pioneers of AI were cognitive scientists who were interested in simultaneously understanding how the human mind works and developing machines that could embody the kind of intelligence humans have,” explains Shayan Doroudi, assistant professor of education. Doroudi earned his doctorate in computer science in 2019 from Carnegie Mellon University, where some of the foundational AI development occurred in the 1950s. Over time, the field shifted to focus more on engineering and automation, but Doroudi still sees opportunities to advance the science of learning.
“It goes both ways: we can take insights from human cognition to build better AI systems and, at the same time, if we can create machines that exhibit intelligent properties, we can study them to better understand how people learn and even improve how we teach,” he adds.
Right now, though, teachers are hungry for guidance on how to manage AI tools. When ChatGPT, a type of generative AI that can produce anything from a book summary to a poem, was released by OpenAI in Fall 2022, it rang alarm bells for educators. Worries about the potential for academic dishonesty and plagiarism led districts to quickly ban the technology. But that didn’t keep students – or teachers – from exploring it.
In July 2023, UCI and the Spencer Foundation hosted a half-day virtual conference about generative AI in education, which drew more than 1,500 registrants from across the U.S. and around the world, to discuss how AI can be used in writing and personalized learning. This reflects the sense of urgency teachers feel to understand and master these tools. In the conference, Warschauer described the “June-July contradiction” where students may be penalized for using AI like ChatGPT in school in June, but will be behind the curve if they don’t know how to leverage the tools when they enter the workforce in July.
Two Types of AI Literacy
Educators should consider the two distinct kinds of AI literacy, UCI experts say. The first is knowing how to access and use the tools, which falls under the umbrella of digital skills and media literacy. The second type includes learning to modify and create artificial intelligence tools, which enter the realm of computer science. UCI researchers are working together to build and evaluate curricula to support both types of AI literacy.
Warschauer recently received an NSF grant to support a project with UCI’s Henry Samueli School of Engineering that will incorporate ChatGPT into an upper-division class on professional communication. The research team is building a platform called PapyrusAI where students can log in and access the latest version of ChatGPT, but with constraints on how they can use it – such as generating feedback on a draft essay or getting quizzed on content for an upcoming exam.
Students might begin piloting the system as early as this winter, with plans underway to develop a version for K-12 schools as well.
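A constrained-access platform like the one described above could, in minimal sketch form, work by only assembling requests for a fixed set of approved interaction modes. The mode names and instructions below are illustrative assumptions, not PapyrusAI’s actual design:

```python
# Hypothetical sketch of a constrained LLM wrapper in the spirit of
# PapyrusAI: students pick an approved mode rather than prompting freely.
# Mode names and system instructions are invented for illustration.

ALLOWED_MODES = {
    "feedback": (
        "You are a writing tutor. Give constructive feedback on the "
        "student's draft, but never rewrite it or produce new prose."
    ),
    "quiz": (
        "You are a study coach. Quiz the student on the supplied course "
        "content, one question at a time."
    ),
}

def build_request(mode: str, student_text: str) -> list[dict]:
    """Assemble a chat request for an approved mode; reject anything else."""
    if mode not in ALLOWED_MODES:
        raise ValueError(f"mode {mode!r} is not permitted")
    return [
        {"role": "system", "content": ALLOWED_MODES[mode]},
        {"role": "user", "content": student_text},
    ]
```

The point of the design is that the system instruction, not the student, decides what the model is allowed to do – so the tool can give feedback or quiz, but not write the essay.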
“Students will learn how to better understand AI tools, different ways of accessing them, prompting techniques, methods to corroborate, and how to incorporate AI into their work ethically,” explains Warschauer.
It’s always better when students can be creators and not just consumers of new media and technologies. CodeAI, a project by graduate student Daniel Ritchie in the Digital Learning Lab, is a four-week curriculum that introduces elementary school students to AI and gives them the opportunity to train a small-scale version of ChatGPT. Students have piloted the program over the last two summers at the Delhi Center in Santa Ana.
Similarly, School of Education’s Rossella Santagata, professor of education, and Ha Nguyen, Ph.D. ’22, assistant professor of instructional technology and learning sciences at Utah State University, are collaborating on an NSF-funded project that will incorporate an AI chatbot into an existing science curriculum. The duo is partnering with Sara Ludovise, coordinator of the Orange County Department of Education’s hands-on science program Inside the Outdoors. They are developing a curriculum that teaches high school students principles of science communication by having them interact with and train chatbots that each have personas with different perspectives about marine life – an Indigenous fisherman, a marine biologist, a climate scientist or a surfer.
“These chatbots will provide opportunities to engage in discussions that we know are beneficial for learning, but that teachers may not be able to provide to every student,” says Nguyen. “What’s exciting about the curriculum is that developing science communications skills by engaging with the chatbots is one part, but having kids identify where the chatbot falls short is also part of the curriculum. What are the limitations of the chatbot, and how can understanding that help kids develop AI literacy?”
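Persona-driven chatbots of this kind are often built by pairing each persona with its own framing instruction. This is a hedged sketch under assumed data shapes, not the project’s actual code; the persona wordings are invented:

```python
# Illustrative sketch: each persona gets a distinct framing that shapes
# how the underlying model answers a student's question about marine life.

PERSONAS = {
    "indigenous_fisherman": (
        "You speak from generations of traditional ecological knowledge "
        "about local fisheries."
    ),
    "marine_biologist": (
        "You ground every answer in marine science and note uncertainty."
    ),
    "climate_scientist": (
        "You connect marine questions to climate data and long-term trends."
    ),
    "surfer": (
        "You speak from daily first-hand experience of local waves and "
        "water quality."
    ),
}

def persona_prompt(persona: str, question: str) -> str:
    """Combine a persona's framing with a student's question."""
    if persona not in PERSONAS:
        raise KeyError(persona)
    return f"{PERSONAS[persona]}\n\nStudent asks: {question}"
```

Because each persona answers the same question differently, students can compare perspectives – and probe where each chatbot’s framing breaks down.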
AI as Advisor & Mentor
In the Design & Partnership Lab directed by Professor June Ahn, researchers aim to leverage AI as an educational advisor, mentor and collaborator. Ahn points out that good advisors have strong listening and summarizing skills, which in turn inform their advice and recommendations – something Ahn believes AI may be able to effectively emulate.
Earlier this year, Ahn launched an AI project funded by the Chan Zuckerberg Initiative, in partnership with Anaheim Union High School District, to analyze students’ soft skills such as collaboration, communication, critical thinking, creativity and compassion. The goal is that students will share their reflections about team-based projects and the AI will evaluate the student’s growth in five key areas – not with a score, but with a summary that is informative and easy to act upon to support deeper learning.
“You don’t need to rely exclusively on test scores,” says Ahn. “Large language models allow us to give students a more holistic and nuanced summary of their learning – the type of thing that would require the teacher to interview every single student to produce, which is not feasible. That’s a big leap forward.”
In another project, funded by the Bill and Melinda Gates Foundation, Ahn’s team is creating an AI-powered mentorship tool to support individuals on specific career pathways. Students currently participating in the pilot program can type questions or share concerns about their future in the field of data science, and get responses about what kinds of questions they should ask, and who they might go to for help.
“Instead of AI giving you the answer, it guides you to get the answer,” explains Ahn. “In our pilot work, we’re finding that this process is super helpful for students because one of the big issues is they don’t know what they don’t know. Sometimes they are not even sure what questions to ask their professor or mentor.”
“Always, the idea is not to use AI as a replacement for humans, but as an enhancement,” says Ahn.
Helping Teachers & Enhancing Teamwork
Researchers see ample possibilities for AI to enhance and support teachers who are already stretched thin. Doroudi and graduate student Christopher Lechuga have developed an algorithm that enables teachers to create small groups of students who are ready to work on a particular math topic, based on their results in the online math program ALEKS (which, coincidentally, was developed by UCI cognitive sciences researchers in the 1990s). The algorithm can even pair up students so that one can tutor the other on a specific math concept. Their goal is to leverage AI to empower teachers and support solid pedagogical practices.
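The kind of readiness grouping and tutor pairing described here can be sketched with simple set logic over each student’s mastered topics. The data shapes below are assumptions for illustration, not the researchers’ actual algorithm or the ALEKS data format:

```python
# Hedged sketch of mastery-based grouping: each student maps to the set
# of topics they have mastered (e.g. derived from ALEKS-style results).

def ready_group(mastery: dict[str, set[str]], prereqs: set[str],
                topic: str) -> list[str]:
    """Students who have every prerequisite but not yet the topic itself."""
    return sorted(s for s, done in mastery.items()
                  if prereqs <= done and topic not in done)

def tutor_pairs(mastery: dict[str, set[str]],
                topic: str) -> list[tuple[str, str]]:
    """Pair each student who has mastered the topic with one who has not."""
    tutors = sorted(s for s, done in mastery.items() if topic in done)
    tutees = sorted(s for s, done in mastery.items() if topic not in done)
    return list(zip(tutors, tutees))
```

A teacher-facing tool would layer scheduling and class rosters on top, but the core decision – who is ready, and who can help whom – reduces to these subset checks.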
The Converse to Learn project from the Digital Learning Lab, funded by NSF, offers another example of an effort to empower teachers. ChatGPT will be used to embed dialogic questions into hundreds of children’s e-books – and AI will allow young readers to answer questions with feedback as they go through the books. Crucially, the questions and answers from each book will be programmed with review and input from educators to ensure they are age appropriate. Ultimately, the tool will allow a child reading alone to engage in the kind of back-and-forth conversation similar to that of a parent or teacher reading alongside them.
Nia Nixon (née Dowell), assistant professor of education, is also using AI to understand and enhance collaboration in team environments, such as students working together on a project or in a research lab. Her previous research found discernible differences between how men and women interact online, and between people of different races, so now her team is considering how to create intelligent systems that might guide students in online spaces like Zoom or Slack to better ways of communicating with their peers.
“We’re not focused on measuring learning gains, but how students feel when they finish working with a group – and those feelings are usually tied to learning gains,” she says. “This is a much more intimate communication than something like intelligent tutoring systems that can identify what you’re doing wrong in a math problem.”
Nixon is also looking at opportunities to embed a chatbot that can recognize and encourage creativity in online teams, infusing a burst of energy and enthusiasm right when a group needs it. The possibilities are exciting – though not fully understood.
“We’re at a fascinating and scary place because we don’t yet know the upper boundary of what AI can do,” she says. “It’s going to force us to have conversations about how to manage this technology appropriately and prepare society for it. But it’s not like a switch – we can’t turn it off.”
Embracing the Change
UCI experts encourage educators to first develop a basic understanding of what current AI tools can and cannot do – for example, ChatGPT can synthesize and predict language but it cannot think or create new insights.
Secondly, experts say teachers should help students learn how to use AI to enhance their work, not just to get answers. Ahn asks his students to share with the class how they use AI in their writing process.
“What they find is that the AI helps them quickly synthesize information, but doesn’t help them write really good papers,” he says. “Students need to be asked to do higher-order thinking. That’s what will differentiate folks in the era of AI – being able to move beyond the basics.”
Finally, teachers should be aware of the potential inequities and unforeseen consequences of AI.
“There are things we can do with AI, but they are uncritical uses that have negative consequences for other humans,” says Ahn. “You need to ask yourself when using an AI tool: Am I exacerbating an inequity? Am I dehumanizing the learning process? Am I amplifying a negative experience for students?”
“Let’s think critically about what kind of equitable or supportive experiences we want to promote and how we make sure we’re not promoting the potential negative uses of it – and let’s get ahead of the game and not wait to be reacting to it,” Ahn adds.
Doroudi, who has written about ethics in AI, believes that both computer scientists and education researchers need to collaborate with philosophers and ethicists to more fully think through the major ethical questions raised by the tools.
In the meantime, he advises teachers to trust their own insights about their specific classes when deciding when and how to introduce AI, and to maintain a healthy dose of skepticism.
“Whoever developed the algorithm doesn’t know anything about a teacher’s specific class, so teachers should leverage their insights about their group of students to determine how to use an AI tool in a way that supports their students, and in the most equitable way,” Doroudi says.
Despite the concerns, experts agree there is no going back: AI is part of our daily life, including in our classrooms.
“ChatGPT is us – our words, our language, our creation – and being able to use it to its fullest means developing our students to understand it, to utilize it, to experiment with it, and to create new things with it,” says Warschauer. “We need to help students know how to understand it, access it, prompt it, interrogate it, and ultimately how to master this very powerful tool.”