When GPT-4 was released, Cory Kohn couldn’t wait to introduce it to the classroom. A biology lab coordinator in an integrated science department serving Claremont McKenna, Pitzer and Scripps colleges, Kohn saw the tool as a promising teaching aid.
The tool promised to increase efficiency, he argued. But more than that, teaching his science students how to interact with it would matter for their careers, he first told EdSurge last April. In his view, it was akin to familiarizing students with an early version of the calculator: those who hadn’t used it would be at a disadvantage.
Kohn is not the only teacher confronting generative AI. But while he’s excited about its potential, others aren’t sure what to make of it.
For businesses, artificial intelligence has proven extremely profitable, by some accounts boosting the amount of funding flowing into edtech last year. That has fueled a frenzied rush to commercialize educational tools under the banner of AI. But the ambition of some entrepreneurs to use these tools to replace teachers or personal tutors has drawn skepticism.
Discussions about the ethics of how these tools are implemented have also been somewhat overshadowed, according to one observer. Nonetheless, teachers are already deciding how, or even whether, to adopt these tools in class. And the decisions they make may be influenced by factors such as their familiarity with technology or even their gender, according to a new study.
A difference of opinion
People are still figuring out the limits of this shiny new technology in education, says Stephen Aguilar, an assistant professor at the Rossier School of Education at the University of Southern California. That can lead to missteps, he says, such as treating chatbots as replacements for instructors or paraprofessionals. Deploying the tools that way assumes rapid, iterative feedback stimulates critical thinking, when what students really need are in-depth conversations that take them in unexpected directions, Aguilar says.
If the tools are to deliver on their promise to improve education, Aguilar thinks it will take deeper thinking about what generative AI can actually do, thinking that goes beyond the tools’ promise to catalyze efficiency.
A former sixth- and seventh-grade teacher in East Palo Alto, California, Aguilar is now associate director of the Center for Generative AI and Society, which announced its launch, along with $10 million in seed funding, last year. The center works to understand how AI is reshaping education so it can develop useful recommendations for educators, Aguilar says. The goal is to truly understand what’s happening on the front lines, because no one knows exactly what the main implications will be at this stage, he adds.
As part of his role at the center, Aguilar has conducted research on how teachers think about AI in the classroom. The study, “How Teachers Navigate the Ethical Landscape of AI in Their Classrooms,” surveyed 248 K-12 teachers. Those teachers were largely white and from public schools, which the authors note as a limitation.
The main conclusions? Teachers’ confidence or anxiety about using technology shaped their attitudes toward AI.
Perhaps more surprisingly, the study also found that teachers weigh the ethical implications of these tools differently depending on their gender. According to the report, when thinking about AI, women tended to rely more on rules in their reasoning, considering the guidelines to follow to use these tools beneficially. They emphasized the need to maintain confidentiality and to avoid bias or confusion arising from the tools. Men, on the other hand, tended to focus more on specific outcomes, such as the ability to spark creativity, the report says.
Artificial tools, human judgments
When EdSurge first spoke to Kohn, the lab coordinator, he was using ChatGPT as a teaching assistant in biology classes. He cautioned that he could not completely replace his human teaching assistants with a chatbot. Sometimes, he says, the chatbot simply missed the point. For example, it would recommend control variables for model experiments with students that simply did not make sense. Its usefulness therefore had to be assessed on a case-by-case basis.
Kohn also teaches a first-year writing course, AI Chatbots in Science, and he remains optimistic. He says his students use ChatGPT Plus, the paid version of OpenAI’s ChatGPT, to brainstorm research questions, help digest scientific articles, and simulate datasets. They also have the AI review their writing, Kohn says.
This fits with what Aguilar has observed so far about how the chatbot craze could affect the teaching of writing. Ultimately, Aguilar says, large language models could offer students an engaging way to think about their own writing, provided students approach them less as generators of writing and more as readers: an extra pair of digital eyes capable of probing the text. Even then, students still need to evaluate the feedback they receive from these tools, he adds.
These days, Kohn considers a chatbot a kind of TA-plus. It can perform the tasks of a human TA, he says, but also more varied tasks that would traditionally have fallen to a librarian or editor, such as helping students sift through the literature or refine their ideas.
Students still need to use it judiciously, he adds: “It is not a truth-telling panacea.”