Overworked teachers and stressed high school students are turning to artificial intelligence to ease their workload.
But they’re not sure how much they can trust the technology — and they see many ethical gray areas and potential for long-term problems with AI.
How are both groups approaching the ethics of this new technology, and what can school districts do to help them make the most of it, responsibly?
That’s what Jennifer Rubin, principal researcher at Foundry10, an organization focused on improving learning, decided to find out last year. She and her team held small focus groups on AI ethics with a total of 15 teachers across the country as well as 33 high school students.
Rubin’s research is expected to be presented at the annual conference of the International Society for Technology in Education later this month in Denver.
Here are four important takeaways from her team’s in-depth interviews with students and teachers:
1. Teachers see the potential for generative AI tools to ease their workload, but they also see big problems
Teachers said they are trying to use AI tools like ChatGPT to help with tasks such as lesson planning or creating quizzes. But many educators are unsure to what extent they can trust AI-generated insights, or were unhappy with the quality of the responses they received, Rubin said.
Teachers “expressed many concerns about the credibility of the information,” Rubin said. “They also found that some of the information in ChatGPT was really out of date or not aligned with learning standards” and therefore was not particularly useful.
Teachers also worry that students will become too reliant on AI tools to complete their writing assignments and therefore “fail to develop the critical thinking skills that will be important” in their future careers, Rubin said.
2. Teachers and students need to understand the strengths and weaknesses of technology
There is a perception that adults understand how AI works and know how to use the technology responsibly.
But that’s “not the case,” Rubin said. That’s why school and district leaders “should also consider ethical use guidelines for teachers” as well as students.
Teachers have big ethical questions about what tasks can be given to AI, Rubin added. For example, most teachers interviewed by the researchers view the use of AI to grade student work or even offer feedback as an “ethically murky area due to the importance of human connection in how we provide feedback to students regarding their written work,” Rubin said.
And some teachers have returned to pen and paper rather than digital technologies so students can’t use AI tools to cheat. That frustrated students who were used to taking notes on a digital device, and it goes against what many experts recommend.
“AI might have had this unintended effect, where some teachers in our focus groups removed the use of technology in the classroom altogether in order to circumvent the potential for academic dishonesty,” Rubin said.
3. Students have a more nuanced perspective on AI than one might expect
The high school students Rubin and her team spoke with don’t see AI as the technological equivalent of a classmate who can write their homework for them.
Instead, they use AI tools for the same reasons adults do: to cope with a stressful and overwhelming workload.
The teens talked about “having an extremely busy schedule with schoolwork, extracurricular activities and after-school work,” Rubin said. Any conversation about student use of AI needs to be grounded in how students use these tools to “help alleviate some of that pressure,” she said.
For the most part, high school students use AI to help them with research and writing for their humanities courses, as opposed to math and science, Rubin said. They might use it to brainstorm essay topics, to get feedback on a thesis statement, or to help with grammar and word choice. Most said they weren’t using it to plagiarize wholesale.
Students were more likely to rely on AI if they felt like they were doing the same work over and over and if they had already “mastered that skill or done it enough times,” Rubin said.
4. Students should participate in the process of developing ethical use guidelines for their schools
Students have their own ethical concerns about AI, Rubin said. For example, “they’re really concerned about the vagueness and unfairness of some students using it and some not, and getting grades on something that can impact their future.”
Students told researchers they wanted advice on how to use AI ethically and responsibly, but they weren’t getting that advice from their teachers or schools.
“There’s a lot of policing” for plagiarism, Rubin said, “but not a lot of productive conversations in classrooms with teachers and adults.”
Students “want to understand what the ethical limits of using ChatGPT and other generative AI tools are,” Rubin said. “They want to have guidelines and policies on what that might look like for them. And yet, at the time these focus groups took place, they weren’t getting that from their teachers or their districts, or even their parents.”