Students are leaning into AI. Faculty need to catch up, according to Penn GSE study

January 26, 2024
Photo: A smartphone held above a keyboard displays ChatGPT generating an answer to the prompt "explain nuclear fusion in simple terms."

As more students plug queries into ChatGPT and other artificial intelligence (AI) tools, it’s time for more faculty and schools to catch up and offer a roadmap for fair and ethical use. That’s one recommendation from a recent Penn GSE study on how AI tools, including ChatGPT, impact higher education.

Penn GSE researchers, led by Adjunct Associate Professor Ross Aikins, interviewed 30 undergraduate, graduate, and doctoral students last spring to map their perceptions and use of rapidly evolving AI tools. The findings show generative technology is moving fast: students are experimenting with it and would benefit from guidance to help them navigate what participants in Aikins’ study call “the Wild West of AI.”

AI technology became widely available on campuses when OpenAI released ChatGPT, free to use, in November 2022.

“It was like a switch,” Aikins said. “It was a moment where students and faculty suddenly had access to this new thing that seemed more like a magic trick than a party trick.”

The Penn GSE study also revealed a breadth of AI applications, with students deploying AI tools for various academic activities, including crafting emails and cover letters, generating essay ideas, editing, and researching sample exam questions.

Among adopters, AI tools were helpful for just about everybody, but international students reported that they were “especially” helpful, lowering language barriers and making information easier to access. Several reported using the tools to draft emails to professors or prospective employers. One called their availability “a blessing.”

A few students said they use AI to run queries for friends in countries where the technology is blocked.

Students in the study acknowledged that using AI in the professional world might be more acceptable than in academia and believed it could help them work more efficiently.

Understandably, faculty and administrators are wary of students taking shortcuts or plagiarizing. Students in the study were largely aware of such concerns but still reported considerable confusion when AI policies were absent or improvised mid-term. To address that, Aikins said, students need clearer policies from their instructors and colleges, including in syllabi.

Students do understand there are limits, Aikins noted. Respondents said they understood that chatbots and AI platforms can produce factually incorrect or irrelevant material, so-called “hallucinations.” As more students turn to AI for academic work, they could be tripped up by misinformation or faulty suggestions.

“The current flaws in this tech seem to allow for a teachable moment about information literacy and for us all to be a little more attuned to the potential of coming across misinformation, which actually could be a net good thing,” Aikins said.

While students explore AI, they receive limited — if any — guidance on its use from faculty. In the 2023 Penn GSE study, 36 percent of respondents said that they did not receive explicit guidance from instructors on whether they could use AI for academic work.

“There was considerable confusion in the Spring about exactly which tools or use cases were potentially of concern to academic integrity,” Aikins said, noting that “AI” is used liberally in the tech industry as a marketing construct. “Different people have different conceptions of what ‘AI’ constitutes, so while I may think ChatGPT is one of the more useful text generators, students are seeing ‘AI’ or ‘AI powered’ to describe a lot of products that they may have already been using for a long time.”

Without guidance, students are adrift. According to a separate survey of 1,000 students last spring by Best Colleges, about half of the students surveyed said their instructors had not spoken openly about using AI tools, and 60 percent said their schools and teachers had yet to discuss using AI responsibly or ethically.

“Students asked me if Grammarly or auto-complete were considered AI in terms of what’s totally safe to use or what could be improper,” Aikins said. “I certainly didn’t have any definitive answer to that, and I think that’s kind of where we’re all at in terms of trying to parse out what separates AI from just a really smart program, and the extent to which any of that even really matters in terms of creating and evaluating academic work.”

Some students said they struggled in unclear situations. One fourth-year student in the Penn GSE study recalled working on a group project and suddenly seeing AI-generated work populate the shared Google document. With no standards for AI use spelled out, the group had to meet and set its own policy, a predicament the student described as “awkward.”

“AI should make faculty more concerned about the learning process rather than the final product in terms of the paper or assignment or deliverable,” Aikins said, adding, “It helps to have policies and guidance.”

The tech is not a fad. In Aikins’ study, 17 students were asked if they planned to use AI again, and they all answered affirmatively. They said AI helps them work and process information more efficiently. “We see with other technologies and behaviors that once busy students perceive there to be an efficacy boost, it’s very hard to say no to that in especially competitive college environments,” said Aikins.

For the study’s second phase, Aikins plans to interview students in fine and studio arts, including architecture, art, and design, to learn how they deploy generative graphic AI and how its impact on academic work differs across majors and fields.

It’s helpful for faculty to continually explore the technology. Aikins advises faculty and staff to try ChatGPT and other AI tools commonly used by students, and to be both intentional about how they incorporate them into classwork and transparent about why. He suggested that instructors include clear AI policies in their syllabi and assignments and be prepared to update them regularly as the tools evolve.

“Faculty who look at these technologies as a net negative or simply as a vehicle for plagiarism are missing a huge piece of what students are doing with AI right now, and the enormous leveling potential of these tools,” he said. “There are arguments for limiting its use, but I think we all agree that it’s important to educate students about what constitutes responsible use.”