By Piper Hutchinson, Louisiana Illuminator
After a popular plagiarism detection service released an artificial intelligence detection feature, reports of plagiarism to LSU's student accountability office increased nearly fivefold. Professors and administrators are grappling with the best way to approach the rapidly evolving technology.
In the spring 2022 semester, the LSU Office of Student Advocacy and Accountability found 28 students responsible for misrepresenting their work. In spring 2023, 136 students were found culpable, nearly five times as many, Jonathan Sanders, the office's director, told Faculty Senate members at a meeting Monday.
Most of the increase came after Turnitin, plagiarism detection software many professors use, unveiled its AI detection feature in April, Sanders said.
The Office of Student Advocacy and Accountability also reported an increase in the number of students cleared of wrongdoing, from 11 in spring 2022 to 48 in spring 2023.
Artificial intelligence is already being used at LSU and by students across the nation, but there’s little agreement on how to handle academic integrity concerns.
While some professors have taken their courses entirely offline, having students write essays by hand during class time to remove any doubt about whether they are using artificial intelligence, others are choosing to turn a blind eye or even incorporate the technology into their courses.
For students in less permissive courses, even staples such as Grammarly, a grammar-checking tool used commonly and openly for years, can lead to discipline.
Even LSU’s office charged with disciplining students for academic integrity missteps acknowledges there is no foolproof method to determine whether something was created with artificial intelligence.
AI-detection software is far from certain, so the Office of Student Advocacy and Accountability has to rely on other evidence, including interviews with students, to establish a preponderance of the evidence that academic fraud occurred, meaning a panel votes on whether it is more likely than not that a violation took place.
Some faculty have raised concerns that the lack of a unified approach to student use of artificial intelligence could lead to more problems. Students who are allowed to use the technology in one course, for instance, might believe they are allowed to use it in another course where the professor does not have a specific policy.
While the university advises faculty to include a policy on artificial intelligence in their syllabi, it does not require them to do so.
While LSU does not yet have policies on the topic, LSU Provost Roy Haggerty acknowledged the need to develop some. He announced Monday the administration would convene a working group with the Faculty Senate to craft policy for the university.
But while students are cautioned to be wary of relying on artificial intelligence, the administration is embracing its use and encourages faculty to explore ways to use it themselves.
Haggerty is teaching a course on artificial intelligence this semester and even used it to write the first version of his syllabus, he told faculty senators Monday. Haggerty emphasized the need to train AI practitioners for the university and the business community.
Haggerty is also exploring ways to incorporate AI into the day-to-day operations of the university. In his own research into the technology, he said he has used ChatGPT to generate ideas for improving freshman retention, to compile and analyze data, and to write Python code.
Haggerty said he would like the university to have its own large language model (LLM). Unlike publicly available LLMs such as ChatGPT, it would not be open to the general public; only members of the university community would have access and be able to find information on policy, courses and events at LSU.
Some faculty are already incorporating the technology into their own research. Scott Baldridge, a math professor, asked Haggerty whether LSU was looking into a university-wide license for ChatGPT so faculty would not have to pay for it out of pocket; Haggerty indicated one would likely be coming in the near future.
But as faculty and administration embrace artificial intelligence, others are concerned about a double standard.
“We are using it and we’re using it to catch them,” LSU College of Business professor Roy Heidelberg said. “So are we making an issue out of something right now that perhaps we have no right to make an issue out of?”