Higher ed’s next big test
AI returns to campus for another semester. Are universities ready?
This story appears in the fall 2023 issue of RED Magazine.
Q: What were the names of the bears that the Soviets sent into space?
A: The bears that were used in the early Soviet space program were not named. They were part of a series of biological experiments conducted by the Soviet Union to test the effects of space travel on living organisms. … Unfortunately, many of the animals did not survive these missions, including the bears, due to the harsh conditions of space travel.
A grisly scenario, but plausible, right?
There’s only one problem: The Soviets never sent bears into space.
The above is an exchange with ChatGPT, the much-ballyhooed artificial intelligence chatbot from OpenAI. When asked for the names of the make-believe cosmonauts, the text-based tool generated false information, a phenomenon known as “hallucination.” This scenario highlights one of many challenges and inevitabilities faced by universities across the world with the sudden rise of generative AI.
“We can use AI tools to accomplish many tasks, but many of those tools are still ignorant,” said Shaun Schafer, Ph.D., associate vice president for Curriculum, Academic Effectiveness and Policy Development at Metropolitan State University of Denver. “If we cede all our processes to AI, it results in an incomplete education.
“At the same time, the cows are out of the barn. We live in a world where this (technology) exists. Now, what should we do with it?”
Debating impact
That question kicked off the University’s Generative Artificial Intelligence Taskforce, or GAIT, in February. The group, comprising faculty members and administrators, is mandated to develop a response to AI-focused issues in teaching and learning, academic integrity and assessment.
A substantial outcome of GAIT’s work thus far is clarity of expectations for students. Beginning this fall, every MSU Denver faculty member had to address in their syllabus their pedagogical relationship to AI, ranging from incorporation to prohibition.
Schafer, a task force co-chair, noted that the broad representation of constituents in the group is critical to ensure its effectiveness, with faculty members leading the way.
English Professor Jessica Parker, Ph.D., a task force member, has been cautiously experimenting with ChatGPT and Google’s Bard to spice up PowerPoint presentations.
Though she understands the wide range of responses to AI in academia, she thinks the rumors of the college essay’s demise may be greatly exaggerated — for now.
Parker said the technology can mimic text composed by humans but can’t replace it because the tools can’t think beyond specific parameters or understand emotion. “It’s also only as effective as users are discerning with their prompts,” she added. “Garbage in equals garbage out.”
Jeff Loats, Ph.D., director of MSU Denver’s Center for Teaching, Learning and Design and GAIT co-chair, would like to see more “hair-pulling” at universities grappling with how to assess learning differently. “I don’t think higher ed is addressing this with as much urgency as we might need,” he said.
He noted that, with discipline-dependent exceptions, educators largely use writing as a proxy for thinking: A subject-matter expert evaluates what the learner knows based on what they write. AI tools skew this process.
In response, some faculty members have reintroduced in-person paper-based and oral exams. But at an institution such as MSU Denver, where one-third of classes are remote, that remains “a two-thirds solution at best, not even getting into matters of accessibility,” Loats said.
One of the key differences between AI-generated work and traditional plagiarism is that the latter is detectable by machines. In late July, OpenAI (the creator of ChatGPT) pulled the plug on its own detection software due to its low efficacy. Even if such tools proved reliable, with an error rate of just 1% to 2%, Loats said, implementing them at scale would be a logistical nightmare.
“Assuming it’s looking at every assignment from every (MSU Denver) student, that’s at least 10,000 per week. At that rate, 100-200 students weekly could be wrongly accused of cheating,” he said. “The field doesn’t yet have a great example of how we deal with the change in assessment — that’s the work I think deserves an intense response from the instructors and departments.”
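Loats’ back-of-the-envelope math checks out. A minimal sketch, using only the figures quoted above (the 10,000-assignments-per-week volume and the 1% to 2% error rates; the variable names are illustrative):

```python
# Rough check of the false-accusation estimate quoted above.
assignments_per_week = 10_000  # weekly volume cited by Loats

# At a 1%-2% detector error rate, the number of assignments
# wrongly flagged each week:
for error_rate in (0.01, 0.02):
    wrongly_flagged = assignments_per_week * error_rate
    print(f"{error_rate:.0%} error rate -> ~{wrongly_flagged:.0f} wrongful flags per week")
```

Even a detector that is right 98% of the time, applied to every assignment, would implicate hundreds of innocent students each semester, which is the base-rate problem Loats is pointing at.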
What is lost?
As a Journalism and Communication Studies student, Shania Rea knows the importance of asking the right questions and fact-checking her work. The first-generation senior recently used ChatGPT for a course on communication theory. She found the tool potentially helpful but the results “a bit iffy.” She suggested it would be best suited to assisting with secondary tasks or filling gaps.
But she was also quick to note the potential pitfalls of trading critical thinking for convenience. “If we become overly reliant upon computers for everything, what have we lost?” she asked.
Rea may not be the only skeptic. Traffic to ChatGPT’s website decreased by nearly 10% in June, according to internet analytics firm Similarweb.
Some have speculated this was due to the end of the school year. But Steve Geinitz, Ph.D., assistant professor of Computer Sciences at MSU Denver, suggested an alternate explanation.
“The functionality has its limits,” he said. “For every little task someone wants to make more efficient, a lot of times it just isn’t going to work for any number of reasons.”
While Geinitz, a former data scientist with Facebook, doesn’t foresee an “AI winter,” he is not surprised by the cooling of the initial hype cycle.
Financial markets are still bullish on AI: Cloud and enterprise spending on chips from AI semiconductor manufacturer Nvidia has helped more than triple the company’s stock price this year. Goldman Sachs’ spring forecast projected that AI could potentially raise global GDP by 7% — alongside eventually displacing 300 million jobs.
Practically speaking
One of a teacher’s primary challenges is the sheer volume of grading. Jangwoo Jo, Ph.D., assistant professor in Beverage Management in MSU Denver’s School of Hospitality, has used AI tools to help manage his assessment of 60-plus students each semester. The result is the ability to spend more time directly working with individual students to ensure that they’re getting the most out of their MSU Denver education. “The utility and benefit of AI are acknowledged,” Jo said. “And though there are concerns, I’m optimistic about this opportunity to prepare students for the advances they’ll encounter in the workforce.”
Although Geinitz favors in-class testing with no devices to gauge learning, he knows that students need to understand AI tools as they look to compete in the workforce.
As a single mom who works while going to school, Bailey Evans would seem the ideal candidate for a time-saving tool. The Business Management major recently experimented with AI tools to source citations for a research paper. The results were subpar, and ironically, Evans ended up writing the citations by hand to save time.
Perhaps even more meaningful, however, was the sense of self-authorship.
“When I write something, I want people to know it’s coming from me and not the computer. Otherwise, it feels disingenuous, like cheating,” she said.
Issues of accuracy will undoubtedly be addressed as AI technology continues to improve at a breakneck pace. But as Evans’ response indicates, the technology’s integration on college campuses extends far beyond course-correcting for nonexistent space bears.