What Is Learnosity's GPT-4-Powered Test-Generating Tool? The CEO Explains
A new GPT-4-powered tool from Learnosity can instantly generate test questions along with detailed, varied feedback. Here’s what educators need to know.
Learnosity, an assessment technology company, recently entered the AI arena with a new tool powered in part by GPT-4 and designed to help create high-quality assessments for students.
Proponents of AI in education often point to test generation as exactly the kind of time-consuming administrative task that, with proper oversight, AI can do more efficiently than a human. Learnosity’s tool is designed to do just that, allowing test publishers to quickly create massive item banks or to complete tasks such as converting multiple-choice questions into more challenging compare-and-contrast or short-answer questions.
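Learnosity hasn’t published the internals of its tool, but conceptually, converting an item from one format to another boils down to careful prompting of a model such as GPT-4. Here is a minimal sketch using the OpenAI Python SDK; the model choice, the prompt wording, and the convert_item helper are illustrative assumptions, not Learnosity’s actual API:

```python
# Illustrative sketch only -- not Learnosity's implementation.
# Assumes the OpenAI Python SDK (`pip install openai`) and an
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def convert_item(mcq_text: str, target_format: str) -> str:
    """Ask GPT-4 to rewrite a multiple-choice item in another format.

    `target_format` might be "short-answer" or "compare-and-contrast".
    """
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "You are an expert assessment author."},
            {"role": "user",
             "content": f"Rewrite this multiple-choice question as a "
                        f"{target_format} question, preserving the "
                        f"learning objective:\n\n{mcq_text}"},
        ],
    )
    return response.choices[0].message.content

mcq = ("Which process do plants use to convert sunlight into energy?\n"
       "A) Photosynthesis  B) Respiration  C) Fermentation")
print(convert_item(mcq, "short-answer"))
```

Run against a whole item bank, a loop like this is what makes it practical to regenerate hundreds of questions in a new format in minutes rather than weeks.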
AI can also help create many more responses to different student answers, all of which can enhance student learning, says Gavin Cooney, CEO and co-founder of Learnosity.
What Is Learnosity’s GPT-4-Powered Test-Generating Tool Good At?
Rather than trying to replace human authors, Cooney says, he and others at Learnosity began exploring the potential of AI test generators by asking, “Can I make an existing expert author 10 times more effective, and can they create more content, better content, more interactive content with better student feedback, and so on?”
Until recently, AI technology wasn’t at a point where it could do this, but GPT-4’s advancements, along with other tools, now make it feasible. The idea is that an assessment can be richer and do more to enhance learning when it provides students with more detailed feedback, particularly on wrong answers, or distractors.
“It would be massively expensive and labor-intensive to write student feedback for every possible distractor all the way along, but now it's possible,” Cooney says.
What AI-Generated Questions Can Look Like in Practice
When he spoke to Tech & Learning, Cooney had just returned from seeing Elton John perform at the Glastonbury Festival. This was (allegedly) the legendary performer's last concert in the United Kingdom, and if you were to ask students a multiple-choice question about where John performed his final UK concert, Cooney says you’d want a variety of plausible answers as well as good student feedback.
“If the answers are Glastonbury, Coachella, Wembley Stadium, or Giants Stadium in New York, the idea is, are they valid options?” he says.
You’d also want to make sure each option comes with detailed feedback that enhances student learning. For instance, the tool should respond, “Yes, he did play Glastonbury and it was broadcast,” he says. Or: “No, he didn’t play Coachella. He never played Coachella, and Coachella is in California, and you should have looked at British things.”
Cooney is quick to add that this is a simple example and that multiple choice isn’t the most effective way to ask questions. Still, it illustrates the kind of detailed feedback AI can generate almost instantly.
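To make the idea concrete, the per-distractor feedback Cooney describes could be generated with a short loop over the answer options. The sketch below, again using the OpenAI Python SDK, is a hypothetical illustration of the general technique, not Learnosity’s product code; the distractor_feedback helper and prompt are assumptions:

```python
# Illustrative sketch only -- not Learnosity's implementation.
from openai import OpenAI

client = OpenAI()

def distractor_feedback(question: str, options: list[str],
                        correct: str) -> dict[str, str]:
    """Generate one piece of targeted feedback per answer option."""
    feedback = {}
    for option in options:
        prompt = (
            f"Question: {question}\n"
            f"Correct answer: {correct}\n"
            f"Student chose: {option}\n"
            "Write one or two sentences of feedback for this choice: "
            "confirm and enrich it if correct, or explain the specific "
            "misconception if incorrect."
        )
        response = client.chat.completions.create(
            model="gpt-4",  # assumed model choice
            messages=[{"role": "user", "content": prompt}],
        )
        feedback[option] = response.choices[0].message.content
    return feedback

print(distractor_feedback(
    "Where did Elton John play his final UK concert?",
    ["Glastonbury", "Coachella", "Wembley Stadium", "Giants Stadium"],
    correct="Glastonbury",
))
```

This is the step Cooney calls prohibitively labor-intensive to do by hand: writing a distinct, instructive response for every wrong answer, not just the right one.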
Advice For Educators on Using AI to Generate Tests
Right now, Learnosity’s tool is geared toward authors who create tests professionally. However, as more AI tools become available and easier to use, more and more educators may consider using one to help them write test questions. When they do, Cooney says, it's important to avoid mere knowledge tests and instead aim for deeper learning, problem-solving, and critical thinking.
“The example I gave you about where Elton John played is a knowledge test,” he says. “There’s a temptation to just make it knowledge-based stuff when it can be comprehension and understanding and application.”
Beyond checking AI-generated questions for educational efficacy, educators need to be mindful of potential mistakes and cultural biases.
“You need to check for inherent biases in there,” he says. “It could be a question about Christmas that doesn’t appeal to certain demographics. There’s norms that are fine for you in Massachusetts but not so fine for somebody in Texas or in Mexico, or in Ireland for that matter.”
Ultimately, AI should not be left to navigate student learning on its own, and it’s best for educators to think of the tool as a copilot. “What we want them to do is be able to direct the AI very, very closely,” Cooney says. “We're trying to offer review and edit as you go along.”
Erik Ofgang is a Tech & Learning contributor. A journalist, author, and educator, his work has appeared in The New York Times, The Washington Post, Smithsonian, The Atlantic, and the Associated Press. He currently teaches at Western Connecticut State University’s MFA program. While a staff writer at Connecticut Magazine, he won a Society of Professional Journalists award for his education reporting. He is interested in how humans learn and how technology can make that more effective.