Large Language Models and Automatic Item Generation

Educational Exercises and Activities Community Group,

Hello. I am pleased to share a paper on the use of large language models (LLMs) for automatic item generation (AIG):

Generating Multiple Choice Questions from a Textbook: LLMs Match Human Performance on Most Metrics (2023) [PDF<https://ceur-ws.org/Vol-3487/paper7.pdf>]
Andrew M. Olney

Multiple choice questions are traditionally expensive to produce. Recent advances in large language models (LLMs) have led to fine-tuned LLMs that generate questions competitive with human-authored questions. However, the relative capabilities of ChatGPT-family models have not yet been established for this task. We present a carefully controlled human evaluation of three conditions: a fine-tuned, augmented version of Macaw; instruction-tuned Bing Chat with zero-shot prompting; and human-authored questions from a college science textbook. Our results indicate that on six of seven measures tested, neither LLM's performance differed significantly from human performance. Analysis of LLM errors further suggests that Macaw and Bing Chat have different failure modes for this task: Macaw tends to repeat answer options, whereas Bing Chat tends not to include the specified answer among the answer options. For Macaw, removing error items from the analysis results in performance on par with humans on all metrics; for Bing Chat, removing error items improves performance but does not reach human-level performance.
Best regards,
Adam Sobieski

Received on Friday, 29 September 2023 05:41:43 UTC