AI-Infused Semantic Model to Enrich and Expand Programming Question Generation
DOI: https://doi.org/10.37965/jait.2022.0090
Keywords: assessment, programming, semantic modeling, automatic question generation
Abstract
Creating practice questions for programming learning is not easy: it requires the instructor to diligently organize heterogeneous learning resources, i.e., conceptual programming knowledge and procedural programming rules. Today's programming question generation (PQG) still relies largely on demanding manual creation by instructors, without advanced technological support. In this work, we propose a semantic PQG model that aims to help instructors generate new programming questions and expand their pool of assessment items. The PQG model is designed to transform conceptual and procedural programming knowledge from textbooks into a semantic network using a Local Knowledge Graph (LKG) and Abstract Syntax Trees (ASTs). For any given question, the model queries the established network to find related code examples and generates a set of new questions from the associated LKG/AST semantic structures. We conducted an analysis comparing instructor-made questions from nine undergraduate introductory programming courses with textbook questions. The results show that the instructor-made questions were much simpler in complexity than the textbook ones. The disparity in topic distribution prompted us to further study the breadth and depth of question quality and to investigate question complexity in relation to student performance. Finally, we report the results of a user study on the proposed AI-infused semantic PQG model, examining the quality of the machine-generated questions.
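To give a flavor of the AST side of the approach, the sketch below shows how a structural signature can be derived from a code example by walking its Abstract Syntax Tree. This is a minimal illustration using Python's standard `ast` module, not the authors' actual implementation; the function name `ast_signature` and the matching-by-node-types idea are assumptions for demonstration only.

```python
import ast

def ast_signature(source: str) -> list[str]:
    """Return the AST node-type names of a code snippet, in the
    breadth-first order produced by ast.walk. Two snippets with
    similar signatures share similar procedural structure, which
    is one plausible way to find 'related' code examples."""
    tree = ast.parse(source)
    return [type(node).__name__ for node in ast.walk(tree)]

# A textbook-style example: a counting loop with output.
example = "for i in range(3):\n    print(i)"
print(ast_signature(example))
```

A generator could then compare such signatures across a bank of examples to retrieve structurally similar code for new question variants.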
License
Copyright (c) 2022 Authors
This work is licensed under a Creative Commons Attribution 4.0 International License.