Explainable Artificial Intelligence for Workflow Verification in Visual IoT/Robotics Programming Language Environment
DOI: https://doi.org/10.37965/jait.2020.0023

Keywords: explainable AI, π-calculus, VIPLE, education

Abstract
Teaching students the concepts behind computational thinking is a difficult task, often gated by the inherent difficulty of programming languages. In the classroom, teaching assistants may be required to interact with students to help them learn the material, and time spent grading and offering feedback on assignments takes away from the time available to help students directly. We therefore propose a framework for an explainable artificial intelligence that performs automated analysis of student code while offering feedback and partial credit. The system rests on three core components: a knowledge base, a set of conditions to be analyzed, and a formal set of inference rules. In this paper, we develop such a system for our own language, VIPLE, by employing π-calculus and Hoare logic. The system can also learn rules on its own: given sample solution files, it extracts the important aspects of a program and generates feedback that explicitly details the errors students make when they deviate from those aspects. The level of detail and the expected precision can be adjusted through parameter tuning and by varying the sample solutions.
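To make the three-component design concrete, here is a minimal, hypothetical Python sketch, not the authors' implementation: the facts, rule names, weights, and feedback strings are invented for illustration, and the π-calculus/Hoare-logic machinery is abstracted into boolean conditions over facts extracted from a workflow.

```python
# Hypothetical sketch of the framework's three components: a knowledge
# base of facts extracted from a sample solution, conditions to be
# analyzed, and inference rules mapping violations to feedback and
# partial credit. All names and weights here are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[set], bool]  # does the condition hold over the facts?
    weight: float                     # share of credit carried by this rule
    feedback: str                     # explanation shown when the condition fails

# Knowledge base: facts extracted from a sample solution's workflow,
# e.g. "a sensor read happens before the motor command".
solution_facts = {"reads_sensor", "sensor_before_motor", "loops_forever"}

rules = [
    Rule("sensing", lambda f: "reads_sensor" in f, 0.4,
         "The workflow never reads the distance sensor."),
    Rule("ordering", lambda f: "sensor_before_motor" in f, 0.4,
         "Motor commands are issued before the sensor is read."),
    Rule("looping", lambda f: "loops_forever" in f, 0.2,
         "The control loop terminates instead of running continuously."),
]

def grade(student_facts: set) -> tuple[float, list[str]]:
    """Apply each inference rule; accumulate credit and collect feedback."""
    score, notes = 0.0, []
    for rule in rules:
        if rule.condition(student_facts):
            score += rule.weight
        else:
            notes.append(rule.feedback)
    return score, notes

score, notes = grade({"reads_sensor", "loops_forever"})
print(f"partial credit: {score:.0%}")   # partial credit: 60%
for note in notes:
    print("-", note)
```

In the paper's actual system, the boolean conditions would instead be derived from π-calculus process equivalences and Hoare-style pre/postconditions over the VIPLE workflow, and the rule set would be learned from the sample solutions rather than written by hand.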
License
Copyright (c) 2021 Authors
This work is licensed under a Creative Commons Attribution 4.0 International License.