The potential and challenges of GenAI in programming knowledge assessment
Abstract
The purpose of this article is to critically analyze the potential and challenges of using GenAI (generative artificial intelligence) for the automated assessment and verification of students' programming knowledge. Methodologically, the study employs a mixed approach: analyzing the functionality of popular generative models and competitive programming platforms (Codeforces, LeetCode, HackerRank), and comparing their applicability to various task formats and types of knowledge (syntactic, semantic, logical). The capabilities of such systems in providing adaptive feedback, automating routine checks, and supporting individualized training are considered. The analysis shows that, despite advantages such as scalability, interactivity, and the ability to generate contextually relevant content, significant limitations remain, including risks of incorrect assessment, vulnerability to model deception, lack of pedagogical interpretation of results, and potential student dependence on AI assistance. In conclusion, the article emphasizes the need to develop regulatory, pedagogical, and technical frameworks that ensure the ethical and responsible application of GenAI in programming knowledge assessment systems.
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.