TY - GEN
T1 - How Large Language Models are Transforming Teachers' Assessment of Student Competency
T2 - 2025 International Conference on Electronics, Information, and Communication, ICEIC 2025
AU - Lee, Dongyub
AU - Kim, Daejung
AU - Loeser, Martin
AU - Seo, Kyoungwon
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
AB - Evaluating student competencies is critical to fostering academic success, but the traditional process of assessment and report generation is time-consuming and labor-intensive for teachers. The emergence of Large Language Models (LLMs) presents an opportunity to automate this process, potentially reducing the workload for educators. However, concerns such as hallucinations in LLM outputs and the lack of usability studies involving teachers raise questions about their reliability and practical application. In response, we developed an LLM-based report writing system specifically designed for real-time competency evaluation. To assess its effectiveness, we conducted a case study involving five educational experts, each with over 15 years of experience. These experts used the system to generate student competency reports and found them to be both sensible and specific. While they expressed concerns about issues like insufficient detail, they recognized the system's significant time-saving benefits. Our findings demonstrate that LLMs can positively impact teachers' competency assessments by streamlining the reporting process, offering valuable support for educators.
KW - Artificial intelligence
KW - Assessment
KW - Competency
KW - LLM-based report writing
KW - Large language model
KW - Student
UR - https://www.scopus.com/pages/publications/86000022252
U2 - 10.1109/ICEIC64972.2025.10879736
DO - 10.1109/ICEIC64972.2025.10879736
M3 - Conference contribution
AN - SCOPUS:86000022252
T3 - 2025 International Conference on Electronics, Information, and Communication, ICEIC 2025
BT - 2025 International Conference on Electronics, Information, and Communication, ICEIC 2025
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 19 January 2025 through 22 January 2025
ER -