TY - JOUR
T1 - Development and validation of Generative AI Competence Scale (GenAIComp) among university students
AU - Lee, Seul Chan
AU - Baby, Tiju
AU - Vongvit, Rattawut
AU - Lee, Jieun
AU - Kim, Young Woo
AU - Cha, Min Chul
AU - Yoon, Sol Hee
N1 - Publisher Copyright:
© 2025 Elsevier Ltd
PY - 2026/3
Y1 - 2026/3
N2 - The rapid development of Generative Artificial Intelligence (Generative AI) across several sectors underscores the need for a systematic tool to evaluate AI competence. Current digital literacy frameworks lack AI-specific competencies, resulting in inconsistencies in the assessment of AI competence. This study aims to establish a standardized assessment framework for Generative AI competence by identifying key skill factors and empirically validating a structured evaluation tool called the Generative AI Competence Scale (GenAIComp). The proposed GenAIComp comprises five essential factors: Information and Data Literacy, Communication and Collaboration, Digital Content Creation, Safety and Ethics, and Problem-Solving. A quantitative approach was employed, incorporating expert validation, pilot testing, and extensive empirical evaluation involving 1000 participants, principally university students. The factor analysis confirmed a robust five-factor structure with strong psychometric properties. The final model demonstrated excellent fit indices, confirming its reliability and validity in assessing Generative AI competence across the five key factors. The results demonstrate that educational background considerably impacts AI competence, with individuals from technical disciplines showing greater aptitude for problem-solving and content generation. Gender-based disparities were noted, with males achieving marginally higher scores on several factors, albeit with minimal effect sizes. Correlation analysis indicated that perceived AI expertise and frequency of AI utilization significantly influenced competence, especially in data literacy and problem-solving, but showed weaker correlations with ethical awareness. GenAIComp provides a reliable tool for assessing AI competence, helping educators, industry experts, and policymakers design AI training programs and integrate AI literacy into curricula, thereby advancing AI technology in society. Future research should explore its applicability across cultures and include performance-based assessments to enhance the evaluation of AI competence.
AB - The rapid development of Generative Artificial Intelligence (Generative AI) across several sectors underscores the need for a systematic tool to evaluate AI competence. Current digital literacy frameworks lack AI-specific competencies, resulting in inconsistencies in the assessment of AI competence. This study aims to establish a standardized assessment framework for Generative AI competence by identifying key skill factors and empirically validating a structured evaluation tool called the Generative AI Competence Scale (GenAIComp). The proposed GenAIComp comprises five essential factors: Information and Data Literacy, Communication and Collaboration, Digital Content Creation, Safety and Ethics, and Problem-Solving. A quantitative approach was employed, incorporating expert validation, pilot testing, and extensive empirical evaluation involving 1000 participants, principally university students. The factor analysis confirmed a robust five-factor structure with strong psychometric properties. The final model demonstrated excellent fit indices, confirming its reliability and validity in assessing Generative AI competence across the five key factors. The results demonstrate that educational background considerably impacts AI competence, with individuals from technical disciplines showing greater aptitude for problem-solving and content generation. Gender-based disparities were noted, with males achieving marginally higher scores on several factors, albeit with minimal effect sizes. Correlation analysis indicated that perceived AI expertise and frequency of AI utilization significantly influenced competence, especially in data literacy and problem-solving, but showed weaker correlations with ethical awareness. GenAIComp provides a reliable tool for assessing AI competence, helping educators, industry experts, and policymakers design AI training programs and integrate AI literacy into curricula, thereby advancing AI technology in society. Future research should explore its applicability across cultures and include performance-based assessments to enhance the evaluation of AI competence.
KW - AI competence
KW - Digital literacy
KW - GenAIComp
KW - Generative AI
UR - https://www.scopus.com/pages/publications/105016463418
U2 - 10.1016/j.techsoc.2025.103059
DO - 10.1016/j.techsoc.2025.103059
M3 - Article
AN - SCOPUS:105016463418
SN - 0160-791X
VL - 84
JO - Technology in Society
JF - Technology in Society
M1 - 103059
ER -