Learning Analytics and Responsible Artificial Intelligence for Data-Driven Pedagogical Decision-Making in Basic Education
Analítica de aprendizaje e inteligencia artificial responsable para la toma de decisiones pedagógicas basadas en datos en educación básica
Análise de Aprendizagem e Inteligência Artificial Responsável para a Tomada de Decisões Pedagógicas Orientadas por Dados na Educação Básica
1Augusto Paolo Bernal Parraga*, ORCID: https://orcid.org/0000-0003-0289-8427
1Universidad de las Fuerzas Armadas ESPE. Quito, Ecuador
*Autor para la correspondencia: abernal2009@gmail.com
Abstract
This study examines the role of learning analytics supported by responsible artificial intelligence in data-driven pedagogical decision-making in basic education, integrating empirical evidence and ethical considerations. A two-phase design was adopted, combining a systematic literature review with a quantitative explanatory empirical study. In the first phase, a systematic literature review conducted according to the PRISMA 2020 guidelines synthesized 85 peer-reviewed articles published between 2020 and 2025 in high-impact international databases. In the second phase, a cross-sectional, quantitative explanatory study was carried out in public basic education institutions, involving 312 students from the upper primary and lower secondary cycles and 60 teachers of language, mathematics, and science. The results show significant associations between learning analytics indicators, including academic risk and student interaction measures, and data-informed pedagogical decisions. Moreover, teachers' perceptions of responsible artificial intelligence, especially transparency, explainability, data stewardship, and human oversight, emerged as the strongest predictors of trust in, and effective use of, analytics-supported insights. The study contributes to the field by proposing and empirically validating an integrative framework that positions learning analytics as a decision support system grounded in responsible AI and teacher agency. These findings are relevant to educators and policymakers seeking to promote data-driven innovation that is sustainable, equitable, and responsible in basic education.
Keywords: learning analytics; responsible artificial intelligence; data-driven decision-making; basic education; educational equity; ethical AI
Resumen
Se estudia el papel de la analítica de aprendizaje apoyada por inteligencia artificial responsable en la toma de decisiones pedagógicas basadas en datos en educación básica, integrando evidencia empírica y consideraciones éticas. Se empleó un diseño en dos fases que integró una revisión sistemática de la literatura y un estudio empírico cuantitativo de tipo explicativo. En una primera fase se desarrolló una revisión sistemática de la literatura siguiendo la guía PRISMA 2020, analizando 85 estudios publicados entre 2020 y 2025 en bases de datos de alto impacto. En una segunda fase, se llevó a cabo un estudio empírico cuantitativo de tipo explicativo y transversal en instituciones públicas de educación básica. La fase empírica incluyó 312 estudiantes de educación primaria superior y secundaria inferior, así como 60 docentes de las áreas de Lengua, Matemática y Ciencias. Los resultados evidencian asociaciones significativas entre indicadores de analítica de aprendizaje y la toma de decisiones pedagógicas informadas. Asimismo, las percepciones sobre inteligencia artificial responsable—especialmente transparencia, explicabilidad y supervisión humana—emergieron como los predictores más fuertes de la confianza docente en los sistemas analíticos. El estudio propone y valida empíricamente un marco conceptual integrador que articula analítica de aprendizaje, inteligencia artificial responsable y agencia docente, contribuyendo al desarrollo de enfoques éticos, inclusivos y basados en evidencia para la innovación educativa en educación básica.
Palabras clave: analítica de aprendizaje; inteligencia artificial responsable; toma de decisiones basada en datos; educación básica; equidad educativa; IA ética
Resumo
Este estudo examina o papel da análise de aprendizagem apoiada por inteligência artificial responsável na tomada de decisões pedagógicas baseadas em dados na educação básica, integrando evidências empíricas e considerações éticas. Foi empregado um delineamento em duas fases, combinando uma revisão sistemática da literatura e um estudo empírico quantitativo explicativo. A primeira fase envolveu uma revisão sistemática da literatura seguindo as diretrizes PRISMA 2020, analisando 85 estudos publicados entre 2020 e 2025 em bases de dados de alto impacto. A segunda fase consistiu em um estudo empírico quantitativo explicativo transversal conduzido em instituições públicas de educação básica. Esta fase empírica incluiu 312 alunos do ensino fundamental II e do ensino médio I, bem como 60 professores dos departamentos de Língua Portuguesa, Matemática e Ciências. Os resultados demonstram associações significativas entre indicadores de análise de aprendizagem e a tomada de decisões pedagógicas informadas. Além disso, as percepções de inteligência artificial responsável — especialmente transparência, explicabilidade e supervisão humana — emergiram como os preditores mais fortes da confiança dos professores em sistemas analíticos. Este estudo propõe e valida empiricamente uma estrutura conceitual integrativa que articula análise de aprendizagem, inteligência artificial responsável e protagonismo docente, contribuindo para o desenvolvimento de abordagens éticas, inclusivas e baseadas em evidências para a inovação educacional na educação básica.
Palavras-chave: análise de aprendizagem; inteligência artificial responsável; tomada de decisão baseada em dados; educação básica; equidade educacional; IA ética
Introduction
The rapid worldwide adoption of digital innovations in education has transformed teaching and learning. Over the last decade, the role of Information and Communication Technology (ICT) in teaching and learning has shifted from that of mere teaching aids to fundamental elements of instructional design, learning assessment, and educational management. Recent literature shows that artificial intelligence (AI) influences teaching methodologies, educators' reasoning, and school management frameworks, and calls on educators to embrace new, data-driven forms of pedagogy (Velander et al., 2024; Bergdahl & Sjöberg, 2025). Despite this positive transformation, structural issues persist worldwide, including educators insufficiently trained in data use and unequal institutional capacity to harness digital innovations for educational improvement (Mandinach & Schildkamp, 2021).
The development of Learning Analytics (LA), a discipline at the intersection of educational data, AI, and the learning sciences, now makes it possible to leverage students' interactions with digital environments. In basic education, learning analytics has been used to support the early identification of academic risk, the personalization of instruction, formative assessment, and sharper feedback. Evidence from primary and secondary education suggests that, when properly designed and implemented, analytics-based tools, and in particular teacher-facing dashboards, enhance teachers' awareness and support instructional decision-making (Tirado-Olivares et al., 2024; Possaghi et al., 2025). Furthermore, design-based research shows that collaboratively created learning analytics systems enable teachers to appropriate data and integrate it into classroom practice (Mohseni & Masiello, 2025; Cheng et al., 2024).
Nevertheless, AI systems are still being introduced into educational environments without adequate safeguards, which is concerning from ethical, pedagogical, and social standpoints, especially in basic education, where the consequences of educational decision-making directly affect minors. Research on K-12 systems remains scarce, yet the inappropriate or unregulated use of learning analytics carries risks of data misuse, opacity, and a lack of actionable transparency (Beerwinkle, 2021). These concerns have shifted attention toward Responsible AI (RAI), which emphasizes ethical frameworks of fairness, explainability, privacy, autonomy, data governance, and the human in the loop (Nguyen, 2023; Khosravi et al., 2022). Research on teacher attitudes toward AI in primary and secondary education reiterates that trust in analytics systems is largely contingent on transparency and the retention of professional autonomy (Velander et al., 2022).
While learning analytics and AI in education have been studied extensively at the international level, studies focusing on pedagogical decision-making in basic education remain scarce. Most studies concentrate on either technical frameworks or predictive accuracy, without considering how teachers use analytics to make instructional adjustments. Empirical studies point to a limited understanding of the mindsets surrounding analytics-based decision-making, and to capacity gaps that continue to impede the adoption of educational analytics in practical school settings (Mandinach & Schildkamp, 2021; Schildkamp et al., 2020). Moreover, recent qualitative studies indicate that many analytics-driven frameworks under-recognize teachers' agency and contextual knowledge, with significant pedagogical consequences (Valtonen et al., 2025; Possaghi et al., 2025). Challenges related to teachers' data literacy and their ability to make sense of educational data have been documented extensively across school contexts (Beck & Nunnaley, 2021; Kippers et al., 2018).
These challenges are exacerbated in developing and underserved educational contexts, where the use of digital and AI tools is often fragmented and poorly integrated into systematic planning. Teachers frequently have limited data literacy and even fewer professional learning opportunities at the institutional level (Mandinach & Schildkamp, 2021; Beerwinkle, 2021). Without ethical data practices, the transformative potential of AI-supported learning analytics in education will be lost.
This study addresses these issues by examining how learning analytics and responsible AI support data-driven pedagogical decision-making in basic education. It comprises a systematic literature review and an empirical study, and aims to deepen understanding of how analytics grounded in ethical use and teacher autonomy can improve educational practices, enable the timely identification of and intervention in student learning challenges, and advance equitable education. The findings are addressed to educators, policymakers, and researchers seeking ethical, sustainable, and innovative basic education.
Methods
2.1 Research Design
This study employed a two-phase research design combining a systematic literature review (SLR) with a quantitative explanatory empirical study. The SLR was conducted following the PRISMA 2020 guidelines (Page et al., 2021) to identify trends, gaps, and conceptual foundations pertaining to learning analytics and responsible artificial intelligence in basic education. The findings of the SLR guided the construction of the conceptual framework and the research questions.
A cross-sectional quantitative empirical study was then conducted to assess the relationships between learning analytics indicators, teachers' perceptions of responsible AI, and data-driven pedagogical decision-making in real school environments. Although the design integrated two phases, both relied exclusively on quantitative evidence; strictly speaking, therefore, the study does not employ a mixed-methods design.
2.2 Procedures in Conducting a Systematic Literature Review
2.2.1 Data Sources and Strategies for Literature Search
The literature search was performed across five international scientific databases: the Web of Science Core Collection, Scopus, ERIC, ScienceDirect, and IEEE Xplore. These databases were selected for their relevance and comprehensive coverage of high-impact research in education, learning analytics, and artificial intelligence.
A structured search strategy using Boolean operators was applied consistently across the databases to keep the search concepts uniform. The search was delimited to the years 2020 to 2025 and combined the terms learning analytics, artificial intelligence, pedagogical decision-making, and primary education (K-12).
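The exact per-database query strings are not reported; an illustrative Boolean string consistent with the terms above might read:

```
("learning analytics") AND ("artificial intelligence" OR "AI")
AND ("pedagogical decision-making" OR "data-driven decision making")
AND ("basic education" OR "primary education" OR "K-12")
```

In practice, such a string would be adapted to each database's field syntax (e.g., title/abstract/keyword fields) before execution.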
2.2.2 Eligibility Criteria and Study Selection
Inclusion criteria comprised peer-reviewed journal articles published between 2020 and 2025, with a thematic focus on basic education and addressing learning analytics and/or artificial intelligence in connection with pedagogical decision-making. Empirical and theoretical studies written in English or Spanish were included.
Studies were excluded if they were grey literature or conference proceedings, focused exclusively on higher education, were system-development papers with no pedagogical component, or lacked full-text access.
Study selection followed the four PRISMA stages: identification, screening, eligibility, and inclusion. Screening and eligibility assessment were conducted by two independent reviewers, with discrepancies resolved through discussion to bolster inter-rater reliability.
2.2.3. Data Extraction, Quality Assessment, and Synthesis
A data extraction template was used to systematically record each study's educational context, research purpose, learning analytics application, AI techniques used, reported outcomes, and ethical considerations.
Methodological quality was assessed against five criteria: research design, appropriateness of the analytical framework, model transparency (explainability), ethical treatment, and credibility of findings. Only studies rated as high or medium quality were included in the synthesis.
Given the methodological heterogeneity of the included studies, a narrative synthesis was used. Trends such as publication year, geographic distribution, and the AI techniques employed were analyzed and summarized quantitatively.
2.3 Empirical Study
2.3.1 Research Context and Participants
The empirical study was conducted in the 2024-2025 academic year in public basic education institutions within a Latin American educational system. This context was chosen to address the gap identified in the SLR, namely the underrepresentation of developing regions in learning analytics research.
The sample consisted of 312 students from upper primary and lower secondary education (grades 5-9) and 60 teachers of language, mathematics, and science. Sampling was purposive and non-probabilistic, determined by prior institutional consent and the availability of digital learning platforms.
2.3.2 Tools of Research and Data Collection
The study drew on three complementary data sources:
Learning analytics data extracted from the schools' learning management systems, covering time-on-task, assessment scores, task access frequency, and attendance.
A pedagogical decision-making questionnaire measuring teachers' use of data-informed decisions, perceived usefulness of analytics, and trust in AI-supported insights (Cronbach's α = 0.87).
A responsible AI perception scale covering transparency, fairness, data privacy, and human oversight (Cronbach's α = 0.84).
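The internal-consistency coefficients above can be reproduced with the standard Cronbach's alpha formula. The sketch below is illustrative only: the response matrix is hypothetical, not the study's questionnaire data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 6 teachers x 4 scale items
responses = np.array([
    [4, 5, 4, 4],
    [3, 4, 3, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 4, 4],
    [3, 3, 3, 2],
])
alpha = cronbach_alpha(responses)
```

With real questionnaire data, the same function would be applied to each scale's item matrix separately to obtain values such as the reported α = 0.87 and α = 0.84.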
2.3.3 Data Analysis Procedures
In compliance with responsible AI principles, learning analytics data were preprocessed and analyzed using clear and interpretable analytic methods. The analysis relied on descriptive statistics, Pearson correlation coefficients and multiple linear regression to identify predictors of data-informed pedagogical decision-making. The significance level was set at p < .05.
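As an illustration of this analysis pipeline, the following sketch runs a Pearson correlation and an ordinary least squares regression on simulated data. The variable names and coefficients merely echo those reported in the Results tables; none of this is the study's actual data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 312  # matches the student-level sample size reported in the study

# Simulated, roughly standardized predictors (illustrative only)
time_on_task = rng.normal(size=n)
lms_access = 0.5 * time_on_task + rng.normal(scale=0.8, size=n)
risk_index = -0.5 * time_on_task + rng.normal(scale=0.8, size=n)
rai_perception = rng.normal(size=n)

# Outcome built with coefficients echoing the reported betas (illustration only)
y = (0.34 * time_on_task + 0.29 * lms_access
     - 0.26 * risk_index + 0.41 * rai_perception
     + rng.normal(scale=0.7, size=n))

# Pearson correlation between one predictor and the outcome
r_tot = np.corrcoef(time_on_task, y)[0, 1]

# Multiple linear regression via least squares, with an intercept column
X = np.column_stack([np.ones(n), time_on_task, lms_access,
                     risk_index, rai_perception])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# R-squared of the fitted model
resid = y - X @ beta
r2 = 1 - resid @ resid / ((y - y.mean()) ** 2).sum()
```

Interpretable linear models of this kind keep every coefficient inspectable, which is consistent with the responsible AI emphasis on transparency stated above.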
2.3.4 Ethical Considerations
Ethical approval was obtained from the appropriate educational authorities, and informed consent was obtained from teachers, parents, and school administrators. Student data were anonymized and handled in accordance with data protection legislation.
Results
Results of the Empirical Study
Descriptive statistics
Learning analytics indicators are reported separately for the student (N = 312) and teacher (N = 60) levels, as each constitutes a distinct unit of analysis. Table 1 summarizes descriptive statistics for the main learning analytics indicators and the pedagogical decision-making variables.
Table 1. Summary of the indicators of learning analytics and variables of pedagogical decision making.
Student-level variables (N = 312)

| Variable | Mean | SD | Min | Max |
| --- | --- | --- | --- | --- |
| Time-on-task (minutes/week) | 184.6 | 42.3 | 96 | 268 |
| LMS access frequency (weekly) | 14.2 | 4.1 | 6 | 23 |
| Formative assessment score | 7.84 | 1.12 | 4.9 | 9.8 |
| Academic risk index | 0.37 | 0.18 | 0.05 | 0.82 |

Teacher-level variables (N = 60)

| Variable | Mean | SD | Min | Max |
| --- | --- | --- | --- | --- |
| Data-informed pedagogical decisions | 3.91 | 0.62 | 2.4 | 4.8 |
| Trust in AI-supported insights | 3.76 | 0.58 | 2.6 | 4.7 |
| Responsible AI perception | 3.68 | 0.64 | 2.3 | 4.6 |
Teacher-level variables rely on self-reported questionnaires completed by the participating teachers (N = 60); student-level data stem from learning management system records (N = 312).
The findings indicate substantial student engagement with digital learning environments and favorable teacher views of the value and reliability of AI-enhanced learning analytics.
Correlation Analysis
Pearson correlation coefficients were calculated to examine the relationships between learning analytics indicators and the facets of pedagogical decision-making.
Table 2. Correlation matrix between learning analytics and pedagogical decision-making
| Variables | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- |
| 1. Time-on-task | — | | | |
| 2. LMS access frequency | .48** | — | | |
| 3. Academic risk index | −.52** | −.41** | — | |
| 4. Data-informed pedagogical decisions | .46** | .39** | −.44** | — |

Note. ** p < .01
The analysis indicates that student engagement indicators are significantly associated with pedagogical decision-making. Higher time-on-task and LMS access frequency are positively associated with data-informed teaching decisions, while the academic risk index shows a moderate inverse correlation.
Regression Analysis
To analyze the learning analytics indicators and their impact on pedagogical decision-making, a multiple linear regression analysis was also performed.
The regressions linked aggregated student-level learning analytics indicators to teacher-level pedagogical decision-making scores, so the degrees of freedom reflect the student-level sample size (N = 312).
Table 3. Results from multiple regression analysis on data-informed pedagogical decision-making
| Predictor | B | SE | β | t | p |
| --- | --- | --- | --- | --- | --- |
| Time-on-task | 0.31 | 0.07 | .34 | 4.43 | < .001 |
| LMS access frequency | 0.24 | 0.06 | .29 | 3.98 | < .001 |
| Academic risk index | −0.28 | 0.08 | −.26 | −3.51 | .001 |
| Responsible AI perception | 0.36 | 0.05 | .41 | 6.72 | < .001 |
Model fit:
R² = 0.49, Adjusted R² = 0.46, F(4, 307) = 23.18, p < .001
Responsible AI perception, time-on-task, and LMS access frequency emerged as the strongest positive predictors of data-informed pedagogical decision-making, while the academic risk index was a significant negative predictor.
Responsible AI and Pedagogical Decision-Making
To investigate the role of ethics more closely, a further regression model was estimated, focused on the responsible AI dimensions.
Table 4 presents findings from a regression analysis relating specific dimensions of responsible AI to teacher trust in AI-supported insights.
| Predictor | β | t | p |
| --- | --- | --- | --- |
| Transparency & explainability | .38 | 5.84 | < .001 |
| Fairness & bias awareness | .29 | 4.12 | < .001 |
| Data privacy & security | .21 | 3.08 | .002 |
| Human oversight | .33 | 4.97 | < .001 |
R² = 0.52, p < .001
The ethical dimensions of responsible AI, especially transparency and human oversight, are critical to building teachers' trust in AI-supported learning analytics. This reinforces the need to integrate responsible AI principles into analytics systems.
Summary of Principal Empirical Findings
The empirical evidence shows that: (a) student engagement indicators (time-on-task, LMS access frequency) positively predict data-informed pedagogical decisions; (b) the academic risk index is a significant negative predictor; and (c) teachers' perceptions of responsible AI, particularly transparency and human oversight, are the strongest predictors of trust in AI-supported insights.
Discussion
The principal goal of this study was to evaluate the impact of learning analytics, aligned with the framework of responsible artificial intelligence, on data-driven pedagogical decision-making in basic education. Together, the literature review and the empirical findings provide a coherent, evidence-based response to the gaps in the extant research on learning analytics and AI in school contexts. The empirical results confirm that teachers' data-driven pedagogical decisions are significantly influenced by learning analytics metrics of student engagement, notably time-on-task and the frequency of interaction with digital learning tools. This corroborates the existing body of research in primary and secondary education, which posits that learning analytics, when integrated within classroom practices, can enhance instructional awareness and facilitate targeted pedagogical adjustments (Tirado-Olivares et al., 2024; Possaghi et al., 2025; Han et al., 2021).
Whereas most studies approach learning analytics from a predictive or monitoring viewpoint, the present study positions learning analytics as a tool that supports the exercise of pedagogical discretion. Design-based, school-centered research demonstrates that analytics systems are most effective when teachers are involved in shaping the analytics and in operationalizing the resulting insights (Mohseni & Masiello, 2025). This is in line with previous studies indicating that dashboards designed for students and teachers fail as 'one-size-fits-all' models and must instead be designed and interpreted with regard to their specific context (Teasley, 2017). The finding also aligns with recent systematic reviews that highlight the need to integrate learning analytics with pedagogical design in ways that foreground authentic, teacher-centered decision-making in schools (Villa-Torrano et al., 2025). The current findings further corroborate the literature showing that learning analytics assist pedagogical decision-making only when analytics outputs are deemed relevant, clear, and actionable by educators (Susnjak et al., 2022). Beyond this, the study contributes to the literature by showing that the most robust predictor of teachers' trust in analytics-derived insights is their perception of responsible artificial intelligence. This finding is congruent with literature that places the ethical use of AI as the primary consideration for meaningful technology use in K-12 education (Nguyen et al., 2023).
Research on school systems shows that teachers' confidence in analytic systems, and their subsequent pedagogical use, can be undermined by opacity, barriers to explainability, and inadequate protection of sensitive data (Beerwinkle, 2021). The current study contributes to this area by quantitatively evidencing the extent to which transparency, fairness, data protection, and human oversight affect teachers' trust in AI-supported recommendations. This supports the notion that responsible AI can act as an enabler of data-driven pedagogy rather than a constraining framework.
These findings substantiate the human-in-the-loop model for AI-assisted learning analytics in basic education. Teachers' trust in analytics outputs depended on the presence of human judgment alongside AI, underscoring that AI should support rather than replace critical professional input in teaching. This mirrors qualitative K-12 research suggesting that teachers value analytics systems that complement, rather than usurp, their authority (Valtonen et al., 2025). The finding that teachers use AI analytics as decision aids supports the case for co-designed, teacher-centered analytics (Mohseni & Masiello, 2025; Possaghi et al., 2025). This is especially true in primary education, where teachers must weigh children's cognitive, socioemotional, and contextual circumstances in every teaching decision. The study thus argues for keeping the teacher, rather than the AI, at the center of educational practice, against techno-centric positions.
The negative association between the academic risk index and pedagogical decision-making suggests that learning analytics can help teachers anticipate learning challenges. This is in line with recent analytics research showing that learning analytics dashboards can assist primary and secondary teachers in making timely teaching decisions for students who need additional attention (Tirado-Olivares et al., 2024).
Nonetheless, the systematic review showed that the literature still gives insufficient attention to fairness, bias, and related ethical concerns. This is especially the case in primary education, where vulnerable learners may be adversely affected by decisions made through analytics. Some studies that have combined learning analytics with Universal Design for Learning, for example, have shown that ethically oriented learning analytics can promote equitable educational practices (Roski et al., 2024). The current study advances this line of research by demonstrating that perceptions of fairness and bias awareness significantly shape the trust educators place in an analytic system.
The proposed conceptual framework is the first of its kind to combine learning analytics, responsible artificial intelligence, and pedagogical decision-making. Previous research has addressed either the application of analytics or ethical principles separately, but recent K-12 studies have called for integrated structures in which data, pedagogy, and ethics are interwoven (Nguyen et al., 2023; Bergdahl & Sjöberg, 2025).
The study advances existing models and provides a scalable framework for the responsible implementation of learning analytics in schools by validating this framework in a real basic education context. The framework directly responds to issues from the systematic review focused on the lack of empirically validated frameworks incorporating ethical governance and teacher agency.
Despite the robustness of the findings, some limitations must be acknowledged. Because the study used a cross-sectional design, future research should adopt longitudinal designs to assess the enduring effects of responsible AI-supported decision-making on teacher practices and student outcomes. Moreover, although the study addresses an underrepresented educational context, research across a wider variety of educational systems would be beneficial. Future research should also examine students' views of learning analytics, the governance frameworks for responsible AI in educational institutions, and the intersection of analytics-related decision-making with inclusive pedagogy in diverse educational contexts.
Conclusions
This research documents the positive role of learning analytics, coupled with responsible artificial intelligence, in strengthening data-driven pedagogical decision-making in basic education. By combining empirical validation with a systematic literature review and a conceptual framework, the study moves beyond a primarily technical and descriptive understanding of educational AI. Data-driven pedagogical decisions were found to vary with learning analytics indicators of students' academic challenges and engagement. More importantly, the study demonstrates that the ethical dimensions of responsible AI, especially transparency, explainability, and human-in-the-loop components, foster teacher trust in, and effective use of, AI analytic insights. This underscores that, absent an ethical design framework and responsible governance, the educational impact of these tools will be minimal.
The study also substantiates the human-in-the-loop paradigm, identifying teachers as primary stakeholders in AI-aided decision-making. The findings suggest that learning analytics systems, in contrast to systems that dispense with professional judgment, are most effective as decision support systems that enhance teachers' ability to address the diverse needs of individual learners. This is especially pertinent in primary education, where educators' decisions have far-reaching consequences in the cognitive, social, and emotional domains of learners. By proposing the integrated framework described above, the study addresses, both empirically and theoretically, the long-standing gap in the literature that treats learning analytics, responsible AI, and pedagogical decision-making in isolation. It also extends high-impact learning analytics research into literature-scarce settings by providing evidence from basic education in the developing world.
The findings have significant implications for both practice and policy. For educators, the results underline the importance of acquiring data literacy and ethical AI competencies in order to engage meaningfully with learning analytics insights. For school leaders and policymakers, they point to the need to move beyond technology adoption toward systemic frameworks that make responsible AI a guiding principle of educational policy. AI-in-education policies should therefore provide ethical governance frameworks, teacher professional development, and policy instruments that close educational gaps so that AI and learning analytics innovations leave no one behind.
This study contributes to the existing body of knowledge while opening several avenues for future research. First, longitudinal studies of the sustained impact of responsible AI-supported decision-making on learning achievement, teacher behavior, and organizational change are needed. Second, future studies should involve students directly, particularly their perceptions of fairness, agency, and trust in AI-driven educational environments. In addition, cross-cultural and comparative research would clarify how contextual variables shape the adoption and impact of learning analytics. Research on inclusive education, particularly for children with special educational needs, is another important direction aligned with the global equity agenda.
Finally, future work could examine the integration of learning analytics with emerging pedagogical models, including Universal Design for Learning, and the intersection of data-driven analytics with human-centered approaches to educational technology. This study reiterates that educational innovation does not rest on the uncritical use of artificial intelligence; rather, it lies in the transparent, responsible, and pedagogically informed use of learning analytics to support human decision-making. By offering both a framework and empirical evidence, the study builds a strong case for advancing ethical, equitable, and data-informed education in the digital age.
References
Beck, J. S., & Nunnaley, D. (2021). A continuum of data literacy for teaching. Studies in Educational Evaluation, 69, 100871. https://doi.org/10.1016/j.stueduc.2020.100871
Beerwinkle, A. L. (2021). The use of learning analytics and the potential risk of harm for K-12 students participating in digital learning environments. Educational Technology Research and Development, 69, 327–330. https://doi.org/10.1007/s11423-020-09854-6
Bergdahl, N., & Sjöberg, J. (2025). Transformation, support needs and AI, in K-12 education. Education and Information Technologies. https://doi.org/10.1007/s10639-025-13762-8
Cheng, N., Zhao, W., Xu, X., Liu, H., & Tao, J. (2024). The influence of learning analytics dashboard information design on cognitive load and performance. Education and Information Technologies, 29, 19729–19752. https://doi.org/10.1007/s10639-024-12606-1
Han, J., Kim, K. H., Rhee, W., & Cho, Y. H. (2021). Learning analytics dashboards for adaptive support in face-to-face collaborative argumentation. Computers & Education, 163, 104041. https://doi.org/10.1016/j.compedu.2020.104041
Khosravi, H., Buckingham Shum, S., Chen, G., Conati, C., Tsai, Y.-S., Kay, J., Knight, S., Martinez-Maldonado, R., Sadiq, S., & Gašević, D. (2022). Explainable Artificial Intelligence in education. Computers and Education: Artificial Intelligence, 3, 100074. https://doi.org/10.1016/j.caeai.2022.100074
Kippers, W. B., Poortman, C. L., Schildkamp, K., & Visscher, A. J. (2018). Data literacy: What do educators learn and struggle with during a data use intervention? Studies in Educational Evaluation, 56, 21–31. https://doi.org/10.1016/j.stueduc.2017.11.001
Mandinach, E. B., & Schildkamp, K. (2021). Misconceptions about data-based decision making in education: An exploration of the literature. Studies in Educational Evaluation, 69, 100842. https://doi.org/10.1016/j.stueduc.2020.100842
Mohseni, Z. A., & Masiello, I. (2025). Co-designing, developing, and implementing multiple learning analytics dashboards for data-driven decision-making in education: A design-based research approach. Educational Technology Research and Development. https://doi.org/10.1007/s11423-025-10577-9
Nguyen, A., Gardner, L., & Sheridan, D. (2023). Ethical principles for artificial intelligence in education. Education and Information Technologies, 28, 4221–4241. https://doi.org/10.1007/s10639-022-11316-w
Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71. https://doi.org/10.1136/bmj.n71
Possaghi, I., Vesin, B., Zhang, F., Sharma, K., Knudsen, C., Bjørkum, H., & Papavlasopoulou, S. (2025). Integrating multi-modal learning analytics dashboard in K-12 education: Insights for enhancing orchestration and teacher decision-making. Smart Learning Environments, 12(1), 1–34. https://doi.org/10.1186/s40561-025-00410-4
Roski, M., Sebastian, R., Ewerth, R., Hoppe, A., & Nehring, A. (2024). Learning analytics and the Universal Design for Learning (UDL): A clustering approach. Computers & Education, 210, 105028. https://doi.org/10.1016/j.compedu.2024.105028
Schildkamp, K., Poortman, C. L., Luyten, H., & Ebbeler, J. (2020). Factors promoting and hindering data-based decision making in schools. Educational Research Review, 31, 100339. https://doi.org/10.1016/j.edurev.2020.100339
Susnjak, T., Ramaswami, G. S., & Mathrani, A. (2022). Learning analytics dashboard: A tool for providing actionable insights to learners. International Journal of Educational Technology in Higher Education, 19(1). https://doi.org/10.1186/s41239-021-00313-7
Teasley, S. D. (2017). Student facing dashboards: One size fits all? Technology, Knowledge and Learning, 22(3), 377–384. https://doi.org/10.1007/s10758-017-9314-3
Tirado-Olivares, S., Moreno-Guerrero, A.-J., García-Martínez, I., & López-Belmonte, J. (2024). Learning analytics in primary education: A systematic review. Education and Information Technologies, 29, 3393–3424. https://doi.org/10.1007/s10639-023-12157-z
Valtonen, T., Lehtinen, A., Pöntinen, S., Huhta, A.-M., & Tondeur, J. (2025). Elementary and secondary school teachers’ perceptions of learning analytics: A qualitative approach. Technology, Knowledge and Learning. https://doi.org/10.1007/s10758-025-09847-5
Velander, J., Taiye, M. A., Otero, N., et al. (2024). Artificial intelligence in K-12 education: Eliciting and reflecting on Swedish teachers’ understanding of AI and its implications for teaching & learning. Education and Information Technologies, 29, 4085–4105. https://doi.org/10.1007/s10639-023-11990-4
Villa-Torrano, C., Suraworachet, W., Gómez-Sánchez, E., Asensio-Pérez, J. I., Bote-Lorenzo, M. L., Martínez-Monés, A., Zhou, Q., Cukurova, M., & Dimitriadis, Y. (2025). Using learning design and learning analytics to promote, detect and support socially-shared regulation of learning: A systematic literature review. Computers & Education, 232, 105261. https://doi.org/10.1016/j.compedu.2025.105261
Conflict of interest
The author declares that there is no conflict of interest.