AI Impact on Decision-making
Today, I would like to share a recent AI SoTL article, “The impact of artificial intelligence on task performance and decision-making: Empirical evidence on Generation Z” by Balcerzak, Zinecker, and Mičánek (2025) (https://doi.org/10.14254/1795-6889.2025.21-3.7).
The authors investigate a central tension in AI-enabled higher ed:
Does AI improve student performance at the expense of diagnostic validity?
Their controlled, three-stage experimental study with Generation Z business students provides rare empirical evidence quantifying this trade-off.
In a sequential design, students completed:
A diagnostic baseline assessment (no AI)
An independent reasoning assessment (no AI)
An AI-assisted assessment identical in content to the independent one
This design isolated AI’s impact on task performance, internal reliability, score variance, and perceived cognitive effort.
Findings
1. AI substantially increased observable performance. Average scores rose by 30.5 percentage points, from 56.4% in the independent reasoning condition to 86.9% in the AI-assisted condition. Under identical time constraints, students achieved higher accuracy, suggesting efficiency gains consistent with cognitive offloading and distributed cognition theories (Kirsh, 2013).
2. Score variance compressed and internal reliability collapsed. Cronbach’s alpha dropped from .87 in the non-AI condition to .31 in the AI condition. From a classical test theory perspective (Cronbach, 1951), this is a substantial loss of discriminatory power: AI amplified performance but degraded the assessment’s ability to differentiate independent reasoning ability. (A toy simulation of this mechanism appears after this list.)
3. Students perceived non-AI tasks as more educationally valuable. Eighty percent reported the independent condition as more challenging and cognitively effortful. Despite lower scores, students described deeper engagement, stronger ownership, and greater metacognitive awareness. AI reduced perceived cognitive load, aligning with Cognitive Load Theory (Paas & van Merriënboer, 2020), yet students reported a shift from reasoning to tool management.
4. Agency redistributed within human–AI assemblages. Rather than simply enhancing cognition, AI reconfigured it. Students selectively trusted AI for calculations but hesitated to rely on it for evaluative judgement tasks. This supports emerging work on automation bias and evaluative judgement (Bearman et al., 2024).
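To make the reliability collapse in finding 2 concrete, here is a minimal sketch of the mechanism, my own illustration rather than the authors’ code or data. It computes Cronbach’s alpha from its standard formula for two hypothetical 10-item assessments: one where correctness is driven by a shared ability factor, and one where nearly everyone scores near the ceiling. The function name, cohort size, item count, and all distributional parameters are assumptions chosen for illustration; NumPy is assumed.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's (1951) alpha for an (n_students, k_items) 0/1 score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item score variance
    total_var = items.sum(axis=1).var(ddof=1)  # variance of students' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
n_students, k_items = 200, 10  # hypothetical cohort and test length

# Independent condition (illustrative): correctness is driven by a shared
# ability factor, so items covary strongly and alpha is high.
ability = rng.normal(size=(n_students, 1))
independent = (ability + rng.normal(0, 0.7, size=(n_students, k_items)) > 0).astype(float)

# AI-assisted condition (illustrative): almost everyone clears each item,
# scores compress toward the ceiling, and the ability signal nearly vanishes.
assisted_latent = 0.3 * ability + rng.normal(size=(n_students, k_items))
assisted = (assisted_latent > -1.1).astype(float)  # roughly 85% correct per item

print(f"independent: mean={independent.mean():.2f}, alpha={cronbach_alpha(independent):.2f}")
print(f"AI-assisted: mean={assisted.mean():.2f}, alpha={cronbach_alpha(assisted):.2f}")
```

The point is the mechanism, not the exact numbers: when scores compress toward a ceiling, items stop covarying through individual ability, so alpha collapses even as mean performance rises, which is exactly the pattern the study reports.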
From a learning sciences perspective, this study operationalizes several theoretical frameworks:
Distributed cognition: Performance becomes a property of the human–AI system rather than the individual learner (Salomon et al., 1991).
Construct validity theory: When AI mediates task completion, score meaning changes (Kane, 2013).
Cognitive Load Theory: AI reduces execution load but may displace germane processing required for deep learning.
Self-regulated learning: Independent conditions elicited stronger metacognitive monitoring and strategic reasoning.
The authors argue that simply allowing AI within traditional assessment formats risks undermining validity. Instead, they recommend hybrid assessment designs that:
Separate independent reasoning from AI-augmented performance
Require justification and critique of AI outputs
Make distributed agency visible
Preserve evaluative judgement as a core learning outcome
Reference
Balcerzak, A. P., Zinecker, M., & Mičánek, J. (2025). The impact of artificial intelligence on task performance and decision-making: Empirical evidence on Generation Z. Human Technology, 21(3), 620–639. https://doi.org/10.14254/1795-6889.2025.21-3.7

