Human-Centered Framework for AI-Assisted Instructional Design
July 28, 2025
Introduction
The increasing presence of AI in education is forcing a necessary rethinking of instructional design. But as generative models become ubiquitous, a new challenge has emerged: how do we use these tools without losing sight of pedagogy, ethics, and human agency?
A recent publication by Li et al. (2025) introduces the ARCHED framework—AI for Responsible, Collaborative, Human-centered Education Design—which redefines AI-assisted instructional design by foregrounding human decision-making while leveraging the power of large language models (LLMs). Their model offers an antidote to the “automation-first” mentality dominating current edtech platforms, and it has significant implications for faculty, instructional designers, and policy makers alike.
Reclaiming Pedagogical Ground in the Age of AI
Many current AI-powered instructional tools promise efficiency—generating syllabi, rubrics, or even full modules from a few prompts. However, Li et al. (2025) argue that this "one-click convenience" comes at the cost of pedagogical coherence, transparency, and educator autonomy.
The ARCHED framework departs from this trend by embedding AI tools within a structured, three-phase workflow. Instead of offering complete instructional packages, ARCHED encourages iterative co-design between human and machine. Educators begin the process by specifying learning parameters via the Learning Objective Generation System (LOGS). These are then evaluated by a second AI component—the Objective Analysis Engine (OAE)—which provides feedback on Bloom’s taxonomy alignment and structural quality. Human instructors remain decision-makers at every stage.
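The shape of that loop can be sketched in code. Everything below is an illustrative stand-in, not the actual ARCHED implementation: the function names (`logs_generate`, `oae_evaluate`, `co_design`) and the toy logic inside them are hypothetical, and the real LOGS and OAE components are LLM-backed services. The point is the control flow: the AI drafts and analyzes, but a human callback decides whether any draft is accepted.

```python
from dataclasses import dataclass

@dataclass
class Feedback:
    bloom_level: str   # OAE's classification of the draft objective
    measurable: bool   # whether the objective is phrased measurably

def logs_generate(topic, bloom_level):
    # Stand-in for the LOGS component: draft an objective from
    # instructor-specified parameters. (The real system uses an LLM.)
    return f"Students will be able to {bloom_level.lower()} key concepts of {topic}."

def oae_evaluate(objective):
    # Stand-in for the OAE component: check the draft against
    # Bloom's taxonomy and basic structural quality.
    verbs = {"remember": "Remember", "analyze": "Analyze", "create": "Create"}
    level = next((lvl for v, lvl in verbs.items() if v in objective.lower()), "Unknown")
    return Feedback(bloom_level=level, measurable="will be able to" in objective)

def co_design(topic, bloom_level, instructor_accepts, max_rounds=3):
    # The instructor (here, a callback) remains the decision-maker:
    # drafts loop back for refinement until a human approves one.
    for _ in range(max_rounds):
        draft = logs_generate(topic, bloom_level)
        feedback = oae_evaluate(draft)
        if instructor_accepts(draft, feedback):
            return draft
    return None  # no draft approved; the instructor writes or revises by hand
```

Note that the loop can end with nothing accepted: unlike "one-click" tools, the system never publishes an objective the instructor has not signed off on.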
Why This Matters: Bloom’s, Transparency, and Assessment Diversity
Pedagogically, ARCHED stands out for its explicit integration of Bloom’s taxonomy, a foundational framework in instructional design. Unlike generic AI writing tools, ARCHED ensures learning objectives are developmentally appropriate and measurable.
Empirical evaluation by Li et al. (2025) shows the tool achieves strong agreement with expert human classification of learning objectives (κw = 0.834), particularly at the "Create" and "Remember" levels. ARCHED also produces learning objectives that score comparably to those written by instructional design experts on clarity, structure, and measurability—effectively matching human quality without sacrificing educational alignment.
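The κw statistic here is Cohen's kappa with ordinal weights, which, unlike plain accuracy, gives partial credit when the model and the expert land on adjacent Bloom levels and discounts agreement expected by chance. As a concrete illustration (assuming quadratic weights and levels coded 0 through 5 from "Remember" to "Create"; the paper's exact weighting scheme is not restated here), a self-contained implementation looks like this:

```python
def quadratic_weighted_kappa(rater_a, rater_b, num_levels):
    """Cohen's kappa with quadratic weights over ordinal labels 0..num_levels-1.

    Assumes both raters use at least two distinct categories, so the
    chance-disagreement denominator is nonzero.
    """
    n = len(rater_a)
    # Observed joint distribution of the two raters' labels.
    observed = [[0.0] * num_levels for _ in range(num_levels)]
    for i, j in zip(rater_a, rater_b):
        observed[i][j] += 1 / n
    # Marginal distributions; their outer product is the chance-agreement model.
    marg_a = [sum(row) for row in observed]
    marg_b = [sum(observed[r][c] for r in range(num_levels)) for c in range(num_levels)]
    # Quadratic penalty: disagreeing by two levels costs four times one level.
    w = [[(i - j) ** 2 / (num_levels - 1) ** 2 for j in range(num_levels)]
         for i in range(num_levels)]
    cells = [(i, j) for i in range(num_levels) for j in range(num_levels)]
    observed_disagreement = sum(w[i][j] * observed[i][j] for i, j in cells)
    chance_disagreement = sum(w[i][j] * marg_a[i] * marg_b[j] for i, j in cells)
    return 1.0 - observed_disagreement / chance_disagreement
```

A value of 1.0 means perfect agreement, 0.0 means agreement no better than chance, and 0.834 indicates strong alignment between the model's and the experts' Bloom-level classifications.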
Most importantly, ARCHED promotes diversity in assessment. By decoupling learning objective creation from standardized test formats, the system empowers educators to craft authentic, discipline-specific assessments instead of relying on AI defaults like multiple-choice questions (Cheng et al., 2024).
A Model for Ethical, Inclusive AI Use in Teaching
At a time when universities are grappling with academic integrity and the role of generative AI, ARCHED provides a transparent, ethically grounded alternative. It reframes AI not as a replacement for educators, but as a reflective design partner—what one might call a “pedagogical mirror” rather than a “black box.”
This is more than a philosophical statement. ARCHED includes features such as:
Detailed analysis reports on learning objectives
Downloadable documentation for peer or administrative review
Instructor controls for iterative refinement
Support for uploading existing objectives for evaluation
The tool is already available in preview at https://logen.viablelab.org, and future iterations promise integration with LMS platforms.
Centering Educators in the Future of EdTech
AI is not going away, nor should it. But how we design with it matters. The ARCHED framework offers higher ed faculty a research-backed, educator-first model for integrating AI in meaningful, responsible, and pedagogically sound ways.
As we continue exploring AI's potential in instructional design, frameworks like ARCHED remind us that the ultimate goal isn’t automation—it’s better learning.
References
Cheng, Z., Xu, J., & Jin, H. (2024). TreeQuestion: Assessing conceptual learning outcomes with LLM-generated multiple-choice questions. Proc. ACM Hum.-Comput. Interact., 8(CSCW2), 431:1–431:29. https://doi.org/10.1145/3686970
Li, H., Fang, Y., Zhang, S., Lee, S. M., Wang, Y., Trexler, M., & Botelho, A. F. (2025). ARCHED: A Human-Centered Framework for Transparent, Responsible, and Collaborative AI-Assisted Instructional Design. Proceedings of Machine Learning Research, 273, 1–11. https://arxiv.org/abs/2503.08931

