Context & Conditions
This artifact was created during Spring 2026 in EDET 793: Advanced Instructional Design and Development at the University of South Carolina. The assignment required my team to plan and conduct a comprehensive formative evaluation of an eLearning module using both expert review and small-group learner testing. The evaluation focused on an ASSURE model-based instructional module designed for graduate-level instructional design students.
​
At this stage in the program, I brought significantly more advanced instructional design knowledge than I had for earlier artifacts, particularly in evaluation, assessment, and data-driven revision. This project required the application of multiple evaluation frameworks, including the ADDIE model (Molenda, 2015), the Morrison, Ross, and Kemp (MRK) model (Morrison et al., 2019), and Kirkpatrick's evaluation model (Kirkpatrick & Kirkpatrick, 2016). Specifically, we conducted a Level 1 (Reaction) and Level 2 (Learning) evaluation, collecting both qualitative and quantitative data through SME review, pre/post assessments, and learner attitude surveys. Tools included Google Forms for data collection and analysis, and Adobe Captivate as the delivery platform for the instructional module. This artifact reflects my ability to apply formal evaluation frameworks to systematically assess and improve an instructional product.
​
Scope
The purpose of this artifact was to evaluate the effectiveness, efficiency, and appeal of an eLearning module and generate actionable recommendations for improvement prior to full implementation. This project was completed as part of a collaborative graduate assignment but reflects authentic instructional design practice, as formative evaluation is a critical step in refining instructional products.
​
Within the broader instructional design process, this artifact represents the evaluation phase of a full design cycle, following prior work in analysis, design, and development. The evaluation examined whether learners demonstrated measurable gains in understanding, whether the module was appropriately paced, and whether it was engaging and accessible. For example, results showed a significant increase in learning, with mean scores improving from 67.11% on the pre-test to 93.42% on the post-test, indicating strong instructional effectiveness. At the same time, findings identified areas for improvement, such as limited engagement, over-reliance on text, and accessibility concerns.
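To illustrate the kind of analysis behind these numbers, the minimal sketch below shows a pre/post comparison in Python. The per-learner scores are hypothetical placeholders, not our study data; only the group means reported above (67.11% and 93.42%) come from the actual evaluation. A paired t-test is one common way to check whether a gain of this kind is statistically significant.

```python
# Illustrative sketch of a pre/post learning-gain analysis.
# NOTE: the score lists below are hypothetical placeholders, not the
# study data; only the group means reported in the text (67.11% pre,
# 93.42% post) come from the real evaluation.
from statistics import mean
from scipy import stats

pre_scores  = [60, 65, 70, 72, 68, 63, 71, 66]   # hypothetical pre-test %
post_scores = [88, 95, 92, 97, 90, 93, 96, 94]   # hypothetical post-test %

gain = mean(post_scores) - mean(pre_scores)
print(f"Mean pre:  {mean(pre_scores):.2f}%")
print(f"Mean post: {mean(post_scores):.2f}%")
print(f"Mean gain: {gain:.2f} percentage points")

# Paired t-test: the same learners are measured twice, so the
# observations are paired rather than independent.
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```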
​
Role
This artifact was developed as part of a collaborative instructional design team. My role included contributing to the design of evaluation instruments (e.g., SME Notes Form, attitude survey), supporting data analysis, and co-authoring sections of the final report, particularly those related to findings and recommendations.
​
I played a key role in interpreting both quantitative and qualitative data to identify patterns and translate them into actionable design improvements. For example, I contributed to analyzing pre- and post-test data to determine learning gains, as well as synthesizing learner feedback related to engagement, navigation, and accessibility. This role required collaboration, communication, and the ability to integrate multiple perspectives into a cohesive evaluation.
​
Instructional Design
This artifact strongly reflects the ADDIE model and the Morrison, Ross, and Kemp (MRK) model, with a primary focus on the Evaluation phase. From an ADDIE perspective, this work represents formative evaluation, where data are collected and analyzed to improve an instructional product before full implementation (Molenda, 2015). For instance, we measured learning gains through pre/post assessments, analyzed learner reactions through Likert-scale surveys, and examined usability through SME feedback. These activities demonstrate systematic evaluation of instructional effectiveness, efficiency, and appeal.
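As a concrete illustration of the Level 1 (Reaction) analysis, the sketch below tallies one Likert-scale item the way responses exported from Google Forms might be summarized. The item wording and the responses are hypothetical, not data from our actual survey.

```python
# Illustrative summary of one Likert-scale reaction item (Kirkpatrick Level 1).
# The item and responses below are hypothetical, not the actual survey data.
from collections import Counter

SCALE = {1: "Strongly disagree", 2: "Disagree", 3: "Neutral",
         4: "Agree", 5: "Strongly agree"}

# e.g., "The module held my attention." -- hypothetical item and responses
responses = [4, 3, 5, 2, 4, 3, 3, 4, 2, 5]

counts = Counter(responses)
n = len(responses)
print("Response distribution:")
for value in sorted(SCALE):
    pct = 100 * counts.get(value, 0) / n
    print(f"  {value} ({SCALE[value]:<17}): {counts.get(value, 0):2d}  ({pct:.0f}%)")
print(f"Mean rating: {sum(responses) / n:.2f} / 5")
```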
​
Within the MRK model, this artifact reflects key components such as formative evaluation, instructional alignment, and revision (Morrison et al., 2019). The use of an SME review aligns with MRK’s emphasis on expert feedback (connoisseur-based evaluation), while the small group trial reflects learner-centered evaluation practices. Furthermore, this artifact demonstrates alignment between objectives, instruction, and assessment, as evidenced by measurable learning gains across objectives. The findings also highlight areas where alignment could be strengthened, such as increasing cognitive rigor and improving accessibility. This work is further informed by evaluation theory, particularly Kirkpatrick’s model, as it integrates both learner reaction and learning outcomes to provide a comprehensive understanding of instructional effectiveness.
​
Related Performance Indicators
AECT Standard 4: Professional Knowledge and Skills
- 4.1 Collaborative practice
- 4.3 Assessing and evaluating
Reflection
This artifact represents one of the strongest examples of my growth as an instructional designer in this program. Unlike earlier artifacts, which focused primarily on planning and design, this project required me to critically evaluate an instructional product using real data from both experts and learners. For example, analyzing the increase from 67.11% to 93.42% in assessment scores allowed me to see concrete evidence of instructional effectiveness, while qualitative feedback revealed deeper issues related to engagement and accessibility. This reflects AECT Standard 4, as I applied professional knowledge and skills to assess and improve instructional design.
​
At this point in my development, I would describe my skills as approaching an advanced level, particularly in evaluation and data analysis. I demonstrated the ability to design and implement multiple evaluation methods, interpret both quantitative and qualitative data, and translate findings into specific, actionable recommendations. However, I also recognize areas for continued growth. If I were to revise this project, I would incorporate additional data sources, such as usability testing with assistive technologies or longitudinal data to measure retention and transfer of learning. I would also strengthen the connection between evaluation findings and iterative redesign by explicitly mapping each recommendation to specific revisions in the module.
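One way to operationalize that last point is a simple traceability structure that pairs each evaluation finding with a specific planned revision. The sketch below is hypothetical: the findings paraphrase the report, while the revision entries are illustrative examples rather than the team's actual revision plan.

```python
# Hypothetical traceability map from evaluation findings to module revisions.
# The findings paraphrase the report; the revision details are illustrative.
revision_map = {
    "Limited engagement": "Add interactive knowledge checks after each ASSURE step.",
    "Over-reliance on text": "Replace dense text screens with narrated visuals and worked examples.",
    "Accessibility concerns": "Add captions and alt text; verify keyboard-only navigation.",
}

for finding, revision in revision_map.items():
    print(f"{finding:<24} -> {revision}")
```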
​
This artifact strongly aligns with AECT Standard 4 because it demonstrates my ability to engage in collaborative, data-driven evaluation practices that improve instructional quality. Compared to earlier work, this artifact shows a clear shift from designing instruction to critiquing and refining it, which is a critical skill for professional instructional designers. It reflects my ability to think systematically about instruction, use evidence to guide decisions, and continuously improve learning experiences for diverse learners.
References
Kirkpatrick, J. D., & Kirkpatrick, W. K. (2016). Kirkpatrick's four levels of training evaluation. ATD Press.
​
Molenda, M. (2015). In search of the elusive ADDIE model. Performance Improvement, 54(2), 40–42. https://doi.org/10.1002/pfi.21461
​
Morrison, G. R., Ross, S. M., & Kemp, J. E. (2019). Designing effective instruction (8th ed.). Wiley.
