Evaluation is an essential element in the process of designing and implementing educational and training programs. Analyzing a program's successes and failures allows for improvement in the learning and performance of individuals as well as greater efficiency for the organization (Reiser & Dempsey, 2018, pp. 87, 91). Evaluation models should address both formative and summative evaluation (p. 87). With this in mind, what follows is a proposal for a new evaluation model, which borrows from Stufflebeam's influential CIPP model, Rossi's question-based Five-Domain Evaluation Model, and Levels 3 and 4 of Kirkpatrick's Training Evaluation Model (pp. 88, 92).
This model, the Comprehensive Evaluation Model, has three stages: Foundational Considerations, Procedural Concerns, and Outcomes Valuation. The three stages provide opportunities for both formative assessment, which evaluates the process of the program and implements improvements as needed, and summative assessment, which is concerned with evaluation in any area other than development (Reiser & Dempsey, 2018, p. 87).
The first stage of the Comprehensive Evaluation Model is Foundational Considerations, which begins with context evaluation, derived from CIPP and Rossi's Five-Domain Model; this is a needs assessment to determine whether the program is necessary (Reiser & Dempsey, 2018, p. 88). The second step, like CIPP's Input Evaluation, is concerned with whether the available resources and support are adequate to implement the program (p. 88). The third and final step considers the concept of the program, like Rossi's Theory Assessment, analyzing the potential success of the overall program concept in an effort to avoid theory failure (p. 88).
The Procedural Concerns stage of the new model deals primarily with formative evaluation, in which the development of the program and the process of implementation are examined for measures that might improve effectiveness (Reiser & Dempsey, 2018, p. 88).
Finally, the Outcomes Valuation stage is a summative evaluation focusing on the overall success of the product (i.e., the program). Considerations here include an implementation assessment, asking Rossi's question, "Was the program implemented properly, according to the program plan?" (Reiser & Dempsey, 2018, p. 88). Next comes an impact assessment, evaluating whether learner behavior changed as the program intended. This evaluation draws on Levels 3 and 4 of Kirkpatrick's Training Evaluation Model: evaluators must determine (1) whether learners can apply the learned concepts in the necessary arena (workplace or classroom) and (2) whether the program produced change that improved the performance of the organization (p. 92). All of these factors must then be weighed against an efficiency assessment, like Rossi's, to determine the return on investment or the cost-effectiveness of the program (pp. 88-89).
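To make the efficiency assessment more concrete, one commonly used formulation of return on investment (offered here only as an illustration, not as a calculation prescribed by Rossi's model or by Reiser & Dempsey) compares a program's measurable benefits to its costs:

\[
\text{ROI (\%)} = \frac{\text{program benefits} - \text{program costs}}{\text{program costs}} \times 100
\]

Under this formulation, a hypothetical program costing $10,000 that produced $14,000 in measurable benefits would show an ROI of 40%, giving evaluators one way to weigh the program's outcomes against its expense.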
Reiser, R. A., & Dempsey, J. V. (2018). Trends and issues in instructional design and technology. Boston: Pearson Education.
So, what do you think?