Re: Simple vs. Complex Learning Outcomes

A response to Chapter 6, “Psychological Foundations of Instructional Design,” in Trends and Issues in Instructional Design and Technology


Select two instructional goals that represent simple versus complex learning outcomes. How would the learning theories discussed in this chapter be employed to develop instruction to teach the goals you have selected?  How would the instruction differ in each case?  Would one or another theory be more applicable to one goal versus the other?  Why?


When designing instruction for students, it is important to begin with the end in mind. Setting instructional goals points to the path that the learners and teachers should follow. Examining those goals provides a window into the learning processes and theories that instructional designers are utilizing to elicit learning outcomes. In fact, the theories and the learning processes are the path to get learners to that end point. In the following paragraphs, two instructional goals for dental hygiene education, one simple and one complex, will be examined and the underlying principles of instruction and learning will be highlighted.

The Simple Instructional Goal: By the end of the first month of school, first-year dental hygiene students can label intraoral landmarks on a diagram and properly describe the normal anatomy found there.

One of the underlying principles at work in this simple instructional goal is Behavioral Learning Theory, in which knowledge exists outside of the learner and must be pursued (Driscoll, 2017, pp. 53-54). In this case the student must memorize a discrete set of intraoral landmarks and their locations in the mouth, along with the standard descriptors of healthy, normal anatomy. The memorization is a criterion-referenced activity, a matter of learning a defined set of terms and relating that information to a fixed standard (the intraoral cavity); multiple-choice and fill-in-the-blank quizzes test recall, and students’ answers are compared to the specified standard (Reiser, 2017, p. 14). Instructors, as the experts, assign readings with diagrams, give lectures with slides, administer quizzes to test learning, and finally allow students the opportunity to practice in the pre-clinical setting, providing feedback (formative evaluation) as necessary. Students’ correct answers are reinforced by the stimuli of high quiz scores and positive verbal feedback in the pre-clinical setting. Students repeat the same identification and labeling exercises, in written or verbal form, until they are well versed in this basic yet critical skill for the profession of dental hygiene (Driscoll, 2017, p. 54).

The influence of Cognitive Information Processing Theory on this instructional goal is readily apparent as well. Here the information exists outside of the learner and is the stimulus, or input, that triggers the internal processing required for learning to occur (Driscoll, 2017, p. 54). The activities first appeal to the learner through sensory memory as the input progresses visually from diagrams to slides to live patients. The information then moves to short-term memory and finally to long-term memory (p. 54). This is also where Schema Theory applies: schemas develop as learners grow in familiarity through repeated visual exposures to the material, until what was foreign becomes commonplace. Schemas are also at work as learners move from learning vocabulary to classifying tissues (e.g., soft vs. hard, keratinized vs. non-keratinized) and categorizing anatomical structures by their purposes (e.g., the different salivary glands, the assorted tissues that compose the periodontium, and the various types of papillae on the tongue) (p. 55). Returning to Cognitivism, the dental hygiene students receive feedback at multiple intervals during the learning process, allowing them to ascertain the correctness of their answers and to modify their performance if necessary (p. 54). Finally, information processing is facilitated by practice in a variety of settings (p. 54).

The Complex Instructional Goal: By the end of the first semester, first-year dental hygiene students will use information gathered in the initial oral exam and medical history to develop a dental hygiene treatment plan for a patient requiring quadrant scaling and root planing.

Once again, the instructional goal is sustained by practices associated with Cognitive theory. This goal draws on all previously learned information: learners must retrieve encoded knowledge about health and disease and then use critical thinking skills to develop a plan and schedule appropriate treatment (Driscoll, 2017, p. 54). The clinical environment demands the highest level of problem-solving and critical-thinking skills, which are higher-order cognitive skills (pp. 57, 62).

In addition, Constructivist influences are woven into this learning process: these are live patients, people with real health concerns and dental disease, so this is “authentic performance in a realistic setting” (Driscoll, 2017, p. 63). Students are practicing the profession of dental hygiene as novices under the “cognitive apprenticeship” of their professional dental hygienist instructors (pp. 62, 73). In the clinical environment, instructors move from their classroom position of “sage on the stage” to the more collaborative relationship of “guide on the side” (pps. 57, 61). This shift also demonstrates Situated Learning Theory: the dental hygiene student now performs the same tasks and skills as the experts in the subject matter, and learners “participate in the practices of the community” (p. 55). Creating a treatment plan for patients is authentic to the discipline of dental hygiene and allows learners to “reflect on what and how they are learning,” another aspect of Constructivism (pp. 57, 63). Assessment of this instructional goal is indeed complex because patient care in a clinical setting is unlikely to reveal a “uniform level of accomplishment among learners.” The subject of learners’ study and work is the patient, who cannot be standardized; every patient (and therefore every learning experience) differs in level of difficulty (pertaining to deposit removal), degree of disease, and complexity of management (pain, physical limitations, psychosocial factors). For the same reason, it is impossible to standardize the clinical learning experience from one student to another (p. 57). Obstacles such as these are considered in the planning of instruction, and the solution to this problem is a multiplicity of learning experiences.

Driscoll, M. P. (2017). Psychological foundations of instructional design. In Reiser & Dempsey (Eds.), Trends and Issues in Instructional Design and Technology (pp. 52-60). New York, NY: Pearson.

Reiser, R. A. (2017). A history of instructional design and technology. In Reiser & Dempsey (Eds.), Trends and Issues in Instructional Design and Technology (pp. 8-22). New York, NY: Pearson.

A Comprehensive Evaluation Model

Evaluation is an essential element in the process of designing and implementing educational and training programs. Analyzing a program’s successes and failures allows for improvement in the learning and performance of individuals as well as greater efficiency for the organization (Reiser & Dempsey, 2018, pp. 87, 91). Models should include both formative and summative evaluation (p. 87). With this in mind, what follows is a proposal for a new evaluation model, which borrows from Stufflebeam’s influential CIPP model, Rossi’s question-based Five-Domain Evaluation Model, and Levels 3 and 4 of Kirkpatrick’s Training Evaluation Model (pp. 88, 92).

This Comprehensive Evaluation Model has three stages: Foundational Considerations, Procedural Concerns, and Outcomes Valuation. The three stages provide opportunities for both formative assessment, which evaluates the process of the program and implements improvements as needed, and summative assessment, which is concerned with evaluation in any area other than development (Reiser & Dempsey, 2018, p. 87).

The first stage of the Comprehensive Evaluation Model, Foundational Considerations, begins with context evaluation, derived from CIPP and Rossi’s Five Domains; this is a needs assessment to determine whether the program is necessary (Reiser & Dempsey, 2018, p. 88). The second step, like CIPP’s Input Evaluation, asks whether the available resources and support are adequate to implement the program (p. 88). The third and final step considers the concept of the program, like Rossi’s Theory Assessment, analyzing the potential success of the overall program concept in an effort to avoid theory failure (p. 88).

The Procedural Concerns stage of this new model deals primarily with formative evaluation, in which the development of the program and the process of implementation are examined for measures that might improve effectiveness (Reiser & Dempsey, 2018, p. 88).

Finally, the Outcomes Valuation is a summative evaluation focusing on the overall success of the product (i.e., the program). Considerations here begin with an implementation assessment, asking Rossi’s question, “Was the program implemented properly, according to the program plan?” (Reiser & Dempsey, 2018, p. 88). Next comes an impact study, evaluating whether learner behavior changed as the program intended, using Levels 3 and 4 of Kirkpatrick’s Training Evaluation Model. Here, evaluators must determine (1) whether learners can apply learned concepts in the necessary arena (workplace or classroom) and (2) whether the program produced change that improved the performance of the organization (p. 92). All of these factors must be weighed against an efficiency assessment, like Rossi’s, to determine the return on investment or the cost effectiveness of the program (pp. 88-89).

Reiser, R. A., & Dempsey, J. V. (2018). Trends and issues in instructional design and technology. Boston: Pearson Education.

Re: ADDIE, SAM, and Pebble-in-the-Pond Models

A response to Chapter 4, “SAM and Pebble-in-the-Pond: Two Alternatives to the ADDIE Model,” in Trends and Issues in Instructional Design and Technology


Compare and contrast the ADDIE, SAM, and Pebble-in-the-Pond models. Discuss strengths and weaknesses of each model. You are encouraged to utilize texts as well as graphs to share your information.


Unfortunately, some of ADDIE’s strengths are connected to its weaknesses. Because of its sequential order, an error or misjudgment at the beginning is often carried through the entire process. The documentation for this model is laborious and produces a written plan that remains an abstract concept until the implementation phase, at which point major revisions of the project could be costly, if not impossible. The instructional designer’s written plans are subject to misinterpretation by others, and the proposal describes what to do but not necessarily how to do it. Amid all of this, it is easy to lose sight of the learner while focusing on the instruction (Branch, 2017, p. 24).

SAM (Successive Approximation Model) is one of the instructional design alternatives to the ADDIE model. It is a process model that relies on successive throwaway prototypes to communicate ideas visually and to provide opportunities for early and frequent formative testing of functionality with live learners. As opposed to ADDIE’s document-heavy, abstract process, SAM’s use of prototypes throughout the project allows troubleshooting from the very beginning and makes for clearer communication of ideas and feedback between the designer and the stakeholders. SAM also develops preliminary plans for all of the content from the beginning. Operating in this manner makes the SAM model very time-efficient, and therefore more cost-effective, in comparison to ADDIE (Allen & Merrill, 2017, pp. 33-35).

The basic SAM is a two-phase approach for simpler projects; the three-phase approach, for more complex projects, breaks the second phase into separate design and development phases. A key strength of the SAM approach is the Savvy Start, where the design team, including stakeholders, meets to brainstorm the initial prototypes, constantly probing for weaknesses by asking, “Why shouldn’t we do this?” From the outset, the team is committed to flexibility by generating multiple disposable iterations. SAM is a highly creative process that keeps the end in mind from the beginning. Its weakness is the possibility of getting stuck in the cycle of revision and having trouble finalizing the product (Allen & Merrill, 2017, pp. 33-35).

Pebble-in-the-Pond is another design alternative to ADDIE. It is a problem-centered approach in which the problem, something learners must solve, is the catalyst for instructional design. The model begins with the assumptions that some initial evaluation and analysis have occurred and that the solution to the problem is instruction rather than some other option (Allen & Merrill, 2017, p. 35).

In the pebble-in-the-pond metaphor, each concentric ripple represents a step in the process of instructional design, with the problem initiating the process. The “pebble” is the problem the student must be able to solve; thrown into the “instructional pond,” it causes the ripples that drive the design process.

  • The first ripple is the development of a prototype that illustrates the problem and how students can solve it (Merrill, 2013).
  • The second ripple is the creation and demonstration of a progressive series of prototypes that illustrate increasingly complex problem solving for students (Merrill, 2013).
  • Ripple number three comprises the determination and demonstration of the specific skills required to respond to the problems seen in the progression of prototypes (Merrill, 2002).
  • The fourth ripple is development of a structural framework for the problems in the progression using specific, task-centered instructional strategies and peer collaboration (Allen & Merrill, 2017, p. 35).
  • Ripple five is finalization of the prototype. Necessary components include design of “interface, navigation, and supplemental instructional materials” (Allen & Merrill, 2017, p. 35; Merrill, 2009).
  • The sixth ripple is the evaluation phase where data is collected to evaluate the course (formative evaluation) in order to make revisions to the prototype (Allen & Merrill, 2017, p. 35).

Unlike ADDIE, the Pebble model is very student-centered and learning-focused because it begins with the problem the student must solve and demonstrates the skills necessary for students to succeed. Use of prototypes throughout the process avoids some of ADDIE’s other pitfalls, such as inefficient use of time due to laborious documentation and miscommunication within the design team due to the abstract nature of a written plan. On the other hand, the Pebble model is limited because it lacks “the important steps of production, implementation, and summative evaluation” that are essential to the overall instructional design process (Allen & Merrill, 2017, p. 35).

Allen, M. W., & Merrill, M. D. (2017). SAM and Pebble-in-the-Pond: Two alternatives to the ADDIE model. In Reiser & Dempsey (Eds.), Trends and Issues in Instructional Design and Technology (pp. 31-41). New York, NY: Pearson.

Branch, R. M. (2017). Characteristics of foundational instructional design models. In Reiser & Dempsey (Eds.), Trends and Issues in Instructional Design and Technology (pp. 23-30). New York, NY: Pearson.