Program Evaluation Models and Related Theories

Currently, I am studying a curriculum dedicated to the subject of education. It covers many crucial concepts, theories, and aspects, ranging from educator-learner relationships to the organization of educational policies. This program is essential and requires continuous development founded on constant evaluation and evidence-based suggestions. Given the wide range of elements to learn and the large number of participants, the program can be assessed through a variety of approaches. Below, a combined method of program evaluation is suggested, based on the Logic Model and Kirkpatrick’s four-level evaluation model.

It seems reasonable to state that, generally, there are five assessment approaches or perspectives through which an educational program may be evaluated: expert-oriented, consumer-oriented, program-oriented, decision-oriented, and participant-oriented. My specific area allows a combination of these approaches, and the following rationale is founded on that premise. The Logic Model belongs to the program-oriented approaches mentioned above and can be utilized within the framework of my chosen subject.

A significant assessment approach that can be used is the Logic Model, which carefully considers the relations between program components, as well as these components’ relations to the program’s contexts. There are four essential elements of the Model: inputs, activities, outputs, and outcomes (Stegemann & Jaciw, 2018). Inputs encompass all resources expected to be available to an educational program, from facilities to staff skills. Activities are the set of strategies, innovations, and shifts contained in the program (Rajashekara et al., 2020). Outputs can be defined as indicators that the program’s activities are taking place or being completed. Finally, outcomes “define the short-term, medium-term, and longer range changes intended as a result of the program’s activities” (Frye & Hemmer, 2012, p. 295). It should be noted that this Model is particularly valuable when implementing innovations or revising a program.
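To make the four components concrete, they can be sketched as a simple data container. This is a hypothetical illustration only; the class and field names are mine, not drawn from the cited sources:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    """Illustrative sketch of the Logic Model's four components."""
    inputs: List[str] = field(default_factory=list)      # resources: facilities, staff skills, funding
    activities: List[str] = field(default_factory=list)  # strategies, innovations, shifts
    outputs: List[str] = field(default_factory=list)     # indicators that activities are occurring
    outcomes: List[str] = field(default_factory=list)    # short-, medium-, and longer-range changes

# Hypothetical example for an education program
program = LogicModel(
    inputs=["classrooms", "trained faculty"],
    activities=["revised seminars", "peer mentoring"],
    outputs=["sessions delivered", "attendance records"],
    outcomes=["improved learner performance"],
)
```

Listing the elements this way highlights the Model’s logic: each component is expected to feed the next, from resources through activities to intended changes.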

The latter characteristic is especially relevant under the current conditions of the pandemic and the forced changes that the educational sector in general, and the chosen educational program in particular, is facing. Solving the leading tasks of advancing and diversifying the education program within the given scope, namely ensuring its accessibility, quality, and efficiency, presupposes large-scale structural, institutional, organizational, and economic changes. First of all, it is necessary to significantly update the content of education, bringing it in line with the requirements of the time and development prospects.

Quality implementation of the program includes an adequate system of pedagogical measurements. The purpose of pedagogical measurements is, in one way or another, connected with control, accounting, diagnostics, research, monitoring, examination, stimulation, assessment, and marking; that is, with the actions, mechanisms, and means that occupy their own niche in the educational process and help ensure its integrity, productivity, and focus on the development of the student’s personality. In this regard, it should be emphasized that the Logic Model lacks this focus and concentrates instead on the peculiarities of the program’s elements (Newton et al., 2013). Hence, this disadvantage may be compensated for by appealing to a combined approach.

The four-level model for evaluating the effectiveness of education, which has become widespread in international practice, was proposed in 1959 by Donald Kirkpatrick. The first level is “Reaction”: at this level, the reaction of the program participants to the training is ascertained (Paull et al., 2016). The second level is “Learning,” whose main task is to evaluate the knowledge and skills gained during the training. To assess this level, specially designed tests, questionnaires, and studies are used, the purpose of which is to quantitatively measure progress in the knowledge gained.

The third level is “Behavior”; Kirkpatrick defines this level as the most important and difficult. It is here that the evaluator assesses how the participants’ behavior has changed as a result of the training and how the acquired knowledge and skills are applied. The fourth level is “Results,” and assessment at this level is the most difficult and costly. At this stage, the assessment determines how the participants’ performance has changed after the training (Alsalamah & Callinan, 2021). Kirkpatrick stresses that results include changes that have occurred because the participants have been educated. At this level, the most important thing is to choose an indicator that is influenced by the training as directly as possible and to measure it both before and after the education (Reio Jr. et al., 2017). To achieve a meaningful evaluation process, the fourth element of the Logic Model, outcomes, may be replaced by Kirkpatrick’s approach. Thus, the two tools are combined, and the Logic Model’s lack of focus on learner-oriented outcomes is addressed.
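The substitution described above can be sketched schematically: the Logic Model keeps its inputs, activities, and outputs, while the plain outcome list is replaced by a structure holding Kirkpatrick’s four levels. All names and example values here are hypothetical, chosen for illustration:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class KirkpatrickOutcomes:
    """Kirkpatrick's four levels, standing in for the plain outcome list."""
    reaction: str = ""  # Level 1: participants' reaction to the training
    learning: str = ""  # Level 2: knowledge and skills gained (tests, questionnaires)
    behavior: str = ""  # Level 3: changes in how participants apply the training
    results: str = ""   # Level 4: performance changes attributable to the training

@dataclass
class CombinedModel:
    """Logic Model whose outcome component is Kirkpatrick's four levels."""
    inputs: List[str] = field(default_factory=list)
    activities: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    outcomes: KirkpatrickOutcomes = field(default_factory=KirkpatrickOutcomes)

combined = CombinedModel(
    outcomes=KirkpatrickOutcomes(
        reaction="post-session satisfaction surveys",
        learning="pre/post knowledge tests",
        behavior="observation of new techniques in practice",
        results="learner performance measured before and after",
    )
)
```

The design point is that the three upstream components are untouched; only the outcome slot changes type, which mirrors the proposal to swap in Kirkpatrick’s levels without disturbing the rest of the Logic Model.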

To conclude, a combined approach to evaluating the program dedicated to the subject of education has been provided. It was claimed that, under the current conditions of the pandemic, this program needs constant improvement, innovation, and revision. The Logic Model may be used to attain such a state of affairs, as this is its crucial advantage. However, it was also noted that the Model contains one visible flaw: it lacks a focus on learner-related outcomes. Kirkpatrick’s four-level evaluation approach was therefore proposed to resolve this issue; specifically, replacing the outcome element of the Logic Model with Kirkpatrick’s model allows the two methods to complement each other. This recommendation can be productively implemented when evaluating the chosen program, as well as other wide-ranging subjects.

References
Alsalamah, A., & Callinan, C. (2021). Adaptation of Kirkpatrick’s four-level model of training criteria to evaluate training programmes for head teachers. Education Sciences, 11(116), 1–25.

Frye, A. W., & Hemmer, P. A. (2012). Program evaluation models and related theories: AMEE Guide No. 67. Medical Teacher, 34(1), 288–299.

Newton, X. A., Poon, R. C., Nunes, N. L., & Stone, E. M. (2013). Research on teacher education programs: logic model approach. Evaluation and program planning, 36(1), 88–96.

Paull, M., Whitsed, C., & Girardi, A. (2016). Applying the Kirkpatrick model: Evaluating an Interaction for Learning Framework curriculum intervention. Issues in Educational Research, 26(3), 490–507.

Rajashekara, S., Naik, A. D., Campbell, C. M., Gregory, M. E., Rosen, T., Engebretson, A., & Godwin, K. M. (2020). Using a Logic Model to design and evaluate a quality improvement leadership course. Academic Medicine, 95(8), 1201–1206.

Reio Jr., T. G., Rocco, T. S., Smith, D. H., & Chang, E. (2017). A critique of Kirkpatrick’s evaluation model. New Horizons, 29(2), 35–53.

Stegemann, K. C., & Jaciw, A. P. (2018). Making it logical: Implementation of inclusive education using a Logic Model framework. Learning Disabilities: A Contemporary Journal, 16(1), 3–18.
