Evaluation is essential to understanding the efficacy of training programs because it makes it possible to estimate whether they have achieved their intended goals (Wang & Wilcox, 2006). In this regard, evaluation provides a framework for comprehending the cause-and-effect relationships underlying training techniques. It also helps organizations estimate the Return on Investment (ROI) of their training programs, enabling them to recalibrate those programs for improved efficiency (Russ-Eft & Preskill, 2005). Consequently, the process generates knowledge that enhances organizational decision-making by promoting a shared understanding of workplace processes through individual or team learning programs. Because evaluation plays such an important role in advancing knowledge, experience has shown that the modes or structures organizations adopt when implementing evaluation significantly shape how well the efficacy of training programs is understood. It is therefore crucial to understand the merits and demerits of different evaluation techniques before adopting one.
Critique of Kirkpatrick’s Evaluation Model
Kirkpatrick’s evaluation model has four levels of analysis: participants’ reactions to the training, the extent of new knowledge acquisition, the application of that knowledge on the job, and changes in Key Performance Indicators (KPIs) attributable to the training program (Peck, 2018). Because it spans both formative and summative levels of assessment, Kirkpatrick’s model is relatively easy to understand and implement. It also gives users immediate feedback, because it can gauge participants’ immediate reactions to training modules. Additionally, it highlights gaps in training programs, which form the foundation for improving future processes (Peck, 2018). Despite these advantages, the lack of identifiable measures for assessing KPIs hinders the model’s adoption (Peck, 2018). Furthermore, its four levels of evaluation are not correlated, meaning that it represents a taxonomy of evaluation rather than a reliable model of review (Peck, 2018). These issues explain its only partial implementation in many organizational contexts.
Evaluation and Content Design of Future Training Programs
Given the importance of evaluation in understanding the efficacy of training programs, the knowledge generated from evaluation processes could be useful in designing future training programs. For example, it could strengthen the relationship between training programs and organizational objectives, creating synchrony between an organization’s human resource plans and its vision (McGuire, 2014). At the same time, this knowledge could increase the ROI or value of training programs by identifying and eliminating redundant elements that do not contribute to the advancement of knowledge or practice (Russ-Eft & Preskill, 2005). In this regard, evaluation could improve the content design of future training programs.
Adoption of Training Models
Given the challenges that organizations encounter in adopting different types of evaluation models, I found an interesting quote shared by Wang and Wilcox (2006) stating, “The purpose of all classification systems is to help conceptualize and understand the nature, functions, or purposes of evaluation from different perspectives…It is a matter of preference, familiarity, or convention” (p. 530). This statement poses a challenge to the development of universal training systems because it implies that organizations are at liberty to adopt aspects of training programs as they wish, based on their unique environmental dynamics. The net effect is that implementing evaluation models is complex and requires more careful thought than had been previously conceived. This finding leads to a probing question: “How can we harmonize training programs to create a universal appeal that would facilitate adoption across various organizational settings?”
McGuire, D. (2014). Human resource development (2nd ed.). SAGE Publications.
Peck, D. (2018). The Kirkpatrick model of training evaluation [Video].
Russ-Eft, D., & Preskill, H. (2005). In search of the Holy Grail: Return on investment evaluation in human resource development. Advances in Developing Human Resources, 7(1), 71–85.
Wang, G. G., & Wilcox, D. (2006). Training evaluation: Knowing more than is practiced. Advances in Developing Human Resources, 8(4), 528–539.