There are numerous initiatives and interventions worldwide that have been established in local communities to improve conditions. Communities are working together to reduce crime, provide secure and affordable housing for everyone, or help more children in school achieve their best, to name just a few examples. In recent years, a significant movement has emerged to better understand and improve practice through evaluation. Most program managers continue to assess the worth and effect of their work by asking questions, consulting with partners, conducting assessments, and gathering feedback. The information gathered is then used to improve the program. The many kinds of evaluations that may be performed and used throughout a program's life cycle must be understood: process, impact, outcome, and summative evaluation are the primary types. The systematic use of evaluation has resolved numerous difficulties and contributed to the betterment of many community organizations.
Aspects of Improving the Evaluation Process
There are several sorts of assessments; thus, assessment techniques must be tailored to the evaluation's purpose and goals. According to McCain (2016), evaluation is about quality and improving learning. When determining the merit criteria for an evaluation, an assessor should prioritize the values of the various stakeholder groups. According to Hunt et al. (2016), “the evaluation system must provide teachers with objective, content-specific feedback to inform practice and guide them to improve their teaching effectiveness” (p. 21). In addition to reflecting a plural democracy approach, the descriptive valuing method also addresses evaluation issues in terms of stakeholders' interests, making it more probable that the evaluation findings will be used.
By considering diverse views of the program's value, an evaluator can also reduce personal bias, which is difficult to avoid when the assessor alone assigns values. According to Liu et al. (2019), “process evaluation is widely accepted as an effective strategy to improve product quality and shorten its development cycle” (p. 19312). However, if no consensus can be reached among the various actors on the question of importance, the assessor should judge the program's worth. According to Putra (2016), “process route evaluation is a part of research and development (R&D)” (p. 238). For example, while choosing merit criteria for the evaluation, an evaluator might undertake a needs assessment to determine which key stakeholder group has the most influence on the program and prioritize its values.
An assessor must know and accept both quantitative and qualitative approaches as available options for carrying out an assessment. According to Okolelova et al. (2017), an innovative project can be considered as a set of possible implementations. An evaluator should meet with clients before the evaluation takes place and obtain their views on the recommended strategy. A case study is not an appropriate technique if clients firmly expect specific quantitative data and polished charts in the final report. If clients wish to experience minimal intrusion into the program, the first choice should not be an experimental or quasi-experimental design. However, this does not mean that evaluators should instantly abandon their best assessment techniques if clients do not accept such practices. Rather, before implementing any methodology, evaluators must persuade their clients of the suggested evaluation techniques and reach agreement.
An assessor should stress the instrumental use of the assessment findings and actively encourage the distribution of the evaluation outcomes. It is hard to imagine that an evaluation would be undertaken without an interest in knowing the effects of a social program, particularly one that costs substantial sums and affects a vast population. Neglecting the instrumental use of evaluations is highly reckless and has a harmful influence on the evaluation profession. The enlightenment use of evaluation sounds intriguing but is, in fact, limited. Firstly, the required extent of data collection is difficult to establish: it is hard to identify in advance which precise facts will be helpful to potential users of the evaluation. Secondly, without telling users what works in the program and what does not, it is not easy to enlighten them. If people want to learn from winning or losing, they must assess success and failure. Clients' expectations and time restrictions frequently make enlightenment-oriented evaluation hard to carry out: the inputs, implementation, results, and attitudes of the individuals in the program must be measured over time, yet most clients have little time for such an assessment.
Agreements summarize the assessment methods and define the duties and obligations of everyone involved. An agreement outlines how assessment activities are carried out. According to Grzeszczyk and Klimek (2018), “evaluation can be presented as a systematic investigation concerning the quality of projects” (p. 135). Contract elements include statements on the intended purpose, use, and techniques, as well as a summary of the deliverables, managers, time, and money. According to Fontanillas et al. (2016), “the evaluation of the results focuses on an interactive assessment based on the analysis of the projects developed by other groups at the final stage of the subject” (p. 1). The complexity of the agreement will depend on the ties between the participants: for instance, a legal contract, a comprehensive protocol, or a simple memorandum of understanding might be used. Regardless of formality, drafting a separate agreement gives a chance to verify the shared understanding required for a satisfactory assessment. It is also a foundation for changing procedures if this becomes necessary.
Numerous actions might be focused on the assessment design. According to Owen (2020), everyone involved with policy and program development and delivery is being asked to plan more carefully, reflect more critically, and justify their decisions. For example, advocates and skeptics of the initiative alike might be engaged to determine the political feasibility of the suggested assessment questions. Depending on the development stage of the program, a menu of possible assessments may be distributed among stakeholders so that they can identify which is most appropriate. Interviews with certain target users might be performed to identify their information needs and timing for action. Resource needs may be lowered if the users are prepared to utilize more practical but less accurate assessment techniques.
There are various activities and initiatives in local communities across the world to improve conditions. There are several kinds of evaluations, so evaluation procedures must be adjusted to the evaluation's purpose and objectives. There must be an understanding of the numerous types of assessments carried out during the program's life cycle; the primary forms are process, impact, outcome, and summative evaluation. The systematic use of assessments has addressed many problems and helped improve many community organizations. The assessor should know how to conduct an evaluation and accept both quantitative and qualitative techniques.
An assessor should meet with clients and get their comments on the proposed approach before the assessment takes place. If clients have a clear expectation of getting specific quantitative data and polished diagrams in the final report, a case study is unsuitable. An evaluator must emphasize the instrumental application of the evaluation results and actively promote their circulation. It is hard to imagine evaluating the impacts of a social program without genuine interest in the findings, especially if the program costs a large amount of money and touches a vast population. Neglecting the instrumental use of evaluations is reckless and damages the evaluation profession. The enlightenment use of evaluation is intriguing but limited. Numerous actions might be focused on the assessment design. Depending on the development stage of the program, a menu of possible assessments may be distributed among stakeholders so that they can identify which is most appropriate. Interviews with certain target users might be performed to identify their information needs and timing for action.
Fontanillas, T. R., Carbonell, M. R., & Catasús, M. G. (2016). E-assessment process: Giving a voice to online learners. International Journal of Educational Technology in Higher Education, 13(1), 1-14. Web.
Grzeszczyk, T. A., & Klimek, D. (2018). The model of social innovation project evaluation. In Proceedings of the Asia-Pacific Social Science and Modern Education Conference (10). Atlantis Press. Web.
Hunt, K., Gurvitch, R., & Lund, J. L. (2016). Teacher evaluation: Done to you or with you? Journal of Physical Education, Recreation & Dance, 87(9), 21-27. Web.
Liu, J., Zhou, H., Liu, X., Tian, G., Wu, M., Cao, L., & Wang, W. (2019). Dynamic evaluation method of machining process planning based on digital twin. IEEE Access, 7, 19312-19323. Web.
McCain, D. V. (2016). Evaluation basics. Association for Talent Development.
Okolelova, E. Y., Shulgina, L. V., Trukhina, N. I., Shibaeva, M. A., & Shulgin, A. V. (2017). The mechanism of evaluation under the conditions of uncertainty of innovational project as a random process. In Perspectives on the Use of New Information and Communication Technology (ICT) in the Modern Economy (pp. 56-63). Springer, Cham.
Owen, J. M. (2020). Program evaluation: Forms and approaches. Routledge.
Putra, Z. A. (2016). Early phase process evaluation: Industrial practices. Indonesian Journal of Science and Technology, 1(2), 238-248. Web.