Sources of Data for Program Evaluation
Data for Program Evaluation
In program evaluation, it is often necessary to draw on a wide variety of sources to collect data. Programs usually affect a large number of people at once and produce measurable effects on society, the workplace, and local or global communities. As a result, it is possible to compile statistics that display the tangible effects of a program and demonstrate its effectiveness. This section presents different sources of program evaluation data and discusses their value for assessing projects.
Local census data
Local census data is an important starting point for developing and applying programs. Using statistics about births, deaths, and population distribution, it is possible to assess the need for program implementation in an area (Taylor-Powell & Steele, n.d.). For example, if one is proposing a project to address the problems of the elderly population, its value will depend largely on the size of the elderly population in the area. These statistics can also reveal inherent flaws in some types of programs. Returning to the example of an elderly-focused program, assessing population statistics may show what percentage of the elderly population the program is actually helping, exposing the limitations of its design.
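As a simple illustration of that kind of check, the short Python sketch below computes what share of a target population a program reaches; all figures are hypothetical, and the calculation is only a first step in interpreting census data.

# Hypothetical census figures for an elderly-focused program.
elderly_population = 12_400   # residents aged 65+ in the service area (assumed)
program_participants = 310    # elderly residents the program serves (assumed)

coverage = program_participants / elderly_population
print(f"Program reaches {coverage:.1%} of the local elderly population")
# A low coverage figure (here ~2.5%) would flag a limitation in the design.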
Demographic data
Demographic data provides collective information about the state of the local community, including age, sex, race, and other characteristics. Statistics such as employment status, average income, and marital status can create a strong incentive for introducing change into the community (Taylor-Powell & Steele, n.d.). A program that seeks to address the racial pay gap or social inequality will be more valuable in communities where the issue is more pronounced. Similarly, if demographics do not confirm the presence of the issue an intervention is trying to solve, the validity of the project may be called into question.
Media feature stories
Media remains a vital component of understanding the success of programs and other types of campaigns. Media outlets often provide coverage of programs, offering journalistic, professional, or everyday opinions about their effectiveness or timeliness (Taylor-Powell & Steele, n.d.). As a result, both the potential and the practical value of a program may be assessed. Media may also provide valuable criticism or feedback, revealing points of improvement or concern.
Business statistics
Much like demographic data and other types of statistics, business-related information can establish whether a program is needed and valuable. Business statistics describe the problems and emergent trends of the professional sphere, providing a reason to create programs in the first place (Taylor-Powell & Steele, n.d.). The effectiveness of the subsequent change can then be tracked through changes in those same statistics.
Program documents
Program documents exist for the explicit purpose of recording the effects of a particular program. They contain information about the initial goals of the program and its actual effects. By comparing these two types of data, it is possible to tell whether a program was successful (Taylor-Powell & Steele, n.d.). This type of evaluation also makes flaws in planning, results, and other considerations easier to see.
Using Data Sources to Guide Program Evaluation
Withdrawal Rates
Withdrawal rates can indicate a project’s ability to engage its participants at every step of the process, as well as the general viability of the project as a whole. If a large number of people withdraw from a project at a specific point in time, it may be necessary to examine that aspect of the project. This data cannot be used separately from other types of statistics, however, as there are many potential reasons why individuals may withdraw from a course. Only by assessing all available information is it possible to understand more accurately why people leave a project.
Course failure rates
Course failure rates can tell organizers much about the difficulty of the project, its demographic, and the potential for improvement. If a large number of participants fail, this can be taken as an indication that some part of the project does not sufficiently prepare them for success. Projects may then be changed or adjusted to help people succeed more consistently. Alternatively, high failure rates may indicate that the wrong population was targeted. Much like withdrawal rates, course failure rates need to be coupled with other evaluation methods and statistics, as there are many reasons for failure that lie outside the project’s influence.
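As a minimal sketch of how these two rates might be computed, the following Python example works through a handful of hypothetical enrollment records; the record format, statuses, and numbers are all assumptions made for illustration.

from collections import Counter

# Hypothetical enrollment records: one entry per participant.
# "status" is 'completed', 'failed', or 'withdrew'; "week_left" marks
# the week of withdrawal (None for participants who stayed).
records = [
    {"status": "completed", "week_left": None},
    {"status": "withdrew", "week_left": 3},
    {"status": "failed", "week_left": None},
    {"status": "withdrew", "week_left": 3},
    {"status": "completed", "week_left": None},
]

total = len(records)
withdrawal_rate = sum(r["status"] == "withdrew" for r in records) / total
failure_rate = sum(r["status"] == "failed" for r in records) / total

# Clustered withdrawal weeks can point to a specific stage of the project
# that deserves closer examination.
drop_weeks = Counter(r["week_left"] for r in records if r["week_left"] is not None)

print(f"Withdrawal rate: {withdrawal_rate:.0%}, failure rate: {failure_rate:.0%}")
print(f"Withdrawals by week: {dict(drop_weeks)}")

As argued above, a clustered drop-off week would only prompt a closer look at that stage of the project, not explain it on its own.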
Summative assessments
Summative assessments provide information about student learning at the point when the course is complete, highlighting how the information was processed, retained, and understood by the participants. Assessing how much the program was able to teach the students is vital to understanding its success. Summative assessments cannot be interpreted properly without formative ones, which provide the necessary context for why certain competencies developed less than others.
Formative assessments
Formative assessments provide information on how students learn and how the program affects their ability to receive, use, and apply the necessary competencies. By using formative assessments, it is possible to understand which parts of the program require change or adjustment, directly influencing the future of a project (Carnegie Mellon University, n.d.). This measure must be used together with summative assessments, because any change to a project must ultimately lead to a tangible result before it can be considered successful.
Internal test scores / External test scores
Student end-of-course survey answers
Student survey answers provide a look into how the course was perceived and what opinions the students formed by the end of it. As the direct beneficiaries of a program, students provide invaluable information about its success (Carnegie Mellon University, n.d.). Through various types of surveys, they can comment on their learning process, its successes, and its pitfalls. This data must be used together with other statistics, such as course withdrawal or failure rates, in order to provide the necessary context for student opinion.
Faculty end-of-course survey answers
Faculty survey answers give insight into how well the program was prepared, implemented, and delivered. Whatever the flaws or strengths of a program, faculty members must put it into motion, which brings its own set of variables into consideration. Faculty may either facilitate a program’s completion or prevent its results from being accurate. It is therefore necessary to assess faculty feedback to identify potential issues with both program planning and implementation.
To triangulate areas of improvement, formative and summative assessments may be used together with student feedback. This set of criteria allows program managers to see how their program has affected learning and which results and reactions it produced. By combining these three types of evaluation, it is possible to see the effects of a project across all parts of its implementation.
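A rough sketch of what such triangulation could look like in code is given below; the per-student formative scores, summative scores, feedback ratings, and thresholds are all hypothetical, and a real evaluation would involve far richer data.

# Hypothetical per-student data from the three evaluation sources.
formative = {"A": 0.55, "B": 0.80, "C": 0.45}   # average formative score (0-1)
summative = {"A": 0.50, "B": 0.85, "C": 0.40}   # final assessment score (0-1)
feedback  = {"A": 2, "B": 4, "C": 2}            # end-of-course satisfaction (1-5)

THRESHOLD_SCORE, THRESHOLD_RATING = 0.6, 3      # assumed cutoffs for illustration

# A case is flagged only when all three sources agree that something went
# wrong, which is the point of triangulating the data.
flagged = [
    s for s in formative
    if formative[s] < THRESHOLD_SCORE
    and summative[s] < THRESHOLD_SCORE
    and feedback[s] < THRESHOLD_RATING
]
print("Students needing follow-up:", flagged)  # -> ['A', 'C']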
Surveys for the Same Course
Student end-of-course survey
- I am satisfied with the diversity of the course curriculum
- Strongly agree – Agree – Neutral – Disagree – Strongly disagree
- No important considerations were left out of the course curriculum
- Strongly agree – Agree – Neutral – Disagree – Strongly disagree
- The parts of the course complemented and enhanced each other, building on its subjects and themes (“12 amazing course evaluation survey questions,” 2022)
- Strongly agree – Agree – Neutral – Disagree – Strongly disagree
- I am satisfied with my learning experience
- Strongly agree – Agree – Neutral – Disagree – Strongly disagree
- I have received useful and detailed feedback (“12 amazing course evaluation survey questions,” 2022)
- Strongly agree – Agree – Neutral – Disagree – Strongly disagree
- Instructors guided me throughout the curriculum
- Strongly agree – Agree – Neutral – Disagree – Strongly disagree
- Teachers were able to provide the necessary context and explanations
- Strongly agree – Agree – Neutral – Disagree – Strongly disagree
- All supplementary material and resources were made available
- Strongly agree – Agree – Neutral – Disagree – Strongly disagree
- Instructors treated my learning journey and understanding as vital
- Strongly agree – Agree – Neutral – Disagree – Strongly disagree
- Instructors showed me consideration and respect
- Strongly agree – Agree – Neutral – Disagree – Strongly disagree
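For analysis, the Likert items above could be coded numerically; the sketch below applies one conventional coding (Strongly agree = 5 through Strongly disagree = 1) to a few hypothetical responses and reports a mean score per item.

from statistics import mean

# Conventional numeric coding for the five-point scale used above.
LIKERT = {"Strongly agree": 5, "Agree": 4, "Neutral": 3,
          "Disagree": 2, "Strongly disagree": 1}

# Hypothetical responses: item text -> list of answers from students.
responses = {
    "I am satisfied with my learning experience":
        ["Agree", "Strongly agree", "Neutral", "Agree"],
    "I have received useful and detailed feedback":
        ["Disagree", "Neutral", "Agree", "Disagree"],
}

for item, answers in responses.items():
    score = mean(LIKERT[a] for a in answers)
    print(f"{item}: {score:.2f} / 5")

Item means computed this way make it easy to compare areas of the course, though, as noted earlier, they still need the context of withdrawal and failure rates.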
Faculty end-of-course survey
- How confident were you in the success of the program starting out?
- Did your confidence in the project’s success change by the end of the program?
- Were you able to accomplish the educational goals set before you?
- Do you think the students were able to engage in the program, actively learn and participate?
- Were there any issues during the course of the project?
- What would you want to be changed in the program?
- What would you want to be improved in the program?
- Would you advocate for this program methodology to be used for other learning experiences?
- What do you think your own teaching style or convictions brought to the discussions central to the project?
- How important is individual teaching style to the educational process?
I will use the results of these surveys to understand the average experience of the project from both sides. For students, I will be able to locate key areas for change and improvement, while also assessing the capability of the instructors. From the teachers’ side, I will be able to understand which parts of the program can be changed to enhance the student learning experience.
References
12 amazing course evaluation survey questions. (2022). QuestionPro. Web.
Carnegie Mellon University. (n.d.). Formative vs. summative assessment. Eberly Center, Carnegie Mellon University. Web.
Taylor-Powell, E., & Steele, S. (n.d.). Collecting evaluation data: An overview. Intergroup Resources. Web.