Meta-Assessment: Evaluating Assessment Activities

Evaluation Critique

As part of the Assessment Movement, which emerged in higher education alongside standardization and policy initiatives for evaluation, John C. Ory proposes a set of objective rules and principles that every assessment or evaluation process at a university should follow. The purpose of this evaluation is not only to ensure a baseline standard but also to contribute to the fairness and accuracy of assessments in academic settings. Assessments can be highly beneficial when they provide useful feedback and information for development, rather than being completed merely for a policy ‘checkmark.’ The evaluation aims to provide a framework for assessment activities at the level of faculty, programs, and the institution as a whole. Ory, the author of the article, is a researcher and professor at the University of Illinois, working in the Department of Education Policy, Organization and Leadership as well as the Office of Instructional Resources. His background has implications for the evaluation: he was likely involved in assessment processes at his own university and therefore understood the nuances and inaccuracies involved. The evaluation is highly methodical and attentive to detail, indicating key steps that assessment staff should undertake to remain objective.

Description of Evaluation

The evaluation approach utilized in the article is program impact theory. The author recognized that assessment at colleges was inherently flawed as a program and had various methodological and practical limitations. The evaluation sought to change this state of affairs by offering systematic recommendations for program change. The research design can best be described as summative evaluation, which focuses primarily on evaluating a project at its completion. Essentially, the researcher examines the values and outputs of the program against the projected result. The same occurs in the article, as Ory sought to examine the value of the assessment process and determine the accuracy of its outputs from various perspectives. The author offered several operational definitions, such as the meaning of assessment, and defined each category explored, such as utility standards and feasibility standards.

No direct sample is described in the article. However, it is implied that Ory (1992) drew on assessments conducted at his own university. If the article had an empirical application, it would likely use convenience sampling, with readily available data. No data collection or analysis process is described either. Hypothetically, to study the topic in depth, the author could have collected data through in-person interviews with the auditors and staff conducting the assessments in order to identify their training, approaches, and common patterns. In turn, the best form of analysis would be coding the interview data and performing thematic content analysis. The guidelines presented in the article are organized by major themes and then by individual categories with their respective descriptors.

The researcher does not address validity and reliability. However, Ory does note that he hopes his recommendations will be used and tested in real-world assessment applications. Through such application, it would be possible to gather data to further improve the assessment process and ensure that it is standardized across higher education. Although not results in the strict sense, Ory presents a complex and detailed list of guidelines and recommendations for improving the assessment process. There are four distinct categories (utility standards, feasibility standards, propriety standards, and accuracy standards), each with five to ten subcategories covering the common elements to consider in the assessment process. Ory calls this meta-evaluation. The main limitation is that the methodology Ory used is unclear: how did he derive these categories of standards, and are some more critical than others? He never answers these questions. However, the implications of the approach and model developed by Ory are significant. He notes that the proposed standards can be used for commissioning and planning assessments, for conducting them, and for evaluating completed assessments. They offer a standardized approach that brings professional standards to the field, enhancing its capability to answer questions.

Assessment of Evaluation

The evaluation approach and design were an appropriate fit, given that this was early research and observation in the field. A summative evaluation design allows the evaluator to measure the degree of success of a project, which is then shared with the target audience and stakeholders. The complexities of the higher education assessment process required a detailed and holistic approach able to identify all the relevant factors and shortfalls that could be used for standards correction or improvement. The design struggled, however, in that there was no quantitative support or analysis for the presented data; it was based on qualitative observation and analysis. The conclusions presented by the author are nonetheless well warranted, as significant reforms to assessment in higher education were needed and no standards or regulations existed to guide them. Every party acted on its own, which created chaos and confusion and introduced bias into the process.

The research was conducted ethically and responsibly, and all observed data remained confidential. No individual was blamed or compromised; instead, Ory sought to create systemic change in the assessment process. The primary strength of the evaluation, one could argue, is its in-depth detail and categorization. The breakdown of recommendations by category makes the guidelines easy to navigate and implement. The major weakness is the lack of practical application and supporting data. Ory could have provided a study in which these principles were applied to the assessment process and compared with the earlier unregulated approaches, to determine whether his recommendations were genuine improvements that could then be turned into policy for efficient and effective application.


Concerned with the state of assessment in higher education, the author, an educator and administrator, presents a set of recommendations on the many ways the assessment process can be improved. His evaluation and subsequent guidelines are highly critical but thorough and categorical. Ory breaks the evaluation down into simple terms and explains each step logically. However, he notes that future study and evaluation are needed. Assessment should be a continuous process of improvement, not a static and outdated concept. It can greatly benefit those it evaluates, but it must maintain high standards in order to remain relevant and useful. Future evaluations should encompass a greater number of holistic factors while also striving to develop a methodological approach to studying the available data.


Ory, J. C. (1992). Meta-assessment: Evaluating assessment activities. Research in Higher Education, 33(4), 467–481.
