Phonemic Awareness Assessment Tools

Phonemic awareness refers to the specific capacity to focus on and manipulate phonemes, i.e., the individual sounds in spoken words. Elliott et al. (2019) identify phonemes as the smallest units of spoken language; they are typically combined to create syllables and words. Achieving phonemic awareness is crucial since it serves as the foundation for word recognition and spelling proficiency.

It is among the strongest predictors of a child’s capacity to learn to read during the first two years of school instruction. Phonemic awareness evaluation aims to assess several elements:

  • Phoneme matching: the capacity to identify words that begin with the same sound.
  • Phoneme isolation: the capability to isolate a single sound within a word.
  • Phoneme segmentation: the ability to divide a word into its individual sounds.
  • Phoneme blending: the ability to combine individual sounds into a single word.
  • Phoneme manipulation: the capability to move, change, or delete specific sounds within a word.

There are various tools for assessing phonemic awareness, including COR, HighScope, Teaching Strategies GOLD, the Developmental Reading Assessment, and DIBELS (Dynamic Indicators of Basic Early Literacy Skills). However, this presentation will focus on two primary evaluation approaches, DIBELS and the Developmental Reading Assessment. This presentation aims to:

  • Highlight the advantages and disadvantages of the selected assessment tools.
  • Analyze the quality of each evaluation approach and the validity of the data it would generate.
  • Elucidate how each assessment tool could be differentiated for a child with an oral language delay.

Developmental Reading Assessment

The Developmental Reading Assessment, commonly abbreviated as DRA, is an individually administered tool used to evaluate a child’s reading abilities. Teachers or instructors use it to determine a student’s reading level, comprehension, fluency, and accuracy (Jones, n.d.). The DRA analyzes each child’s reading proficiency through performance evaluation, recording, and systematic observation.

Advantages

DRA evaluations support learning standards – the outcomes of the evaluation process provide teachers or instructors with precise data on a student’s current independent reading level. The results can also help instructors identify underlying growth needs and areas of notable progress (Wadhams, 2017).

This evaluation tool allows instructors to assess three primary areas: reading preferences, book reading, and book selection. Depending on students’ ability and age, instructors may allow learners to select their preferred book or pre-select students’ reading material (Wadhams, 2017). A teacher may also ask the learner to retell the story in their own words or respond to particular comprehension questions. Furthermore, questions about reading preferences may be formulated according to a learner’s ability and age, ranging from basic questions to queries about different authors.

The approach allows for the accurate computation of scores. According to Jones (n.d.), an accuracy score on the DRA is determined by tallying the number of self-corrections, errors, and omissions made while the student reads. The instructor typically circles the accuracy rate in the official observation guide, then calculates a total comprehension score according to the DRA comprehension rubric and an independent reading level based on the DRA criteria included in the observation guidelines.
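To make the scoring concrete, the sketch below shows how such an accuracy rate could be computed. The formula (words read correctly divided by total words, with self-corrections tracked separately rather than counted as errors) follows the conventional running-record approach and is an illustrative assumption here, not the official DRA rubric.

```python
def reading_record_scores(total_words: int, errors: int, self_corrections: int):
    """Illustrative running-record style scoring (assumed, not the official DRA rubric).

    Returns the accuracy rate as a percentage and the self-correction ratio.
    Self-corrections are tallied separately and are not counted as errors.
    """
    if total_words <= 0:
        raise ValueError("total_words must be positive")
    accuracy = 100.0 * (total_words - errors) / total_words
    attempts = errors + self_corrections
    sc_ratio = self_corrections / attempts if attempts else 0.0
    return accuracy, sc_ratio

# Hypothetical reading record: 120 words, 6 errors, 3 self-corrections.
accuracy, sc_ratio = reading_record_scores(120, 6, 3)
print(f"Accuracy: {accuracy:.0f}%  Self-correction ratio: {sc_ratio:.2f}")  # 95%, 0.33
```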

The DRA evaluation tool further enhances an instructor’s capacity to monitor students’ reading levels. According to Wadhams (2017), the assessment outcomes can be used to track learners’ reading progress on a semi-annual or yearly basis. This allows educators to attain a comprehensive understanding of learners’ weaknesses and strengths and to identify any noticeable improvements. The tool can also be used to monitor students’ short-term progress at regular intervals, such as at the end of each semester.

Disadvantages

The DRA assessment tool does not provide adequate information to facilitate the practical evaluation of students in the upper grades.

The above-mentioned evaluation approach does not provide adequate guidelines for individualizing and modifying instruction to meet learners’ needs.

This assessment tool is susceptible to repetitiveness, and photocopied black-and-white materials may not ensure the reader’s proper engagement (Wadhams, 2017).

Quality and Validity of Developmental Reading Assessment

Internal consistency is a common reliability measure used to assess whether a set of items measures the same underlying trait or variable. The sample selected for the evaluation of this particular assessment tool comprised 1,676 learners in grades K-8. The DRA was administered to students as a field test in the spring of 2006, and item characteristics were tabulated. Generally, the metrics demonstrated moderate-to-high reliabilities (Wadhams, 2017). For instance, oral fluency had Cronbach’s alpha values of .74, .85, and .60, while comprehension recorded values of .81, .80, and .78 (Wadhams, 2017). These values are slightly lower than anticipated; however, they exhibit acceptable consistency.
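For readers unfamiliar with the statistic, the sketch below shows how Cronbach’s alpha is computed from an item-score matrix. The scores are invented for illustration only and are not the DRA field-test data.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k / (k - 1) * (1 - sum of item variances / variance of total scores)
    """
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Toy matrix: four students scored on three items (made-up numbers).
scores = np.array([[3, 4, 3],
                   [2, 2, 3],
                   [4, 4, 5],
                   [1, 2, 2]])
print(round(cronbach_alpha(scores), 2))  # ~0.94 on this toy data
```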

Concerning passage equivalency, oral fluency and reading comprehension were examined at levels 4 and 34, respectively (Wadhams, 2017). There were no significant variations in passage difficulty at the identified levels. Some degree of equivalence was also demonstrated between the non-fiction and fiction passages at levels 16, 28, and 38-80 (Wadhams, 2017).

The test-retest reliability outcomes revealed no statistically significant variation at the .05 confidence level (Wadhams, 2017). Correlation coefficients were considerably high, ranging from .93 to .99 (Wadhams, 2017). This indicates that the DRA demonstrates high test-retest reliability with minimal error linked to time sampling.
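Test-retest reliability of this kind is simply the correlation between scores from two administrations of the same measure. The snippet below illustrates the calculation with hypothetical scores; the values are invented and are not Wadhams’s data.

```python
import numpy as np

# Hypothetical DRA scores for six students on two administrations of the same passage.
first_administration = np.array([38, 40, 34, 44, 28, 40])
second_administration = np.array([40, 40, 34, 44, 30, 38])

# Test-retest reliability: Pearson correlation between the two administrations.
r = np.corrcoef(first_administration, second_administration)[0, 1]
print(round(r, 2))  # ~0.97 on these made-up scores, within the reported .93-.99 range
```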

Validity

A test’s content validity refers to the adequacy with which important content has been sampled and the appropriateness with which that content is addressed in the test. This form of validity was built into the DRA during its creation. The tests contained in this evaluation tool are authentic and realistic (Wadhams, 2017). Furthermore, the learner is asked to respond to the text in a manner suited to the genre.

DIBELS

The DIBELS tool was established to measure empirically validated skills linked to reading outcomes. Every metric in this assessment tool has been comprehensively researched and found to be a valid and reliable indicator of literacy development in early childhood (Elliott et al., 2019).

Advantages

DIBELS utilizes state-of-the-art, research-based approaches for designing and validating curriculum-based reading measures (Elliott et al., 2019). This, in turn, increases its applicability to many students across several grades.

It provides educators or instructors with guidelines for monitoring all students’ progress. DIBELS’ subtests allow for the analysis of crucial abilities and skills deemed essential for reading success. Furthermore, these subtests provide both progress-monitoring and benchmark forms (Elliott et al., 2019).

The outcomes (learners’ performance) obtained from the assessment process can be used by instructors to identify students who are likely to benefit from core, strategic, or intensive instruction.

The benchmark subtests for this assessment tool use advanced principles for designing tests to offer instructors pertinent data on all learners. Besides helping teachers to identify students at various risk levels (Smolkowski & Cummings, 2016), DIBELS’ subtests facilitate the analysis of items that can be used to predict subsequent instructional steps.

Another crucial benefit of this assessment tool is that it allows instructors to monitor students’ progress. Tracking learners’ progress, according to Smolkowski and Cummings (2016), is critical in ensuring that students identified for strategic and intensive support actually benefit from it as intended. Furthermore, it allows the interventionist to intensify and modify interventions until the anticipated improvement is attained.

Disadvantages

It is recommended that teachers administer this assessment tool (DIBELS) three times annually to evaluate a child’s performance in every area. Although it takes ten minutes or less to assess a child using this tool, some teachers may be reluctant to spend classroom time three times a year administering this test, particularly when they have other tests to give.

The fact that DIBELS incorporates difficult words such as “blurred” and “swot” in the assessment test also acts as a significant drawback. Smolkowski and Cummings (2016) argue that the inclusion of words such as “swot” and “blurred” is problematic because the test aims to evaluate whether learners can decode words, and such words may be hard for them to pronounce on the basis of their phonics knowledge alone.

The DIBELS test is usually administered to each student individually; therefore, if an instructor has a large class, this approach may be a shortcoming. Administering the test may consume a substantial amount of classroom time.

Quality and Validity of DIBELS

The DIBELS assessment tool has been studied comprehensively to ensure it meets the criteria for validity and reliability. The reliability evidence for DIBELS incorporates analyses of data from five different research surveys. According to the study outcomes, the tool’s reliability coefficients are consistently high across all three reliability forms (Elliott et al., 2019). The magnitude of the coefficients implies that DIBELS has minimal test error and that its users can trust the outcomes of the tool’s tests. Upon recurrent evaluations across several forms, DIBELS’ reliability increased considerably.

With regard to multiple-probe aggregate reliability, every DIBELS probe is fundamentally an item, and the assessment tool consists of several short, recurrent probes. The outcomes revealed that repeated evaluations can be used to examine learners’ skills over time, and educational decisions may be based on the recurrent alternate forms of the assessment tool (Elliott et al., 2019). The Spearman-Brown prophecy formula was used to determine the reliability of measuring students’ proficiency across several probes. The outcomes revealed that DIBELS’ probes, when combined, had reliability scores of .88 and .70 (Elliott et al., 2019). Upon combining four recurrent evaluations, the tool has a reliability score of more than .90, even though an individual probe has a lower score of .70 (Elliott et al., 2019).
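The reported figures can be reproduced with the Spearman-Brown prophecy formula itself, sketched below; the only inputs are the single-probe reliability of .70 and the number of aggregated probes.

```python
def spearman_brown(single_probe_reliability: float, n_probes: int) -> float:
    """Spearman-Brown prophecy formula: rho_k = k * rho / (1 + (k - 1) * rho)."""
    rho = single_probe_reliability
    return n_probes * rho / (1 + (n_probes - 1) * rho)

# A single probe with reliability .70, aggregated over four repeated assessments,
# yields a combined reliability just above .90, matching the reported result.
print(round(spearman_brown(0.70, 4), 3))  # 0.903
```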

Validity

Content validity

DIBELS’ measures were explicitly developed to link them directly to basic early literacy skills, to sensitivity to student performance, and to instructional modifications in these areas (Elliott et al., 2019). Its measures act as primary indicators of fundamental early reading skills.

The design specifications of the DIBELS measures have a direct bearing on their content validity. Every metric was developed according to specified criteria to foster its sensitivity and utility (Elliott et al., 2019). The DIBELS assessment tool also has adequate criterion-related and discriminant validity.

Classification consistency: As a benchmark performance metric, the composite score outcomes revealed that the test scores accurately predict subsequent performance at or above the benchmark goal for the following year (Elliott et al., 2019).

How the Assessment Tools Could Be Differentiated for a Child with an Oral Language Delay

To meet the needs of children with an oral language delay, these two assessment tools can be differentiated in four elements – learning environment, products, content, and process – according to students’ learning profiles, interests, and readiness.

Examples of content differentiation include:

  • Utilizing reading materials at different readability levels.
  • Recording text materials on tape.
  • Using vocabulary or spelling lists at students’ readiness levels.
  • Making use of reading buddies.
  • Organizing small groups to re-teach a skill or concept for learners.

Examples of process or activity differentiation include:

  • Providing manipulatives or additional hands-on support for learners with an oral language delay.
  • Varying the time these students are given to complete tasks as a means of providing extra support.

Examples of product differentiation include:

  • Presenting these learners with options for how to express what they have learned, for instance by writing letters or creating labeled murals.
  • Utilizing rubrics that match and extend their differing skill levels.

Examples of learning environment differentiation approaches include:

  • Ensuring that the learning environment allows the student to read in a quiet place without distraction.
  • Establishing clear guidelines for independent assessments that are compatible with each student’s individual needs.
  • Creating routines that allow these students to get help from instructors, especially when educators are busy with other learners and cannot come to their aid promptly.

References

Elliott, J., Lee, S. W., & Tollefson, N. (2019). A reliability and validity study of the DIBELS (Dynamic Indicators of Basic Early Literacy Skills—Modified). School Psychology Review, 30(1), 33–49.

Jones, K. (n.d.). How to use a developmental reading assessment. The Classroom.

Smolkowski, K., & Cummings, K. D. (2016). Evaluation of the DIBELS (sixth edition) diagnostic system for the selection of native and proficient English speakers at risk of reading difficulties. Journal of Psychoeducational Assessment, 34(2), 10–118.

Wadhams, K. (2017). Multiple measures of reading assessment and the effects on data-driven instruction [Unpublished master’s thesis]. State University of New York at Brockport.
