IMPACT
Language Rating Scale

Examine how language may affect everyday social interactions and/or academic performance
Analyze adverse IMPACT of language as required by IDEIA

OVERVIEW

The IMPACT Language Rating Scale (ages 5-21) is an objective measure of language functioning based on the informal observations of clinicians, teachers, and parents. This tool aids in the clinical determination of a diagnosis/special education eligibility by examining how language difficulties may affect everyday social interactions and/or academic performance (for educational planning purposes).

The IMPACT Language Rating Scale evaluates the impact of a child’s oral communication on their social interactions, academic life, and home/after-school life. The rating scale asks parents, teachers, and clinicians to rate the various components of language functioning on a 4-point scale (“never,” “sometimes,” “often,” and “typically”) and yields a percentile rank and standard score. By using this rating scale, we are able to develop a better understanding of how a student’s communication difficulties/differences may impact language development, academic performance, and peer relationships.

This norm-referenced spoken language rating scale is composed of 45 test items and has three separate forms to be completed by the clinician, parent(s), and teacher(s). It is an accurate and reliable assessment tool that provides valid results based on informal observations of spoken language, language processing and integration, and social interactions in the school and home environments. Normative data for the test are based on a nationally representative sample of 1064 typically developing children and young adults in the United States.

highlights

Helps measure impact on educational progress. Questions presented in a video-based format. Automated scoring. Parents and teachers can easily access the rating forms online (by phone, tablet, etc.). Parent Spanish forms and instructions included.

ages

5 to 21 years

scores

Standard scores, percentile ranks, impact analysis

psychometric data

The nationwide standardization sample consisted of 1064 examinees (typically developing), stratified to match the most recent U.S. Census data on gender, race/ethnicity, and region.

administration time

30 to 45 mins for all 3 rating scales

format

Online rating scale with accompanying videos that narrate and explain the questions. Automated scoring

Examples of the IMPACT Social Communication Rating Scale Questions

Frequently asked questions

The nationwide standardization sample consisted of 1064 examinees (typically developing), stratified to match the most recent U.S. Census data on gender, race/ethnicity, and region.

The Impact Language Rating Scale can be accessed as part of the Video Assessment Tools annual membership which costs $125 annually ($24.99 monthly), OR it can be accessed as part of the Social Squad intervention program membership at the Video Learning Squad site which costs $79 annually. 

The IMPACT Language Rating Scale was developed at the Lavi Institute by Adriana Lavi, PhD, CCC-SLP (author of the Clinical Assessment of Pragmatics (CAPs) test, the Social Squad, the IMPACT Language Rating Scale, etc.).

All standardization project procedures were implemented in compliance with the Standards for Educational and Psychological Testing (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education [AERA, APA, and NCME], 2014). Additionally, all standardization project procedures were reviewed and approved by IntegReview IRB (Advarra), an accredited and certified independent institutional review board, which is organized and operates in compliance with the US federal regulations (including, but not limited to 21 CFR Parts 50 and 56, and 45 CFR Part 46), various guidelines as applicable (both domestic and international, including but not limited to OHRP, FDA, EPA, ICH GCP as specific to IRB review, Canadian Food and Drug Regulations, the Tri-Council Policy Statement 2, and CIOMS), and the ethical principles underlying the involvement of human subjects in research (including The Belmont Report, Nuremberg Code, Declaration of Helsinki).

This is an online rating scale with accompanying videos that narrate and explain the questions. SLPs, teachers and parents are able to access the rating scale forms online. SLPs use automated scoring online to obtain standard scores and to generate a report. 

Yes, please contact us to request a quote.

If you have more questions or would like to connect with a representative please Contact Us

Highlights of the IMPACT Language Rating Scale

The results of the IMPACT Language Functioning Rating Scale provide information on the spoken language comprehension and expressive language skills that children and adolescents require to succeed in school and social situations. This rating scale is particularly valuable for evaluating individuals who have delays in spoken language comprehension, expressive language, language integration, literacy, and social interactions. Data obtained from the IMPACT Language Functioning Rating Scale are useful in determining whether a student meets eligibility criteria for a language impairment.

Strong Psychometric Properties

The IMPACT Language Functioning Rating Scale was normed on a nationwide standardization sample of 1064 examinees. The sample was stratified to match the most recent U.S. Census data on gender, race/ethnicity, and region. Please refer to Chapter 4 for more information on the standardization process.

The IMPACT Language Functioning Rating Scale areas have strong sensitivity and specificity (above 80%), high internal consistency, and strong test-retest reliability. Criterion-related validity studies were conducted during standardization with 1064 participants. Please refer to Chapter 5 for a summary of the reliability and validity studies.

The contextual and theoretical background sections in Chapters 1 and 2 provide evidence of the construct validity of the IMPACT Language Functioning Rating Scale. Additionally, please refer to Chapter 1 for descriptions of each language skill observed and for literature reviews supporting this type of measurement.

Ease and Efficiency of Administration and Scoring

The IMPACT Language Functioning Rating Scale consists of three observational rating scales, one each for the clinician, the parent, and the teacher. All IMPACT rating scales and the score-conversion software are available on the Video Assessment Tools website. Rating scale item clarification videos are also provided on this website. Additionally, an instructional email with a link to the website and the rating form is prepared for your convenience to send to teachers and parents. Please review Chapter 3 for more information on the easy and efficient administration process.

Description of the IMPACT Language Rating Scale

The IMPACT Language Rating Scale is a norm-referenced spoken language comprehension and spoken language rating scale for children and young adults ages 5 through 21 years. It is composed of 45 test items and has three separate forms to be completed by the clinician, parent(s), and teacher(s). It is an accurate and reliable assessment tool that provides valid results based on informal observations of spoken language, language processing and integration, and social interactions in the school and home environments. Normative data for the test are based on a nationally representative sample of 1064 children and young adults in the United States.

The Impact Model

The IMPACT model was developed based on current literature and examination of real-world challenges faced by individuals with speech and language impairments such as school demands and social interactions. This model was designed to analyze the real-life authentic observations of teachers, parents, and clinicians. The IMPACT model uses a contextualized, whole language approach to see the impact and the outcome of a speech and/or language impairment on education and social interactions.

Rating Scale Areas

The test is composed of five areas: spoken language comprehension, oral expression, language processing and integration, literacy, and social language skills.

Testing Format

The IMPACT Language Rating Scale is composed of 45 test items. Each item asks the rater to score an observed behavior on a 4-point scale (“never,” “sometimes,” “often,” and “typically”). The rating scale yields an overall percentile rank and standard score. While completing the checklist, raters can watch videos that guide them by providing specific examples of what each question is asking. The videos are there to help raters if they have any questions regarding the skill they are rating.
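
The online system scores the forms automatically, but the arithmetic behind a raw score is simple to sketch. The snippet below is a minimal illustration only: the 1-4 coding of the response labels and the example ratings are assumptions, not the published IMPACT scoring key.

```python
# Illustrative only: the 1-4 coding and the example ratings are assumptions,
# not the IMPACT scoring key, which is applied by the automated online scoring.
RESPONSE_CODES = {"never": 1, "sometimes": 2, "often": 3, "typically": 4}

def raw_score(responses):
    """Sum the 4-point ratings across the 45 items to get a raw total."""
    if len(responses) != 45:
        raise ValueError("Expected ratings for all 45 items")
    return sum(RESPONSE_CODES[r] for r in responses)

# Example: a completed form with mostly "often" ratings
ratings = ["often"] * 30 + ["sometimes"] * 10 + ["typically"] * 5
print(raw_score(ratings))  # 130
```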

Uses and Purpose

Clinicians, parents, and teachers can provide valuable information regarding a student’s understanding of spoken language, expressive language, language integration, literacy, and social language abilities. This information can help determine the areas in which the child has deficits and how deficits in these areas may impact the child in both the classroom and the home environment. The IMPACT Language Rating Scale should be used to evaluate children or young adults who have a suspected or previous diagnosis of a language disorder. This tool will assist in the identification or continued diagnosis of a spoken language comprehension and/or expressive language disorder. Additionally, this rating scale will help determine whether there are any educational or personal impacts. The results of the IMPACT Language Rating Scale provide clinicians with information on children’s and young adults’ ability to comprehend and use spoken language. By utilizing the IMPACT Language Rating Scale, we are able to develop a better understanding of how a student’s language abilities may impact their academic performance, progress in school, and social interactions.

Code of Federal Regulations-Title 34: Education

34 C.F.R. §300.7 Child with a disability.  (c) Definitions of disability terms. (11) Speech or language impairment means a communication disorder, such as stuttering, impaired articulation, a language impairment, or a voice impairment, that adversely affects a child’s educational performance.

The Individuals with Disabilities Education Act (IDEA, 2004) states that when assessing a student for a speech or language impairment, we need to determine whether the impairment negatively impacts the child’s educational performance. To determine whether a language impairment exists, we can collect a language sample from the individual and analyze the individual’s language abilities and the impact of the impairment on academic success.

Importance of Observations and Rationale for a Rating Scale

When evaluating an individual’s language abilities, the evaluation should include systematic observations and a contextualized analysis that involves multiple observations across various environments and situations (Westby et al., 2003). According to IDEA (2004), such informal assessments must be used in conjunction with standardized assessments. Sections 300.532(b), 300.533(a)(1)(i-iii), and 300.535(a)(1) of IDEA state that “assessors must use a variety of different tools and strategies to gather relevant functional and developmental information about a child, including information provided by the parent, teacher, and information obtained from classroom-based assessments and observation.” By using both formal and informal assessments, clinicians are able to capture a larger picture of a student’s language abilities. By observing a child’s language via informal observation, raters (i.e., clinician, teacher, and parent) can observe how the child understands and uses language (e.g., to express needs and wants, make requests, converse with peers/friends, etc.), as well as the potential impact a language disorder may have on the child’s academic and social life.

When we consider a formal spoken language comprehension and/or spoken language assessment, it may be difficult for clinicians to observe and gauge the impact that these deficits have on a student’s everyday life. Parent and teacher input is beneficial here because it allows observations to take place in authentic everyday settings. Additionally, these raters are already familiar with the child and may know what to look for, which helps create a true representation of the child’s language abilities. The IMPACT Language Functioning Rating Scale provides clinician, parent, and teacher observations and perspectives of a child’s understanding and use of language. When given guidelines for what to look for, parents are able to provide numerous examples of their child’s language abilities and the impact these deficits may have on the child. Difficulties in the various components of language may not be easily observed during clinical assessment and observation. Furthermore, it can be important to obtain information on how a child engages with family, friends, and peers during common tasks in order to obtain ecologically and culturally valid information on how the child functions and communicates on a daily basis (Jackson, Pretti-Frontczak, Harjusola-Webb, Grisham-Brown, & Romani, 2009; Westby, Stevens, Dominguez, & Oetter, 1996).

During assessment and intervention planning, it is important to consider how spoken language comprehension and spoken language abilities may adversely affect educational performance and a child’s social interactions. When compared to typically developing peers, children with language impairments are rated by their kindergarten teachers as being significantly less prepared in areas such as literacy, math, pro-social communication, and behavioral competence (Justice, Bowles, Pence Turnbull, & Skibbe, 2009). Previous research has suggested that language disorders can be detrimental to a child’s development, and children whose language falls behind their peers are at an increased risk of academic failure (Durkin, Conti-Ramsden, & Simkin, 2012; Johnson, Beitchman, & Brownlie, 2010), behavioral and psychiatric problems (Conti-Ramsden, Mok, Pickles, & Durkin, 2013; Snowling & Hulme, 2006), unemployment and economic disadvantage (Parsons, Schoon, Rush, & Law, 2011), and social impairment (Clegg, Hollis, Mawhood, & Rutter, 2005).

Contextual Background for Rating Scale Areas

Language impairment involves difficulty in the understanding and/or use of spoken, written, and/or other symbol systems. The disorder may involve: “(1) the form of language (phonology, morphology, syntax); (2) the content of language (semantics); and/or (3) the function of language in communication (pragmatics) in any combination” (ASHA, 2016). Listening comprehension is a higher-order skill that involves both language and cognitive abilities (Florit, Roch, & Levorato, 2013; Kim & Phillips, 2014; Lepola, Lynch, Laakkonen, Silven, & Niemi, 2012). Specifically, listening comprehension refers to one’s ability to comprehend spoken language (e.g., conversations, stories/narratives) by extracting and constructing meaning. Research has shown that listening comprehension is critical to reading comprehension (Foorman, Koon, Petscher, Mitchell, & Truckenmiller, 2015; Kim, 2015; Kim, Wagner, & Lopez, 2012; Kim & Wagner, 2015). When children present with reading comprehension deficiencies, there is a heavy focus on word recognition difficulties, including dyslexia and learning disabilities. Difficulties with word recognition are linked to weakness in the phonological domain of language and are often identified early on, in the preschool years (Catts, Fey, Zhang, & Tomblin, 2001). On the other hand, some children demonstrate reading comprehension difficulties despite adequate word reading abilities (Catts, Adlof, & Ellis Weismer, 2006; Nation, Clarke, Marshall, & Durand, 2004). This group of individuals is known as poor comprehenders. Poor comprehenders are able to read text accurately and fluently at age-appropriate levels; however, they have difficulty understanding what they are reading (Cain & Oakhill, 2007; Nation, 2005). For example, when reading, poor comprehenders have weaknesses in the areas of semantics and syntax (Catts, Adlof, & Ellis Weismer, 2006; Nation & Snowling, 1998; Nation, Snowling, & Clarke, 2007) and with more complex parts of language such as idioms, inferencing, comprehension monitoring, and knowledge of text structure (Oakhill, 1984; Cain & Towse, 2008; Cain, Oakhill, & Bryant, 2004; Oakhill & Yuill, 1996). Additionally, when we consider narrative comprehension, children with language disorders are less likely to provide correct answers to literal or inferential questions about stories that have been read to them (Gillam, Fargo, & Robertson, 2009; Laing & Kamhi, 2002). Since reading comprehension takes time to develop, it is difficult to demonstrate reading comprehension deficits in children before they are able to read accurately and fluently. Thus, these students’ reading comprehension deficits may go unnoticed until later grades. As such, it is critical that language deficits are identified as early in development as possible.

There is also a strong relationship between oral language abilities and reading ability (Hulme & Snowling, 2013). Nation, Clarke, Marshall, and Durand (2004) investigated poor comprehenders’ spoken language skills. The results of this study found that these students were less skilled than those in the typically developing group on semantic tasks (e.g., vocabulary and word knowledge), morphosyntax (e.g., past tense inflection, sentence comprehension), and aspects of language use (e.g., understanding figurative language). Research also suggests that students with expressive language difficulties are four to five times more likely than their peers to present with reading difficulties (Catts, Fey, Zhang, & Tomblin, 2001). For example, Zielinkski, Bench, and Madsen (1997) explored expressive language delays in preschoolers and found that these children were more likely to have difficulties with reading performance. Poll and Miller (2013) also reported that, by the time children are 8 years old, expressive language delays can be a significant risk factor for poor oral language and reading comprehension. Furthermore, Lee (2011) found that expressive language development predicts comprehension of reading passages in both third and fifth grade students. Vocabulary can also play an important role early on in development, as demonstrated in Duff, Reen, Plunkett, and Nation’s (2015) study, which found that infant vocabulary between 16 and 24 months is predictive of reading comprehension in the early school instruction years. Additionally, Pysridou, Eklund, Poikkeus, and Torppa’s (2018) study found that expressive language ability at age 2–2.5 years is associated with reading comprehension at ages 8–16 years.

Listening comprehension and oral language abilities can also be important when we consider writing development (Kim, Al Otaiba, Wanzek, & Gatlin, 2015; Hulme & Snowling, 2013). Children with language impairments have been found to show grammatical errors (Gillam & Johnston, 1992; Scott & Windsor, 2000; Windsor, Scott, & Street, 2000) and spelling errors in their written texts. The spelling errors are similar to those found in children with dyslexia (Puranik, Lombardino, & Altmann, 2007), however, an individual’s ability to create and think of new ideas appears to be specific to difficulties within the language system (Bishop & Clarkson, 2003; Puranik, Lombardino, & Altmann, 2007). Numerous studies have explored the difficulties that school-age children with language impairment have with telling stories. For example, when compared to typically developing children, children with language deficits tend to compose stories that contain fewer words and utterances (Moyano & McGillivray, 1988 [as cited in Hughes, McGillivray, & Schmidek, 1997]), fewer story grammar components (Paul, 1996), reduced sentence complexity (Gillam & Johnston, 1992), fewer complete cohesive ties (Liles, 1985), increased grammatical errors (Liles, Duffy, Merritt, & Purcell, 1995; Norbury & Bishop, 2003), and poorer overall story quality (Gillam, McFadden, & van Kleeck, 1995; McFadden & Gillam, 1996).

Over the last thirty years, there has been an abundance of research demonstrating that children with specific language impairment (SLI) are at a disadvantage when it comes to peer relationships (Durkin & Conti-Ramsden, 2010). Individuals with SLI engage less in active conversational interactions, enter less frequently into positive social interactions, demonstrate poorer discourse skills, are more likely to provide inappropriate verbal responses, and are less likely to influence others successfully (Hadley & Rice, 1991; Craig, 1993; Craig & Washington, 1993; Grove, Conti-Ramsden, & Donlan, 1993; Guralnick, Connor, Hammond, Gottman, & Kinnish, 1996; Brinton, Fujiki, & McKee, 1998; Vallance, Im, & Cohen, 1999). Children with SLI also tend to score lower in the areas of social skills and social cognitive abilities, and may have trouble with behavioral and emotion regulation (Cohen, Barwick, Horodezky, Vallance, & Im, 1998; Fujiki, Brinton, & Clarke, 2002; Marton, Abramoff, & Rosenzweig, 2005; Lindsay, Dockrell, & Strand, 2007). Additionally, children with language impairments are at higher risk of academic failure, social exclusion, and behavioral and emotional difficulties, and are more vulnerable to being bullied (Conti-Ramsden, Durkin, Simkin, & Knox, 2009; St Clair, Pickles, Durkin, & Conti-Ramsden, 2011). Lastly, children with language disorders are also at a heightened risk of exhibiting externalizing problems and antisocial conduct disorders (Beitchman, Wilson, Johnson, et al., 2001; Conti-Ramsden & Botting, 2004).

Description of Rating Scale Areas

Spoken Language Comprehension

The spoken language comprehension rating scale items look at how well an individual understands spoken language. For example, rating scale items look at a child’s ability to understand grade level stories, vocabulary, narratives, and his/her ability to answer questions regarding a given story. Additional test items in this area look at an individual’s ability to follow along with a conversation, lecture, or discussion, and the ability to recognize when something he/she hears does not make sense.

Sample Spoken Language Comprehension Item: After listening to a lesson, discussion, or story, is the student able to answer who, what, where, and when questions? For example, is the student able to recall the characters, setting, time, place, and what was happening in the story?

Oral Expression

The oral expression rating scale items look at how well an individual is able to use spoken language. For example, test items investigate if the individual is able to appropriately ask and answer questions, initiate conversations, use narrative storytelling, grade level vocabulary, correct word order, and grammar. Additional test items in this area look at an individual’s ability to add comments and questions to a conversation, maintain the topic, form thoughts and ideas, problem solve, negotiate, and use critical thinking skills.

Sample Oral Expression Item: Does the student experience difficulty asking or answering questions in class? For example, does he/she have trouble responding to teacher or peer comments during classroom activities?

Language and Literacy

The language and literacy rating scale items look at an individual’s ability to comprehend and understand what he/she is reading, to distinguish between the main idea and supporting details, and to use his/her own experiences to predict what might happen in grade-level stories. Additionally, literacy rating scale items look at an individual’s writing abilities.

Sample Language and Literacy Item: Does the student demonstrate an understanding of grade level stories and literature? For example, is the student able to follow along with stories that are read in class, and is he/she able to comprehend what is going on in the story?

Language Processing and Integration

The language processing and integration rating scale items look at how an individual follows multi-step instructions, understands figurative language, analogies, and inferences, and sequences details or events. Additionally, rating scale items look at whether an individual’s ability to comprehend and use spoken language impacts his/her reading abilities.

Sample Language Processing and Integration Item: Does the student have a difficult time making inferences/implied meaning from given information? For example, does the student have a difficult time “reading between the lines,” making connections, or drawing conclusions?

Social Interactions

The social interactions rating scale items look at how spoken language comprehension and use may impact an individual’s social interactions. For example, rating scale items may look at whether an individual is aware of his/her language deficits and how he/she expresses his/her feelings toward the language disorder. Additionally, rating scale items investigate an individual’s confidence regarding his/her communication and how this impacts his/her participation in conversations and activities with peers, friends, and family.

Sample Social Interactions Item: Does the student’s ability to understand and use language make it difficult for him/her to participate fully in school-related clubs or activities? For example, do the student’s language skills hold him/her back from joining the drama club or yearbook club?

Administration of the Rating Scale

Examiner Qualifications

Professionals who are formally trained in the ethical administration, scoring, and interpretation of assessment tools and who hold appropriate educational and professional credentials may administer the IMPACT Language Rating Scale. Qualified examiners include speech-language pathologists, school psychologists, special education diagnosticians and other professionals representing closely related fields. It is a requirement to read and become familiar with the administration, recording, and scoring procedures before using this rating scale and asking parents and teachers to complete the rating scales.

Confidentiality Requirements

As described in Standard 6.7 of the Standards for Educational and Psychological Testing (AERA et al., 2014), it is the examiner’s responsibility to protect the security of all testing material and ensure confidentiality of all testing results.

Eligibility for Testing

The IMPACT Language Rating Scale is appropriate for individuals between 5-0 and 21-0 years of age. This rating scale is designed for individuals who are suspected of having, or who have previously been diagnosed with, a language disorder. The rating scale also addresses the potential impact that a language disorder may have on a child.

EASY TO FOLLOW STEPS

STEP 1

Complete the CLINICIAN online rating form that will calculate student age and raw scores for you!

STEP 2

Email or Text links to the online rating form to TEACHER(S) and PARENT(S), and get the results back by email (or printed pdfs).

STEP 3

Easily convert scores and use our report generating widget to generate a ready-to-use write-up for your assessment report.

Theoretical Background of the IMPACT Language Rating Scale

Spoken language comprehension and oral expression refer to the understanding and use of spoken language across various contexts and social situations. Approximately 7% of children have deficits in language comprehension or language use, and these difficulties can persist into the school-age years and interfere with communication, academics, and social interactions (Tomblin, Records, Buckwalter, Zhang, Smith, & O’Brien, 1997). Longitudinal studies have revealed that language impairments that persist into school age remain in adolescence (Conti-Ramsden & Durkin, 2007) and adulthood (Johnson, Beitchman, & Brownlie, 1999; Clegg, Hollis, Mawhood, & Rutter, 2005), often with accompanying literacy deficits (Clegg, Hollis, Mawhood, & Rutter, 2005; Snowling & Hulme, 2000). Lindsay and Dockrell (2012) conducted a longitudinal study with adolescents who were identified as having specific language impairment (SLI) during the early primary grades. This study assessed the behavioral, emotional, and social difficulties of students in relation to self-concept, language, and literacy abilities over time. Participants were followed from 8 years old to 17 years old. Lindsay and Dockrell (2012) found that poor language and literacy skills continued, and peer and conduct problems increased over this age range. Joffee and Black (2012) explored behavioral, emotional, and social difficulties in young adolescents who, based on teacher report, were identified as having low language skills and/or poor academic achievement. These students had not been clinically diagnosed with a language disorder. The results of Joffee and Black’s (2012) study indicate that even subtle language problems can negatively impact school performance and social interactions. The researchers emphasized the need to identify and treat language weaknesses in all students so that all children can be properly supported.

There is a clear need for formal and informal assessment tools that aid in the identification of language disorders, because without appropriate assessment and intervention there can be serious negative impacts on a child’s development, education, and social interactions. Observations of a student’s language abilities in his/her natural educational environment, as well as teacher and parent observations of language functioning in educational settings, are fundamental when determining eligibility. Bishop and McDonald (2009) emphasize that when assessing children for language impairment, it is important to use both language test scores and parental report in order to provide complementary information for the evaluation. Spoken language comprehension and spoken language disorders can have adverse effects on various aspects of language development, as well as on academic performance and peer relationships. For example, a child who has difficulty understanding spoken language may find it difficult to follow along during classroom instruction and fall behind in classwork. Additionally, a child who has trouble understanding or using spoken language may have difficulty developing meaningful peer relationships and friendships, which could lead to a variety of other difficulties such as behavioral and emotional problems. By assessing students with the IMPACT Language Functioning Rating Scale, speech-language pathologists, teachers, and parents can observe children in their natural environments and identify those individuals who have a suspected or existing diagnosis of a language disorder, as well as the impact the language disorder may have on the child.

Standardization and Normative Information

The normative data for the IMPACT Language Functioning Rating Scale are based on the performance of 1064 examinees across 11 age groups (shown in Table 4.1) from 17 states across the United States of America (Arizona, California, Colorado, Nevada, Idaho, Illinois, Iowa, Kansas, Ohio, Minnesota, Florida, New York, Pennsylvania, South Carolina, Texas, Washington).

The data was collected throughout the 2018-2021 school years by 37 state licensed speech-language pathologists (SLPs). The SLPs were recruited through Go2Consult Speech and Language Services, a certified special education staffing company. All standardization project procedures were reviewed and approved by IntegReview IRB, an accredited and certified independent institutional review board. To ensure representation of the national population, the IMPACT Language Functioning Rating Scale standardization sample was selected to match the US Census data reported in the ProQuest Statistical Abstract of the United States (ProQuest, 2017). The sample was stratified within each age group by the following criteria: gender, race or ethnic group, and geographic region. The demographic table below (Table 4.2) specifies the distributions of these characteristics and shows that the normative sample is nationally representative.

Criteria for inclusion in the normative sample

A strong assessment is one that provides results that will benefit the individual being tested or society as a whole (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education [AERA, APA, and NCME], 2014). One way we can tell whether an assessment is strong is if the test includes adequate norms. Previous research has suggested that utilizing a normative sample can aid in the identification of a disability. Research has also suggested that the inclusion of children with disabilities in the normative sample may negatively impact the test’s ability to differentiate between children with disorders and children who are typically developing (Peña, Spaulding, & Plante, 2006). Since the purpose of the IMPACT Language Rating Scale is to help identify students who present with language disorders, it was critical to exclude from the normative sample students who have diagnoses that are known to influence language abilities (Peña, Spaulding, & Plante, 2006). Students who had previously been diagnosed with spoken language comprehension and/or spoken language disorders, auditory processing disorders, or articulation or phonological impairments were not included in the normative sample. Further, students were excluded from the normative sample if they were diagnosed with autism spectrum disorder, intellectual disability, hearing loss, neurological disorders, or genetic syndromes. To be included in the normative sample, students must have met the criterion of typical language development and shown no evidence of language deficits. Thus, the normative sample for the IMPACT Language Rating Scale provides an appropriate comparison group (i.e., a group without any known disorders that might affect language abilities) against which to compare students with suspected disorders.

The IMPACT Language Rating Scale is designed for students who are native speakers of English and/or are English language learners (ELL) who have demonstrated a proficiency in English based on state testing scores and school district language evaluations. Additionally, students who were native English speakers and also spoke a second language were included in this sample.

Norm-referenced testing is a method of evaluation in which an individual’s scores on a specific test are compared to the scores of a group of test-takers (e.g., age norms) (AERA, APA, and NCME, 2014). Clinicians can compare clinician, teacher, and parent ratings on the IMPACT Language Rating Scale to this normative sample to determine whether a student is scoring within normal limits or whether their scores are indicative of a language disorder. The administration, scoring, and interpretation procedures for the IMPACT Language Rating Scale must be followed in order to make comparisons to the normative data. This manual provides instructions to guide examiners in the administration, scoring, and interpretation of the rating scale.
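
As a hedged sketch of how a norm-referenced comparison works in general, the snippet below converts a raw score to a standard score and percentile rank. The normative mean and standard deviation for the age band and the mean-100/SD-15 standard-score metric are assumptions for illustration; the actual IMPACT conversions come from the scoring tables and the online scoring widget.

```python
# Generic norm-referenced conversion. The norm_mean/norm_sd values and the
# mean-100/SD-15 standard-score metric are placeholders, not IMPACT tables.
from math import erf, sqrt

def standard_score(raw, norm_mean, norm_sd, mean=100, sd=15):
    z = (raw - norm_mean) / norm_sd              # distance from the age-band mean
    return round(mean + sd * z)

def percentile_rank(score, mean=100, sd=15):
    z = (score - mean) / sd
    return round(100 * 0.5 * (1 + erf(z / sqrt(2))), 1)  # normal CDF

ss = standard_score(raw=118, norm_mean=130.0, norm_sd=12.0)  # hypothetical norms
print(ss, percentile_rank(ss))  # 85 15.9
```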

Validity and Reliability

This section of the IMPACT Language Functioning Rating Scale manual provides information on the psychometric characteristics of validity and reliability. Validity establishes how well a test measures what it is supposed to measure, and reliability represents the consistency with which an assessment tool measures a certain ability or skill. The first half of this chapter evaluates the content, construct, criterion, and clinical validity of the IMPACT Language Functioning Rating Scale. The latter half of the chapter reviews the consistency and stability of the IMPACT Language Functioning Rating Scale scores, including test-retest and inter-rater reliability.

Validity

Validity is essential when considering the strength of a test. Content validity refers to whether the test provides the clinician with accurate information on the ability being tested. Specifically, content validity addresses whether or not the test actually assesses what it is supposed to assess. According to McCauley and Strand (2008), there should be a rationale for the methods used to choose content, expert evaluation of the test’s content, and an item analysis.

Content-oriented evidence of validation addresses the relationship between a student’s learning standards and the test content. Specifically, content-sampling issues look at whether cognitive demands of a test are reflective of the student’s learning standard level. Additionally, content sampling may address whether the test avoids inclusion of features irrelevant to what the test item is intended to target.

Single-cut Scores

It is common to use single cut scores (e.g., -1.5 standard deviations) to identify disorders; however, there is evidence that advises against this practice (Spaulding, Plante, & Farinella, 2006). When using a single cut score (e.g., -1.5 SD, -2.5 SD, etc.), we may underidentify students with impairments on tests for which the best cut score is higher and overidentify impairments on tests for which the best cut score is lower. Additionally, using single cut scores may go against IDEA’s (2004) mandate, which states that assessments must be valid for the purpose for which they are used.

Sensitivity and Specificity

Table 5.1 shows the cut scores needed to identify language disorders within each age range. Additionally, this table demonstrates the sensitivity and specificity information that indicates the accuracy of identification at these cut scores. Sensitivity and specificity are diagnostic validity statistics that explain how well a test performs. Vance and Plante (1994) set forth the standard that for a language assessment to be considered clinically beneficial, it should reach at least 80% sensitivity and specificity.

Thus, strong sensitivity and specificity (i.e., 80% or stronger) are needed to support the use of a test for identifying the presence of a disorder or impairment. Sensitivity measures how well the assessment accurately identifies those who truly have a language disorder (Dollaghan, 2007). If sensitivity is high, the test is highly likely to identify the language disorder; that is, there is a low chance of “false negatives.” Specificity measures the degree to which the assessment accurately identifies those who do not have a language disorder, or how well the test identifies those who are “typically developing” (Dollaghan, 2007).
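
To make the two statistics concrete, the short sketch below computes them from classification counts at a given cut score. The counts are invented for illustration and are not results from the IMPACT validation studies.

```python
# Sensitivity/specificity from invented classification counts (not IMPACT data).
def sensitivity(true_pos, false_neg):
    """Proportion of examinees who truly have a language disorder that the cut score flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of typically developing examinees the cut score correctly clears."""
    return true_neg / (true_neg + false_pos)

print(sensitivity(true_pos=43, false_neg=7))    # 0.86 -> meets the 80% benchmark
print(specificity(true_neg=88, false_pos=12))   # 0.88
```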

Content Validity

The validity of a test determines how well the test measures what it purports to measure. Validity can take various forms, both theoretical and empirical, and often involves comparing the instrument with other measures or criteria that are known to be valid (Zumbo, 2014). To establish the content validity of the test, expert opinion was solicited. Thirty-one speech-language pathologists (SLPs) reviewed the IMPACT Language Functioning Rating Scale. All SLPs were licensed in the state of California, held the Certificate of Clinical Competence from the American Speech-Language-Hearing Association, and had at least 5 years of experience in the assessment of children with spoken language comprehension, spoken language, and social language disorders. Each of these experts was presented with a comprehensive overview of the rating scale descriptions, as well as the rules for standardized administration and scoring. They all reviewed 6 full-length administrations. Following this, they were asked 35 questions related to the content of the rating scale and whether they believed the assessment tool to be an adequate measure of language functioning. For instance, their opinion was solicited regarding whether the questions and the raters’ responses properly evaluated the impact of language disorders on educational performance and social interaction. The reviewers rated each rating scale on a decimal scale. All reviewers agreed that the IMPACT Language Functioning Rating Scale is a valid informal observational measure for evaluating language skills and determining the impact on educational performance and social interaction in students between the ages of 5 and 21 years. The mean ratings for the Clinician, Teacher, and Parent rating scales were 30.8±0.7, 28.8±0.8, and 27.6±0.9, respectively.

Construct Validity

Developmental Progression of Scores

Spoken language comprehension and spoken language are developmental in nature, and skills change with age. Mean raw scores for examinees should increase with chronological age, demonstrating age differentiation. Mean raw scores and standard deviations for the IMPACT Language Functioning Rating Scale are divided into eleven age intervals, displayed in Table 5.2 below.

Criterion Validity

In assessing criterion validity, a correlation analysis comparing the IMPACT Language Functioning Rating Scale to the current body of rating scales was not possible. The IMPACT Language Functioning Rating Scale is unique in its content and design, and it cannot be compared to existing rating scales because its focus is not addressed by those instruments.

Group Differences

Since a language assessment tool is designed to identify examinees with spoken language and spoken language comprehension impairments, it would be expected that individuals identified as likely to exhibit such impairments would score lower than those who are typically developing. The means for the outcome variables (Clinician, Teacher, and Parent ratings) were compared across the two clinical groups and the typically developing group of examinees using the Kruskal-Wallis analysis of variance (ANOVA). The level of significance was set at p≤0.05. Table 5.4 reviews the ANOVA, which reveals a significant difference among all three groups.
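
For readers who want to see what this analysis looks like in practice, the sketch below runs a Kruskal-Wallis test on invented rating totals for three groups. It is illustrative only and does not reproduce the data behind Table 5.4.

```python
# Kruskal-Wallis comparison of three groups on invented rating totals
# (illustrative only; the published analysis is summarized in Table 5.4).
from scipy.stats import kruskal

typical    = [128, 131, 125, 134, 129, 127]
comp_group = [102, 110,  98, 105, 101,  99]   # spoken language comprehension group
expr_group = [100,  97, 104,  96, 103, 101]   # expressive language group

h_stat, p_value = kruskal(typical, comp_group, expr_group)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")  # significant if p <= 0.05
```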

Inclusion/Exclusion Criteria for the Group Differences Study

Typically developing participants were selected based on the following criteria: 1) exhibited hearing sensitivity within normal limits; 2) presented with age-appropriate speech and language skills; 3) successfully completed each school year with no academic failures; and 4) attended public school and were placed in general education classrooms.

Inclusion criteria for the spoken language comprehension group were: 1) having a current diagnosis of a spoken language comprehension impairment (based on medical records and/or school-based special education eligibility criteria); 2) being enrolled in the general education classroom for at least 4 hours per day; and 3) exhibiting hearing sensitivity within normal limits.

Finally, the inclusion criteria for the expressive language impairment group were: 1) having a current diagnosis of a spoken language impairment or delay (based on medical records and/or school-based special education eligibility criteria); 2) being enrolled in the general education classroom for at least 4 hours per day; and 3) exhibiting hearing sensitivity within normal limits.

Standards for fairness

Standards of fairness are crucial to the validity and comparability of the interpretation of test scores (AERA, APA, and NCME, 2014). The identification and removal of construct-irrelevant barriers maximizes each test-taker’s performance, allowing skills to be compared to the normative sample for a valid interpretation. The test constructs, and the individuals or subgroups for whom the test is intended, must be clearly defined. In doing so, the test will be as free of construct-irrelevant barriers as possible for those individuals and/or subgroups. It is also important that simple and clear instructions are provided.

Response Bias

A bias is defined as a tendency, inclination, or prejudice toward or against something or someone. For example, if you are interviewing with a prospective employer and are asked to complete a personality questionnaire, you may answer the questions in a way that you think will impress the employer. These responses will, of course, affect the validity of the questionnaire.

Responses to questionnaires, tests, scales, and inventories may also be biased for a variety of reasons. Response bias may occur consciously or unconsciously, it may be malicious or cooperative, self-enhancing or self-effacing (Furr, 2011). When response bias occurs, the reliability and validity of our measures is compromised. Diminished reliability and validity will in turn impact decisions we make regarding our students (Furr, 2011). Thus, psychometric damage may occur because of response bias.

Types of Response Biases

Acquiescence Bias (“Yea-Saying and Nay-Saying”) refers to when an individual consistently agrees or disagrees with a statement without considering what the statement means (Danner & Rammstedt, 2016).

Extremity Bias refers to when an individual consistently overuses or underuses “extreme” response options, regardless of how the individual feels toward the statement (Wetzel, Lüdtke, Zettler, & Bohnke, 2016).

Social desirability Bias refers to when an individual responds to a statement in a way that exaggerates his or her own positive qualities (Paulhus, 2002).

Malingering refers to when an individual attempts to exaggerate problems or shortcomings (Rogers, 2008).

Random/careless responding refers to when an individual responds to items with very little attention or care to the content of the items (Crede, 2010).

Guessing refers to when the individual is unaware of or unable to gauge the correct answer regarding their own or someone else’s ability, knowledge, skill, etc. (Foley, 2016).

In order to protect against biases, balanced scales are utilized. A balanced scale is a test or questionnaire that includes some items that are positively keyed and some items that are negatively keyed. For example, the IMPACT Language Functioning Rating Scale items are rated on a 4-point scale (“never,” “sometimes,” “often,” and “typically”). Now, imagine we ask a teacher to answer the following two items regarding one of their students:

  1. The student appears confident when asking and answering questions in the classroom.
  2. The student does not appear to experience difficulty when asking and answering questions in class.

Both of these items are positively keyed because a positive response indicates a stronger level of confidence in language ability. To minimize the potential effects of acquiescence bias, the researcher may revise one of these items to be negatively keyed. For example:

  1. The student appears confident when asking and answering questions in the classroom.
  2. The student appears to experience difficulty when asking and answering questions in class.

Now, the first item is keyed positively and the second item is keyed negatively. The revised scale, which represents a balanced scale, helps control acquiescence bias by including one item that is positively keyed and one that is negatively keyed. If the teacher responded highly on both items, the teacher may be viewed as an acquiescent responder (i.e., the teacher is simply agreeing to items without regard for the content). If the teacher responds high on the first item, and responds low on the second item, we know that the teacher is reading each test item carefully and responding appropriately.

For a balanced scale to be useful, it must be scored appropriately, meaning the key must accommodate the fact that there are both positively and negatively keyed items. To achieve this, the rating scale must keep track of the negatively keyed items and “reverse the score.” Scores are only reversed for negatively keyed items. For example, on the negatively keyed item above, if the teacher scored a 1 (“never”) the score should be converted to a 4 (“typically”) and if the teacher scored a 2 (“sometimes”) the score should be converted to a 3 (“often”). Similarly, the researcher recodes responses of 4 (“typically”) to 1 (“never”) and 3 (“often”) to 2 (“sometimes”).  Balanced scales help researchers differentiate between acquiescent responders and valid responders. Therefore, test users can be confident that the individual reporting is a reliable and valid source.
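
The reverse-scoring step described above can be sketched as follows. Which item numbers are negatively keyed is a hypothetical placeholder here, since the actual key is built into the scoring software.

```python
# Reverse-score negatively keyed items on the 4-point scale (1<->4, 2<->3).
# The set of negatively keyed item numbers is hypothetical.
REVERSE = {1: 4, 2: 3, 3: 2, 4: 1}
NEGATIVELY_KEYED = {2, 7, 13}

def score_item(item_number, rating):
    """Return the keyed rating: reversed only if the item is negatively keyed."""
    return REVERSE[rating] if item_number in NEGATIVELY_KEYED else rating

# Item 2 rated 1 ("never") counts as 4; item 1 rated 1 stays 1.
print(score_item(2, 1), score_item(1, 1))  # 4 1
```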

Inter-rater Reliability

Inter-rater reliability measures the extent to which consistency is demonstrated between different raters with regard to their scoring of examinees on the same instrument (Osborne, 2008). For the IMPACT Language Functioning Rating Scale, inter-rater reliability was evaluated by examining the consistency with which the raters are able to follow the test scoring procedures. Two clinicians, two teachers, and two caregivers simultaneously rated students. The results of the scorings were correlated. The coefficients were averaged using the z-transformation method. The resulting correlations for the subtests are listed in Table 5.5.
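
The z-transformation averaging mentioned above refers to Fisher's r-to-z method. The sketch below shows the computation with invented correlation values rather than the Table 5.5 coefficients.

```python
# Average correlation coefficients via Fisher's r-to-z transformation
# (the values here are invented, not the Table 5.5 coefficients).
import math

def average_correlations(rs):
    zs = [math.atanh(r) for r in rs]        # r -> z
    return math.tanh(sum(zs) / len(zs))     # mean z -> back to r

print(round(average_correlations([0.84, 0.79, 0.88]), 3))
```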

Test-Retest Reliability

Test-retest reliability is determined by the variation between scores when the same subjects take the same test at different points within a given period of time. If the test is a strong instrument, this variation is expected to be low (Osborne, 2008). The IMPACT Language Functioning Rating Scale was completed for 68 randomly selected examinees, ages 5-0 through 21-0, over two rating periods. The interval between the two periods ranged from 12 to 20 days. To reduce recall bias, the examiners did not inform the raters at the time of the first rating session that they would be rating again. All subsequent ratings were completed by the same examiners who administered the test the first time. The test-retest coefficients for the three rating scales were all greater than .80, indicating strong test-retest reliability for the IMPACT Language Functioning Rating Scale. The results are listed in Table 5.6.
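
As an illustration of the test-retest computation, the sketch below correlates scores from two hypothetical rating periods; the invented values are not the Table 5.6 data.

```python
# Correlate invented scores from two rating periods 12-20 days apart
# (illustrative only; the published coefficients appear in Table 5.6).
from scipy.stats import pearsonr

time1 = [112, 98, 130, 105, 121, 94, 117]
time2 = [110, 101, 127, 108, 119, 97, 115]

r, _p = pearsonr(time1, time2)
print(f"test-retest r = {r:.2f}")  # values above .80 indicate strong stability
```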
