Imagine has used the concept of student growth for many years as an integral part of its educational philosophy and as the basis of its school and teacher evaluation and accountability program. Indeed, Imagine has been a leader in the trend away from relying solely on static performance measures and toward measures of student gain.
The structure of the learning gain formula currently used to measure growth was developed early on and revised in 2008, at a time when no other, more sophisticated measures of student gain were readily available. When the STAR Renaissance Computer Adaptive Test (CAT) system was implemented, the decision was made to keep the existing learning gains measure as the Imagine standard, in order to reduce confusion and to reassure schools that the basics would stay the same during the major move from a paper-and-pencil fixed-length test to a CAT. Renaissance's Student Growth Percentile (SGP) was easily ignored on most reports by choosing the option not to print it.
The current learning gain model has significant benefits: it is simple to calculate, it can be averaged across any group, and it has become the lingua franca of Imagine Schools. The model measures growth by comparing a student to his or her grade-level peers in a norming population at both the beginning and the end of the assessment period. The student's change in relative position within this norming population is the basis for the learning gain calculation.
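The exact Imagine learning gain formula is not reproduced here, but the underlying idea of a change in relative position can be sketched. This hypothetical Python example converts percentile ranks to Normal Curve Equivalents (NCEs, an equal-interval scale with mean 50 and standard deviation of about 21.06) and takes the Fall-to-Spring difference; the specific percentiles are invented for illustration only.

```python
from statistics import NormalDist

def percentile_to_nce(pct):
    """Convert a percentile rank (0-100, exclusive) to a Normal Curve
    Equivalent: an equal-interval scale with mean 50 and SD ~21.06."""
    return 50 + 21.06 * NormalDist().inv_cdf(pct / 100)

# Hypothetical student: 30th percentile in Fall, 40th percentile in Spring.
fall_nce = percentile_to_nce(30)    # about 39.0
spring_nce = percentile_to_nce(40)  # about 44.7
nce_change = spring_nce - fall_nce  # about +5.7 NCE points of relative growth
print(round(fall_nce, 1), round(spring_nce, 1), round(nce_change, 1))
```

Because NCEs are an equal-interval scale, differences like this can be averaged across a class or a school, which is what makes the model so easy to work with.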
However, the learning gain model has a number of issues that more recent models were designed to overcome. Because its scores are based on the entire norming population, placed on a normal curve, students at the low end of the curve have a long way to go (making upward movement relatively easy), while students at the upper end find it difficult to continue an upward progression. When students score high enough to approach the NCE ceiling, we must use off-level scoring to remove the ceiling effect. This is exacerbated by the statistical concept of regression toward the mean: if a student were to take the same assessment twice with no instruction in between, a student with a low score would have a much higher than 50% probability of scoring higher the second time, and a student with a high score would have a much higher than 50% probability of scoring lower. In other words, scores tend to move toward the mean due to random factors alone. Together, these issues mean that the learning gain model favors schools with low-performing students and is biased against schools with high-performing students.
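Regression toward the mean is easy to see in a small simulation. The sketch below uses hypothetical abilities and noise levels (not actual STAR data): each simulated student has a fixed true ability and takes the same test twice with independent measurement noise and no instruction in between. Students who scored low the first time mostly score higher the second time, and vice versa, purely by chance.

```python
import random

random.seed(0)

# Hypothetical setup: each simulated student has a fixed true ability;
# each sitting of the test adds independent measurement noise, and no
# instruction occurs between the two sittings.
abilities = [random.gauss(500, 50) for _ in range(100_000)]
first = [a + random.gauss(0, 30) for a in abilities]
second = [a + random.gauss(0, 30) for a in abilities]

# Among students who scored low the first time, most improve on the retest;
# among students who scored high, most decline.
low_improved = [s > f for f, s in zip(first, second) if f < 450]
high_improved = [s > f for f, s in zip(first, second) if f > 550]
print(f"low scorers who improved:  {sum(low_improved) / len(low_improved):.2f}")
print(f"high scorers who improved: {sum(high_improved) / len(high_improved):.2f}")
```

The improvement rates land well above 50% for the low group and well below 50% for the high group, even though no student's true ability changed.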
The SGP, utilized by Renaissance as well as many states, resolves these issues by comparing students to peers of the same grade level and scale score rather than to the full grade-level population. The SGP depends on the availability of a very large number of assessment results, modern statistical methods, and significant computing power; it cannot be calculated simply by looking at a student's norm-referenced scores. For this reason, an SGP is available only for the time periods over which Renaissance calculates it. At this time, Renaissance provides a current SGP, which is Fall to present. By the end of the year, Renaissance will also provide a Fall-to-Fall and a Spring-to-Spring SGP (when appropriate).
The SGP represents a student’s relative position on a subsequent test among his or her peers who are in the same grade and had the same score on a previous test. For example, Pamela is in the third grade and received a 320 scale score on the Fall reading assessment. Her comparison population (her peers) would be all third-grade students who received a 320 scale score in Fall. In Spring she received a scale score of 430, resulting in an SGP of, say, 42. This would mean that she demonstrated more growth than 42% of her peers (and less than 58%). Normal (or expected) growth is represented by an SGP of 50. Because students are compared with their grade AND performance-level peers, the result is a much more valid and fair indicator of relative growth when used to evaluate students, classes, groups, grade levels, and schools. Recent enhancements to the SGP model prioritize available data points to make the best use of information across time, factoring a student’s current test score (the posttest) and up to two prior test scores (the pretest and, if available, an additional prior test) into the calculation.
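Renaissance's actual SGP calculation relies on more sophisticated statistics over a very large dataset, but Pamela's example can be illustrated conceptually. In the sketch below the peer scores are invented, and the SGP is approximated simply as the percentage of same-grade, same-pretest peers whose Spring score the student exceeded.

```python
def sgp(student_post, peer_posts):
    """Conceptual SGP: the percentage of grade-and-pretest peers whose
    post-test score the student exceeded. Illustration only -- the real
    Renaissance calculation uses modern statistical methods over a very
    large number of student records."""
    below = sum(1 for score in peer_posts if score < student_post)
    return round(100 * below / len(peer_posts))

# Hypothetical Spring scores for third graders who all scored 320 in Fall.
peer_spring_scores = [400, 410, 415, 425, 428, 435, 440, 450, 460, 470]
print(sgp(430, peer_spring_scores))  # 50: grew more than half of these peers
```

With a real peer group of thousands of students, this same ranking idea is what lies behind Pamela's SGP of 42: 42% of third graders who also scored 320 in Fall ended up below her 430 in Spring.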
Learning gain and SGP scores are both measures of growth and are highly correlated. However, they are not directly linked. That is, given an SGP, one cannot compute a learning gain and vice versa. Particularly among high performing and low performing students, it is possible that the learning gain and the SGP will have different relationships to “normal” (e.g. a learning gain less than 1.0 and an SGP greater than 50 for the same student). With the conversion to STAR 360 and the newly developed dashboard reports, Renaissance has made much more extensive use of the SGP in its reports and trend analyses. I am concerned that this will result in more opportunities for confusion.
For an excellent set of discussions regarding the Student Growth Percentile, I suggest that you watch the videos at:
For even more information, here is the Renaissance white paper on SGPs.
Benefits of using SGP
- Statistically superior measure of growth
- Consistent ability of all students to demonstrate growth, regardless of their Fall NCE score
- Fairness to all schools, especially schools with high performing students
- Reduction of confusion between Renaissance Reports and Imagine language
- Closer relationship between the Imagine measure of growth and the state measures of growth, particularly when we have a Spring to Spring SGP available
- Easier explanations to parents, boards, etc. (an SGP of 60 means that the student demonstrated growth greater than 60% of his/her grade level and academic peers)