Individual Student Prediction

National Academic and Character Team | Databits

Why are my predictions frequently incorrect?

In an earlier blog post, we discussed the difficulties involved in making individual student predictions, especially when we are attempting to make pass/fail predictions.

Great care must be utilized when attempting to predict an individual student’s performance on a state test. Many idiosyncratic issues or occurrences can influence a student’s score on a single test, and these sources of error combine when one assessment is used to predict another. When we are reducing the information in both assessments into a dichotomy (predict pass, predict fail to pass), these potential errors are magnified. (from STAR Results and Proficiency Scores)

It is also true that:

teachers who work with ALL students using ALL of the information available (including STAR, focus assessments, checks for understanding, personal knowledge, etc.) and moving them as far as possible will be a winning strategy to optimize the results on any state test. (from STAR Results and Proficiency Scores)

However, it is only natural to want to have an idea of whether or not Johnny will pass the state test. The typical means of reporting such predictions is a simple dichotomy: predict pass (on track) or predict fail (not on track). This approach is too simplistic and leads to frustration when the predictions appear to be incorrect.

Back when the weather service was putting out predictions of “it will (or will not) rain tomorrow” it was frequently the target of humor and developed a reputation for inaccuracy. When the weather service began to consistently provide forecasts in terms of probability of precipitation (POP), the true nature of mathematical prediction was presented and generally understood by the public. A prediction that there is a 30% chance of rain at noon tomorrow is vastly more helpful than collapsing that into a “no rain” forecast. Each person can make decisions based on a better understanding of the risks involved.

Our understanding of predicting a student’s success needs to evolve into a reflection of the true nature of prediction. Looking at prediction as a dichotomy (will pass, will not pass) completely ignores much of the information available as well as the sources of inaccuracy not included in the model (individual student circumstances). This view implies that there is such a thing as a sure thing. Even a top student can do poorly on one assessment due to a large number of potential factors. It should also seem obvious that a student with a high NCE score has a greater likelihood of passing a state test than a student with a low NCE score and that there is no magic line separating the two. By utilizing a probability to predict a student’s potential for passing a future test we focus our thinking on the fact that even a student who scores high on STAR can fail the state test and a student who scores low on STAR can pass the state test. The probabilities (not certainties) of these things happening are just relatively low.

We have the data to build a mathematical model that will result in a probability statement for a student passing the state test for our large states. A set of tables linking the NCE score to a probability of passing the state test in AZ, FL, and OH is included at the end of this blog post. It is important to note that these probabilities are estimates based on the performance of Imagine students during the 2015-16 academic year. These relationships may change somewhat from year to year, but the pattern of increasing probability of passing the state test as a student’s NCE score on STAR increases will not. For schools in other states, the patterns are very similar and vary according to the proficient/not proficient cut point used in each state: states with a low proportion passing would have lower probabilities, and states with a high proportion passing would have higher ones.
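The post does not say which model was used to build the tables, so treat the snippet below as a minimal sketch of the general idea only: one common way to turn historical (NCE, passed) pairs into a pass-probability curve is logistic regression. The data and variable names are hypothetical placeholders, not Imagine results.

```python
# A minimal sketch, NOT the model behind the tables in this post: fit a
# logistic regression to hypothetical (NCE, passed) pairs and read off a
# pass probability for a new NCE score.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical records: STAR NCE score and whether the student later passed
# the state test (1 = passed, 0 = did not pass).
nce = np.array([[12], [25], [33], [41], [48], [55], [63], [72], [80], [91]])
passed = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])

model = LogisticRegression().fit(nce, passed)

# Estimated probability of passing for a student with an NCE of 44.
print(f"{model.predict_proba([[44]])[0, 1]:.0%}")
```

Refitting a model like this on each new year’s data is what would make the tabled probabilities shift somewhat from year to year, while the underlying pattern (probability rises with NCE) stays the same.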

Therefore, in Arizona, rather than saying that Julia, who scored a PR of 68 on STAR Reading, is “on track” to pass the state test, it is more meaningful to report that Julia has a 65% probability of passing the state test (rounding to the closest value in the table). Similarly, if Frank scores an NCE of 44 on STAR Math, a helpful way of communicating his potential on the state test is to indicate that he has a 31% probability of passing. These probabilities should be interpreted as a student’s probability of passing barring significant individual events (e.g., the student is sick or has a traumatic family event on the day of either assessment).
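For anyone who wants to automate the “round to the closest value in the table” rule, here is a small sketch using the Arizona Mathematics table below. The function name is ours, and the “<2%” entries are stored as 0.02 purely as an upper bound.

```python
# Sketch of the "round to the closest value in the table" lookup, using the
# Arizona Mathematics table from this post. "<2%" entries are stored as 0.02
# here as an upper bound.
AZ_MATH = {
    5: 0.02, 10: 0.02, 15: 0.02, 20: 0.02, 25: 0.02, 30: 0.07, 35: 0.15,
    40: 0.23, 45: 0.31, 50: 0.39, 55: 0.47, 60: 0.55, 65: 0.63, 70: 0.71,
    75: 0.79, 80: 0.87, 85: 0.95, 90: 0.99, 95: 0.99,
}

def probability_of_passing(nce, table=AZ_MATH):
    """Return the tabled probability for the NCE value closest to `nce`."""
    closest = min(table, key=lambda tabled_nce: abs(tabled_nce - nce))
    return table[closest]

# Frank's STAR Math NCE of 44 rounds to the tabled NCE of 45, i.e. 31%.
print(f"{probability_of_passing(44):.0%}")
```

The same function works with any of the state tables that follow; only the dictionary of tabled values changes.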

ARIZONA

              Probability of Proficient
NCE     Reading/ELA     Mathematics
 5      <2%             <2%
10      <2%             <2%
15      <2%             <2%
20      <2%             <2%
25      6%              <2%
30      14%             7%
35      22%             15%
40      30%             23%
45      38%             31%
50      46%             39%
55      54%             47%
60      62%             55%
65      70%             63%
70      78%             71%
75      86%             79%
80      94%             87%
85      99%             95%
90      99%             99%
95      99%             99%

FLORIDA

              Probability of Proficient
NCE     Reading/ELA     Mathematics
 5      <2%             <2%
10      <2%             <2%
15      <2%             3%
20      3%              11%
25      11%             18%
30      19%             26%
35      27%             34%
40      35%             42%
45      43%             49%
50      51%             57%
55      59%             65%
60      67%             72%
65      75%             80%
70      83%             88%
75      91%             95%
80      99%             99%
85      99%             99%
90      99%             99%
95      99%             99%

OHIO

              Probability of Proficient
NCE     Reading/ELA     Mathematics
 5      <2%             <2%
10      <2%             <2%
15      <2%             3%
20      7%              10%
25      15%             18%
30      22%             25%
35      30%             33%
40      37%             40%
45      45%             48%
50      52%             55%
55      60%             63%
60      67%             70%
65      75%             78%
70      82%             85%
75      90%             93%
80      97%             99%
85      99%             99%
90      99%             99%
95      99%             99%

Comments (5)

  1. Quick question. For students with fluctuating NCE scores, would you average the scores, or pick the mean or highest NCE score, for purposes of predicting probability on state standardized testing?

  2. (Post author) Barbara Brunner, regarding your question on March 1st at 3:27 p.m.:

    Pick the latest score unless you have reason to believe that it is not representative of the student’s true ability (i.e., unusual compared with the student’s other scores). A sketch of this rule appears after the comments.

    1. Here are two examples of what I was referring to. Student A has NCE scores of 23, 24, 27, 24, 26, 29, 31, 36, 42, 33, 26.
      Student B has NCE scores of 20, 20, 13, 25, 10, 6, 6, 32, 26, 21, 1, 1, 33, 1. Student B’s scores bounce around enough that if I choose the 1, they have a <2% chance, but if I go with the 32, they have a much higher probability of passing.

      1. Any assessment score is an estimate, and NCE scores are no different. It looks like Student A is improving overall; using the probabilities associated with an NCE of 30–33 would get you in the ballpark of their probability of passing. It looks like Student B is getting test-weary with all of the single-digit scores. Try to encourage the student to do his best on the next assessment.

  3. Being able to craft a probability statement that includes a percentage really does help the classroom teacher plan even more precisely with specific intervention strategies. Specific and intentional interventions do assist with student growth over time.
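As a footnote to the exchange above, here is a hypothetical sketch of the “use the latest score unless it looks unusual” rule from the author’s reply. The 1.5-standard-deviation cutoff and the function name are assumptions for illustration; they are not part of the original advice.

```python
# Hypothetical sketch of the rule from the author's reply: use the most recent
# NCE unless it is unusual compared with the student's other scores. The
# cutoff (latest score more than 1.5 sample standard deviations from the mean
# of the earlier scores) is an assumption, not part of the post.
from statistics import mean, stdev

def nce_for_lookup(nce_history):
    """Return (latest NCE, flag indicating whether it looks unrepresentative)."""
    *earlier, latest = nce_history
    if len(earlier) < 2:
        return latest, False
    unusual = abs(latest - mean(earlier)) > 1.5 * (stdev(earlier) or 1.0)
    return latest, unusual

latest, unusual = nce_for_lookup([40, 42, 45, 44, 12])
if unusual:
    print(f"Latest NCE of {latest} looks unrepresentative; review the history first.")
else:
    print(f"Use the latest NCE of {latest} with the probability tables.")
```

This only flags a possibly unrepresentative score; the judgment about which score to trust still rests with the teacher, as the replies above make clear.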
