When “In the Red” Isn’t the Full Story
- Dana Clark
- 4 days ago
- 3 min read
Winter in the primary grades means an extra ten minutes to bundle up before heading outside, big gap-toothed grins at the door, and… midyear assessments.
Now that February has rolled around, many of our partner schools are eager to share their benchmark data. For some, the results reflect remarkable gains, and we celebrate classrooms filled with eager young readers. For others, the screen is a sea of red, and a wave of panic sets in. We may even feel blindsided by the data. After all, we’ve been following our foundational skills programs to the letter, and we’ve seen growth! Why isn’t that growth reflected in the benchmark?
What often happens next is a deep dive into the data. We ask: Where do students need support? How do we move forward? And please know, I am not suggesting we skip this. After all, assessments are designed to help guide us. But what if jumping straight to the next steps causes us to miss an important part of the learning story? And what if the results are NOT only about our students, but also about the program?
Before deciding that students and teachers are falling short, let’s pause to explore what teachers were asked to teach and what the benchmark assessment measured, and see if they match.
In one school, midyear data revealed that many students fell “in the red” on the nonsense word portion of a common foundational skills screener given in early January. However, when we looked at the phonics program in use, we saw that blending sounds was not introduced to students until December. Students had only a couple of weeks to practice the skill before being assessed on it.
In another school, the data painted a picture of widespread gaps in sound knowledge. Yet when we took a closer look, we realized the sounds students missed simply had not yet been introduced in the program’s scope and sequence.
Sometimes data does not highlight a lack of learning but a lack of alignment. Program fidelity, on its own, does not guarantee that students will perform well on every benchmark, and when assessments and programs are not aligned, even strong instruction can appear ineffective.
So what do we do? We can’t change the assessment, but we can make informed shifts in our instruction. If your scores surprised you, here are a few steps to consider:
- Compare the assessed skills to your program’s scope and sequence. Look for skills or concepts that were not yet introduced, or that were introduced only briefly before the assessment.
- Make intentional adjustments. If your school community agrees that certain foundational skills are essential, introduce those concepts earlier and provide additional practice. For example, once students know about six sounds, including at least one vowel, they can begin practicing segmenting and blending those sounds.
- Embed practice into existing routines. Look for natural opportunities during word study and reading times to reinforce those concepts and skills. For example, introduce a sound earlier and then embed practice into Shared Reading cycles or a three-part drill.
Data is important. It can help us identify students who do need additional support and guide our planning. But it can’t be looked at in isolation. To truly understand what our youngest readers need, we must connect the data to what has been taught, how it was taught, and when it was taught.
When assessments and programs aren’t playing nicely, it doesn’t mean anyone has failed. It simply means that we need to look at the whole picture, and then move forward with clarity and care.