Friday, April 20, 2018
What do we say about our students? Do our values align with the words we use? Do they reflect what parents think is important about what happens in the classroom?
In this data story, we take a closer look at 3,694 comments written about 2,862 K - 5 students on their winter report cards.
Similar to last time, there wasn't anything especially fancy in terms of getting the data. We have reports in our student information system that will gather the information and spit it out into a spreadsheet. After that, I added student demographics and program information from another student file using trusty old INDEX/MATCH.
The big challenge was getting the data clean...or, at least, cleanish. You see, I didn't want student names. Why not? In part because I wanted to make some of the data available to others. This means I needed to strip out identifying information from the text. Also, the names interfered with some of the frequency counting and comparisons I wanted to make. (Aside: Do you realize how many kids go by the name "Maddie"? I didn't.)
I did a first pass using the SUBSTITUTE function in Excel. I had Excel replace any occurrences of a student's first name with "" to blank it out. However, this only worked when a teacher used the actual name of the student. Many kids go by nicknames, shortened versions of their names, first and middle names, etc. I'm sure there must be better ways than looking through things row by row, but that's what I ended up doing.
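If I do this again, I'll probably script the scrubbing rather than work row by row. A minimal sketch of the idea, assuming you can assemble a list of each student's known names and nicknames (the nickname problem doesn't go away, but at least the matching is whole-word and case-insensitive):

```python
import re

def scrub_names(comment, names):
    """Blank out any of a student's known names/nicknames in a comment.

    `names` is a list like ["Madeline", "Maddie"]; whole-word,
    case-insensitive matching avoids clobbering unrelated words.
    """
    for name in names:
        comment = re.sub(r"\b%s\b" % re.escape(name), "", comment,
                         flags=re.IGNORECASE)
    # Collapse the double spaces left behind by the removals.
    return re.sub(r"\s{2,}", " ", comment).strip()
```

This is the same move as SUBSTITUTE, just applied once per known alias instead of once per official first name.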
After the spreadsheets were all cleaned up and ready for church, I looked at some different options for doing the text analysis. I don't have any real experience with this, and while I looked at some fancy options like Overview, KH Coder, and Emosaic, I just didn't have the time to devote to digging into them right now. Instead, I used the WordCounter and SameDiff options over at DataBasic.io.
The WordCounter provided the basis for the word cloud you see in the picture at the top of this page. I used SameDiff to compare lists of comments for male and female students, for example.
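Under the hood, this kind of comparison amounts to intersecting and differencing word-frequency lists. Here's a toy version in Python so you can see the mechanics; the stopword list is just a placeholder, and DataBasic.io's actual processing is surely more sophisticated:

```python
import re
from collections import Counter

# Placeholder stopword list; a real one would be much longer.
STOPWORDS = {"a", "an", "and", "the", "is", "are", "to", "in",
             "of", "with", "she", "he", "her", "his"}

def word_counts(comments):
    """Count word frequencies across a list of comments."""
    counts = Counter()
    for comment in comments:
        counts.update(w for w in re.findall(r"[a-z']+", comment.lower())
                      if w not in STOPWORDS)
    return counts

def same_diff(group_a, group_b):
    """Return (shared, only_in_a, only_in_b) word sets, SameDiff-style."""
    a, b = word_counts(group_a), word_counts(group_b)
    return (a.keys() & b.keys(), a.keys() - b.keys(), b.keys() - a.keys())
```

The "only in group" lists are what surface the interesting (and sometimes depressing) vocabulary differences.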
There are also comparisons for students who receive special services (vs. those who don't), students eligible for free/reduced lunch (vs. those who aren't), and students of colour (vs. white).
I also used a couple of pivot tables in Excel to summarize and sort through the data—for example, the total number of comments per grade level or per student population.
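Those pivot-table counts are just as easy to reproduce in a script, if you'd rather not open Excel. A quick sketch, again with hypothetical column names:

```python
from collections import Counter

def comments_per_group(rows, key):
    """Pivot-table-style count of comment rows by any grouping column.

    `rows` are dicts (e.g. from csv.DictReader); `key` is a column
    name like "grade" — both are illustrative, not real headers.
    """
    return Counter(row[key] for row in rows)
```

`Counter.most_common()` then gives you the sorted summary for free.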
Compared to the last few data stories we've built out in the hallway, this one is less complicated. There's a lot of paper and stickers, with some foam to help provide dimension to the word cloud.
I knew I wanted the background to be yellow...something bright for spring, but neutral enough that the black lettering could pop. We put the word cloud in the center of the board. It has the 50 most commonly used words. On the outside, we have the four pairs of lists with words that are only found in comments for students in a particular group. The list for our students who receive special services is particularly depressing.
But wait, there's more...
This is our first data story which uses two boards. On the second board, we have information for our students in secondary grades (6 - 12). There are two middle schools and two high schools. Teachers have a list of "canned" comments at each school that they can assign—two per class per grading period—as opposed to the freeform comments elementary teachers create. For these students, we did some simple counts of how many comments per student and then underneath those charts are lists of the most common and least common comments selected. On the right of the board, we have an area for people to leave comments for us.
This second board isn't as sexy as the one for elementary, but I'm still excited that we have represented something for every school and every K - 12 student (even if they received no comments).
This is one project where I would have loved to have rejected the null hypothesis: the idea that there isn't any difference between student groups. But even with this very basic analysis, I couldn't. Even though most of the text is pretty much the same across student groups at the elementary level, the bottom line is there are some differences in how we talk about boys and girls...and for students of colour...and students from low-income backgrounds...and those who receive special services.
We may never eliminate bias, but if we don't bring it to light, we can't start to address it. It's great that our district is taking on several initiatives around inclusion and cultural competency, but these are useless if we only use them to pat ourselves on the back for starting them. If we can't change the system in meaningful ways for students, we are just as complicit as those who built the structures in the first place. This display is one way to raise some awareness of what we're up against.
To see more pictures of this project, or view frequency tables of the comments, please visit the page for this data story. As always, comments welcome!
Thursday, January 11, 2018
This (school) year, I have built a story about our high school seniors...and one for our sixth graders...and now I'm moving down to kindergarten. For our sixth data story, we are looking at early learning data. What does it mean to be kindergarten ready?
Our district has been participating in the state-mandated WaKIDS assessment for three years. Teachers collect observational data about each student's development in six categories: social-emotional, physical, cognitive, language, literacy, and math. There is a 9-level scale for each item, with birth to age 1 being the lowest and third grade being the highest. Only eight of these are shown below, as no child in our district was rated the lowest level in any category.
This story was one of the easiest in terms of managing the data files. The state provides us with a file of everything submitted by teachers, and then I merged in a few other demographic and program pieces. This time around, there were no statistical shenanigans, just total counts for each category and level.
Do you remember these? Usually built with pony beads, they have made various appearances throughout the years to signify friendship, solidarity, remembrance, or another purpose. Perhaps you wore them on your shoelaces or the lapel of a jacket.
If you haven't seen these sorts of things, they are constructed from safety pins and beads. I remembered them when I was still pondering the strip plot idea and veered off into how I might be able to string or hang beads from a line. Once I thought of the safety pins, I made an immediate connection to early learning (even if we don't use safety pins for diapers anymore).
After I had the concept, I knew I could build a safety pin with beads for each of our 486 kindergartners. Some back-of-the-envelope calculations showed that I could fit 6 beads (at 5 - 6 mm each) on a 2" safety pin. This would allow me to show all six data points of the assessment, using beads with colours matching the developmental levels indicated by the teacher.
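If you want to check my envelope, here's the arithmetic. The bead height and the allowance for the pin's clasp and coil are assumptions, not measurements:

```python
# Back-of-the-envelope check: do six beads fit on a 2" safety pin?
PIN_LENGTH_MM = 2 * 25.4               # a 2" pin is 50.8 mm end to end
USABLE_SHAFT_MM = PIN_LENGTH_MM - 12   # rough allowance for clasp and coil
BEAD_MM = 6                            # assumed height of a pony bead

beads_that_fit = int(USABLE_SHAFT_MM // BEAD_MM)
```

With those numbers, exactly six beads fit: one per assessment category, with nothing to spare.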
But what else could I encode, I wondered?
In divining the entrails of the data, I noticed that there were a few student attributes that might be worthy of further attention.

First was gender. It is not uncommon to hear parents talking about "red shirting" young boys to give them an additional year to mature...but do the data bear this out? I ordered two types of safety pins to help us look at this: gold for girls and silver for boys.

The second piece was a student's birthdate. We have a cutoff of September 1. If a student is not 5 years old by then, they can't enroll. But does that really matter---are older students more "ready"? I decided to encode this using different colours of map pins to attach the safety pins to the display. I picked red (because it was not one of the 8 colours of beads) for students who had a birthdate less than six months prior to the first day of school and white pins for older students.

Finally, what about low income status? I didn't want to mark this by individual student, due to privacy issues; but, as I organized the pins by school, I decided to order the schools by their overall percentage of students who have low income status. That would give a general comparison.

I did look at and consider race; however, I did not represent it with this display because (a) it was actually not as influential a factor as the others for this particular data set and (b) I couldn't represent it as accurately as it deserves. By this, I mean that with student privacy laws, by the time I made a pin showing race and gender, it could become very easy to associate the data with a particular child...especially as most of our schools might have only one kindergarten student of a particular race. (Yes, we are very white.)
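For the curious, the map-pin rule can be written out as a small date calculation. The first-day-of-school date here is hypothetical, and I'm approximating six months as 183 days:

```python
from datetime import date

FIRST_DAY = date(2017, 9, 5)  # hypothetical first day of school

def pin_colour(birthdate):
    """Red pin for the youngest kindergartners, white for the rest.

    A student is "young" if their 5th birthday lands within the six
    months (~183 days) before the first day of school.
    """
    fifth_birthday = birthdate.replace(year=birthdate.year + 5)
    days_before_start = (FIRST_DAY - fifth_birthday).days
    return "red" if 0 <= days_before_start < 183 else "white"
```

So a child born in July gets a red pin (they turned 5 just weeks before school started), while an October or January birthday gets white.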
So here is the final display:
The pins are organized in the space for each school by those kindergartners who were reported as most ready (all purple---or better---in the six categories) to least ready. As you can see in this broad shot, the school with the greatest percentage of low income students (PGS) has a much lower proportion of all-purple pins than BLE, our school with the lowest percentage of low income students.
It's harder to see in the picture at the left (but you can click to embiggen), but gender also seems to play a role in how students are viewed in terms of readiness. Remember, gold pins represent girls...and by the time we get down to the bottom two rows, there is a lot of silver showing. I do wonder whether bias factors in here. I also noticed when I put the pins together that many of the girls were not rated as highly in math as they were in other areas. Hmmm.
Finally, you might notice the labels beneath each board. These are actually little booklets for each school with charts that show aggregate data. Viewers can look at the distributions for each category or for the demographics of the school. If you would like to see these charts, additional photos, or explore the web-based data workbook, please visit our district web site for this display.
One of my colleagues has said that this is his favourite story that I've built. I am very happy with it---we encoded a lot of information into a small space and were able to include every child. The pins jingle and move. The paper sparkles and feels sandy. Light refracts through the beads to make them glitter in the light. I think folding in elements of touch and sound is a critical piece of this work. I know that those aspects don't represent anything in particular about the data, but they invite people to ponder...and that's what I'm after: Engagement.
I also think it's interesting to compare schools and see how they used the scale for this assessment. One school (LRE) only used three levels---all of their pins only have purple, blue, or green beads. Another school (PGS) used eight levels and really differentiated. One school (MTS) is very large (106 kinders), but only 2 were rated as being kindergarten ready in all six areas, while all the other schools had a much larger proportion.
As we get ready to work with our community about registering kindergartners for next year, it will be interesting to think about how this display impacts our conversations. I already had one co-worker spend some time looking at it as she thinks about whether or not to enroll her son with a July birthday in school next year.
I am more than halfway through this project of building large-scale, analog, interactive data displays. My goal is to build ten...and I have four more to dream up and construct. The Muse will be back. I don't know when or what she'll bring, but I will be ready.