Think, Know, Prove: State of the College and KPIs

Think, Know, Prove is a regular Saturday feature, where a topic with both mystery and importance is posted for community discussion. The title is a shortened version of the Investigative Mantra: What do we think, what do we know, what can we prove? and everything from wild speculation to resource-referenced fact is welcome here.

My first plan for this post was to kick off a series of posts on a few of the acronyms that you’re likely to be hearing a lot about soon–KPI and PBF in particular. As I started to write that post, I found myself writing more about the State of the College address than about KPIs (Key Performance Indicators) and, so, took that as a sign that I should switch topics.

Personally, I quite enjoyed the State of the College address yesterday. Though I loved Metoyer’s playful, hilarious, visual version, I’ve always enjoyed the formality of the presentations, too, and the convening of the entire college into a single room. I have always hated the rooms, though. I can’t hear a dagnabbing thing in room 103–just a word here and there and a muddy blarn about 60% of the time–and 1115 is too crowded to be comfortable when the whole college is there (if I had a magic wand, I’d get us a proper theater). But I digress.

In my time at the college, these addresses have typically been more informative than challenging, usually focusing on our successes (enrollment, achievements, and the like) and our circumstances (most often state funding). At our department meeting, which followed the address, there was a general consensus of approval for Don’s focus on how we treat students, and a few people talked about their experiences of being mistaken for a student and being treated one way and then, upon correcting the impression, being treated completely differently, as if a switch had been flipped.

I’ve written a little about this topic before, and it was gratifying to hear that students rated their registration experience much better this year (though, given what I still perceive to be a general dissatisfaction, along with the fact that we were the lowest rated of the seven colleges, we obviously have a ways to go–and speaking of those survey results, I wonder why we don’t get to see them; wouldn’t you like to? I would. Maybe if the people involved in the process had more direct access to the ratings of their work, they’d be able to dig a little deeper to find that “will” required to move the needle).

It seems that I’ve drifted back into my original topic–Key Performance Indicators (KPIs)–and so I might as well give you a little on that, too, while I’m here.

What is a KPI? Let’s take a baseball team–call them Chicago BestTeam. What’s the most obvious measure of their quality? Their won-loss record. That’s their primary KPI. The team’s record shows how good they are. But for any fan, gambler, member of the industry, investor, or marketing company deciding whether to hang their name outside the stadium, that isn’t enough. Each constituency will have different interests (respectively: likelihood to win the championship (fan), likelihood to win the game (gambler), current acquirable resources and capacities/best practices (industry), revenue and net income (investor), and brand/customer base (marketing company)). The Sox (a.k.a., Chicago BestTeam) have an interest in providing each of those constituents an accurate measure or indicator of their status in those areas, and their won-loss record won’t be enough. So they’ll have to take and share other sorts of measurements, which reflect these different aspects of their efforts. Those are KPIs, too.

Those measures have to be easily understood–Bill James, one of the founders of a cultural revolution in an industry dominated by traditionalist, almost magical thinking, talked about numbers and data having the power of language when well constructed and presented, i.e., they can tell the story–but they have to tell the truth, too, if they’re going to be useful.

As we’ve all been saying for a while now, our graduation rates certainly tell “a story,” but it’s not ours, because of their reductionism and the ways that measure plays into so many misconceptions of what a community college is and does. Hospitals went through something similar when their industry was hit with the Metric Movement. Imagine working at Cook County Hospital and being evaluated (and compared to, say, Northwestern) on the basis of what percentage of people who come in actually leave alive. You’d be shouting like your hair was on fire that the comparison is an unfair one given the differences in clientele, resources, mission, and all the rest, and you’d be eager to figure out some sort of way to demonstrate how well you do with what you’ve got.

Somewhere along the way the measurers of hospitals (investors and public health officials, it seems–maybe even consultants!) figured out that a focus on process and protocols, rather than outcomes, might yield a better picture of a hospital’s quality–are incoming patients screened for psychiatric issues, what percentage of heart attack patients are administered a dose of aspirin, etc. There was an article just this week on the results–hospitals now follow “standard protocols” 97% of the time, up from 82% just eight years ago. You can see the report from the accrediting commission here.

Baseball teams figured it out, too. At least some of them. When school got out in May, the first book I pulled from my shelf was Moneyball, and not because the movie starring Brad Pitt was coming out in September. I wanted to read the book because it tells the story of the way data analysis revolutionized baseball in the early 2000s owing to the unlikely success of one team and their non-traditional methodology. Faced with a small budget in a high-cost industry, which made it harder every year to compete with large-market teams, the Oakland A’s searched for ways to find and exploit “market inefficiencies.” Their method? To use (initially) and then develop Key Performance Indicators by which they could (they thought) more accurately evaluate baseball players and so spend their money wisely. Over a five-year period or so, the A’s outperformed all or most of the big-market teams at a fraction of the cost, and they did it by having better measures of baseball performance and a better understanding of which “performances” were key.

The amazing part is that Bill James and the crew of statisticians he inspired–who came up with the methodology and many of the measures the A’s exploited–had first published in the late ’70s, 20+ years before, and James was widely considered a crank and an outsider. Baseball was, even in the face of a ton of compelling evidence that many of its “heresies” were truth and many of its “Truths” were myths, a place where the insiders spoke in one voice–that of tradition. “You have to be a baseball man,” they said, “to know a baseball player and be able to run a team.”

Don’t educators say the same sort of thing all the time? Don’t we claim a sort of magical eye that can spot good teaching when we see it, regardless of what the numbers say? The point Bill James and company made was that the difference between a .250 hitter and a .300 hitter is about one hit every two weeks. That isn’t visible to the eye; rather, the eye is likely being influenced by other things–the biases of the industry/tradition, subjective preferences, etc. So the industry resisted, at least most of it, which allowed the A’s to keep on achieving what the industry, even as they did it, said could not be done.
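The James point is easy to check on the back of an envelope. Here’s a minimal sketch in Python–the workload numbers (at-bats per game, games per week) are my own assumptions, not from the post or the book–showing that, whatever figures you plug in, the gap works out to a hit or two per stretch, far too few for any eye to track:

```python
# Back-of-the-envelope check: how many extra hits separate a .300 hitter
# from a .250 hitter over a two-week stretch? The workload figures below
# are assumptions for illustration, not numbers from the post.

def extra_hits(avg_low: float, avg_high: float, at_bats: int) -> float:
    """Extra hits the higher-average hitter collects in `at_bats` at-bats."""
    return (avg_high - avg_low) * at_bats

# Assume a starter gets roughly 4 at-bats a game and plays 6 games a week.
at_bats_two_weeks = 4 * 6 * 2  # 48 at-bats

gap = extra_hits(0.250, 0.300, at_bats_two_weeks)
print(f"{gap:.1f} extra hits over two weeks")  # prints "2.4 extra hits over two weeks"
```

Vary the assumptions as you like; the answer stays in the one-to-three-hit range–exactly the kind of difference a scout in the stands could never perceive, and a spreadsheet can’t miss.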

I have a lot more to say about Moneyball, and KPIs, but I’ve run out of time. So, I’ll have to wrap this one up here, and throw it to you with a series of questions this week: what did you think about the state of the college address? What do you think about KPIs? What should our KPIs be? Have you read Moneyball? And what about Brad Pitt?

What do you think? What do you know? What can you prove?