PBFs, KPIs, Data Portals, and Success Metrics

HWFC met with Don and John yesterday, and they (Don and John) are giving the same presentation to the Chairs today, kicking off a conversation about what is likely to be a hugely important project that represents a tremendous and short-lived opportunity. There will be strange words and acronyms (see the title) involved, but we’ll run through those over the next week or so, one by one.

In the meantime, this article is a good place to start:

A committee tasked by the Education Department with strengthening how the government measures the success of community colleges last week issued its draft report of recommendations, which will be discussed here today at the committee’s final meeting.

The 20-page report from the Committee on Measures of Student Success calls for community colleges and states to collect and disclose more information about graduation rates, student learning and employment. This reporting should include more voluntarily released data, the committee said, as well as more thorough compliance with current federal disclosure requirements.

“Measures of student success need to more accurately reflect the comprehensive mission of two-year institutions and the diversity of students that these institutions serve,” the report said. “For example, current graduation rates do not adequately reflect these institutions’ multiple missions and diverse populations.”

And this one tells the story (or at least a small part of it) of why the committee’s work both matters to us (like it or not) and represents an opportunity. From the article by Dean Dad:

I’m not naive enough to think that rankings won’t be used in some basically regressive and/or punitive way. But if we at least want to make informed choices, we should try to get the rankings right. Otherwise we’ll wind up rewarding all the wrong things.

He also includes a few suggestions for measures other than graduation:

If the technology and privacy issues could be addressed, I’d like to see a measure that shows how successful cc grads are when they transfer to four-year schools. If the grads of Smith CC do well, and the grads of Jones CC do poorly, then you have a pretty good idea where to start. That would offset the penalty that otherwise accrues to cc’s in areas with vibrant four-year sectors, and it would provide an incentive to keep the grading standards high. If you get your graduation numbers up by passing anyone who can fog a mirror, presumably that will show up in their subsequent poor performance at destination schools. If your grads thrive, then you’re probably doing something right.

Finally, of course, there’s an issue of preparation. The more economically depressed an area, generally speaking, the less academically prepared their entering students will be. If someone who’s barely literate doesn’t graduate, is that because the college didn’t do its job, or because it did? As with the K-12 world, it’s easy for “merit-based” measures to wind up ratifying plutocracy. That would run directly counter to the mission of community colleges, and to my mind, would be a tragic mistake. Any responsible use of performance measures would have to ‘control’ for the economics of the service area. If a college manages to outperform its demographics, it’s doing something right; if it underperforms its demographics, it’s dropping the ball.

The point is, we’ve been bellyaching (rightly and justifiably) about the obtuseness of the measures that are popularly and recently used to judge our performance and “success” (in the media, in the Reinvention scheme, and so on). The next few weeks offer an opportunity to have some say, at the local and state levels at least, which may possibly impact the federal level too (it’s not impossible), as to what we think a successful engagement with students is and how our “performance” might best be measured, insofar as that is possible.

(And, just in case you’re interested, here’s an article on the outcome of the meeting mentioned in the first article.)

Speaking of Metrics

This article is pretty funny; it’s called “Standardized tests prove I’m better than Michael Jordan”:

Now, critics say that PISA, the SAT and other standardized tests are a lousy way to measure educational attainment or value. But I say enough criticism already. Once you truly understand the awesome power of test scores, you will embrace them, as I have done — especially after realizing how standardized testing proves that I am a better basketball player than Michael Jordan.

Don’t laugh; I have the test results. I read something in a blog somewhere about how MJ recently made 16 out of 20 free throws in a friendly shooting contest. Pretty good, but I thought I could do better. So I went to my local gym and practiced and practiced until I achieved my aim: 18 out of 20 free throws! I’ll send you the video, if you like. (Or you could do what most people do with PISA scores and simply take my word for it.)

You may argue that it’s not a fair comparison, but that’s what’s so great about this — simply use the same rules we apply to judging PISA scores, and it’s perfectly fair.

It only gets better from there…