PBFs, KPIs, Data Portals, and Success Metrics

HWFC met with Don and John yesterday, and the two of them are giving the same presentation to the Chairs today, kicking off a conversation about what is likely to be a hugely important project, one that represents a tremendous and short-lived opportunity. There will be strange words and acronyms involved (see the title), but we’ll run through those over the next week or so, one by one.

In the meantime, this article is a good place to start:

A committee tasked by the Education Department with strengthening how the government measures the success of community colleges last week issued its draft report of recommendations, which will be discussed here today at the committee’s final meeting.

The 20-page report from the Committee on Measures of Student Success calls for community colleges and states to collect and disclose more information about graduation rates, student learning and employment. This reporting should include more voluntarily released data, the committee said, as well as more thorough compliance with current federal disclosure requirements.

“Measures of student success need to more accurately reflect the comprehensive mission of two-year institutions and the diversity of students that these institutions serve,” the report said. “For example, current graduation rates do not adequately reflect these institutions’ multiple missions and diverse populations.”

And this one tells the story (or at least a small part of it) of why the committee’s work both matters to us (like it or not) and represents an opportunity. From the article by Dean Dad:

I’m not naive enough to think that rankings won’t be used in some basically regressive and/or punitive way. But if we at least want to make informed choices, we should try to get the rankings right. Otherwise we’ll wind up rewarding all the wrong things.

He also includes a few suggestions for measures other than graduation rates:

If the technology and privacy issues could be addressed, I’d like to see a measure that shows how successful cc grads are when they transfer to four-year schools. If the grads of Smith CC do well, and the grads of Jones CC do poorly, then you have a pretty good idea where to start. That would offset the penalty that otherwise accrues to cc’s in areas with vibrant four-year sectors, and it would provide an incentive to keep the grading standards high. If you get your graduation numbers up by passing anyone who can fog a mirror, presumably that will show up in their subsequent poor performance at destination schools. If your grads thrive, then you’re probably doing something right.

Finally, of course, there’s an issue of preparation. The more economically depressed an area, generally speaking, the less academically prepared their entering students will be. If someone who’s barely literate doesn’t graduate, is that because the college didn’t do its job, or because it did? As with the K-12 world, it’s easy for “merit-based” measures to wind up ratifying plutocracy. That would run directly counter to the mission of community colleges, and to my mind, would be a tragic mistake. Any responsible use of performance measures would have to ‘control’ for the economics of the service area. If a college manages to outperform its demographics, it’s doing something right; if it underperforms its demographics, it’s dropping the ball.
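
Dean Dad doesn’t spell out what “controlling” for the economics of the service area would look like, but the idea is simple enough to sketch. Here is a toy version in Python (every number below is invented, and the college names are borrowed from his example): fit a line predicting graduation rate from a service area’s median income, then read each college’s “performance” as how far it lands above or below that line.

    # A toy illustration of "controlling" for a service area's economics:
    # regress graduation rate on median household income, then treat each
    # college's "performance" as the residual, i.e., how far it sits above
    # or below the rate its demographics would predict. All data are made up.

    # (college, median household income in $1000s, graduation rate in %)
    colleges = [
        ("Smith CC", 38, 22),
        ("Jones CC", 55, 30),
        ("River CC", 72, 33),
        ("Lake CC",  44, 31),
    ]

    n = len(colleges)
    mean_x = sum(income for _, income, _ in colleges) / n
    mean_y = sum(rate for _, _, rate in colleges) / n

    # Ordinary least-squares slope and intercept for rate = a + b * income.
    b = (sum((inc - mean_x) * (rate - mean_y) for _, inc, rate in colleges)
         / sum((inc - mean_x) ** 2 for _, inc, _ in colleges))
    a = mean_y - b * mean_x

    for name, income, rate in colleges:
        predicted = a + b * income
        residual = rate - predicted
        verdict = "outperforming" if residual > 0 else "underperforming"
        print(f"{name}: predicted {predicted:.1f}%, actual {rate}%, "
              f"{verdict} its demographics by {abs(residual):.1f} points")

A college well above the line is plausibly doing something right; one well below it is, in Dean Dad’s terms, dropping the ball. A real version would obviously need better data and more than one predictor, but the mechanics are no more exotic than this.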

The point is, we’ve been bellyaching (rightly and justifiably) about the obtuseness of the measures popularly used of late to judge our performance and “success” (in the media, in the Reinvention scheme, and so on). The next few weeks offer an opportunity to have some say, at the local and state levels at least (and conceivably at the federal level, too), about what we think a successful engagement with students is and how our “performance” might best be measured, insofar as that is possible.

(And, just in case you’re interested, here’s an article on the outcome of the meeting mentioned in the first article.)

Classrooms in Other Countries (and Other Differences)

This article from Slate is mildly interesting and, in spots, mildly horrifying. It says:

Classrooms in countries with the highest-performing students contain very little tech wizardry, generally speaking. They look, in fact, a lot like American ones—circa 1989 or 1959. Children sit at rows of desks, staring up at a teacher who stands in front of a well-worn chalkboard.

“In most of the highest-performing systems, technology is remarkably absent from classrooms,” says Andreas Schleicher, a veteran education analyst for the Organization for Economic Cooperation and Development who spends much of his time visiting schools around the world to find out what they are doing right (or wrong). “I have no explanation why that is the case, but it does seem that those systems place their efforts primarily on pedagogical practice rather than digital gadgets.”

There are others, quoted in the piece, who suggest some reasons for the comparative differences. Just be sure to mentally insert “on standardized tests” immediately after each use of the word “performing” when you read the rest.