Political Analysis

This piece made me laugh and seems full of truth.

Being in political science and watching Election Night coverage makes me feel how I imagine doctors must feel when they watch ER. The temptation to yell, “That’s not how it works at all! This is ridiculous!” at the TV is occasionally overwhelming. In the end, we have to remind ourselves that the viewing public doesn’t care whether what they are seeing is realistic or accurate, only that it is entertaining. They only care that House comes up with a mystery diagnosis, that Sam Waterston wins over the jury, or that the barking pundits explain the election results in unfathomably simplistic terms that happen to coincide with their own beliefs.

But for those who need the grand explanation, the sweeping conclusions drawn from limited data, the themes that allow us to boil elections down to slogans, I humbly submit the following… The American voter has clearly demanded:

1. Social Security reform that guarantees my current level of benefits, alters someone else’s, and cuts everyone’s Social Security taxes to boot.

3. A balanced budget that doesn’t sacrifice any of the government programs – especially the sacred military-industrial complex and the various old age benefits – that we like.

12. A highly educated workforce produced by a school system that requires no tax dollars to achieve excellence, students who have no interest in learning, and a virulently anti-intellectual society.

13. Closed borders and an endless supply of cheap labor to keep prices low.

15. Health care that is cheap, superior, and readily available to me without the danger of the same being enjoyed by anyone I deem undeserving.

There are more – check them out HERE.

The Assessment Paradox

Maybe it’s just a cognitive bias born of heightened awareness, but for whatever reason I have been coming across a lot of material lately on data collection and its associated issues, including a piece on a paradox inherent to assessment. According to author Victor Borden:

Information gleaned from assessment for improvement does not aggregate well for public communication, and information gleaned from assessment for accountability does not disaggregate well to inform program-level evaluation.

But there is more here than just a mismatch in perspective. Nancy Shulock describes an “accountability culture gap” between policy makers, who want relatively simple, comparable, unambiguous information that provides clear evidence of whether basic goals are being achieved, and members of the academy, who find such bottom-line approaches threatening, inappropriate, and demeaning of deeply held values.

Anyone who’s been involved with or aware of assessment at HW will know that Cecilia came in and consistently coached toward the idea that assessment ought to be formative – aimed at building knowledge about student learning in order to make changes to practices whose impact could, at least in theory, be measured.

Of course, assessment is (or can be) used for other purposes, too. Suppose an assessment tells us that more than half of the students who have completed our Gen Ed Humanities requirement cannot (or will not) write a series of short essays on an art object that meet our expectations for what students should be able to do. That is useful information internally, but potentially destructive and harmful to the institution and its members (even the students) if it circulates publicly. So in every assessment there is, at the least, a tension, and with it a temptation to resolve the difficulty by cooking the measure (or the numbers) to make sure we look good. Having worked on the Assessment Committee for six years, I can attest that we work hard to avoid those traps, but we’ve certainly had discussions about the potential impact of certain findings.

Anyway, this article does the best job I’ve come across of explaining a dynamic I’ve long felt but never been able to pin down. Well worth the time to read, I’d say.