A Little Great Books Love

Some kind words (and national props) for HW and Wright’s Great Books programs (which, by the by, aren’t the only ones of recent note) arrive on the scene today courtesy of Adam Kotsko of Shimer, as posted in today’s Inside Higher Ed:

I’ve spoken of the lack of faculty buy-in at other institutions, but I think this points to an even more important factor: student buy-in. If students don’t care, if they’re enrolled for utilitarian reasons and have no intrinsic love of learning, they will most likely wind up failing — and dragging the class down with them. Hence it seems to me that less-selective institutions could offer an optional program for interested students, much like those at two of the City Colleges of Chicago (Harold Washington and Wilbur Wright Colleges). Shimer has worked with Harold Washington in particular for many years, and several of their Great Books students have ultimately finished their four-year degrees at Shimer as a result.

Click HERE to read the rest. And here’s a companion piece from a Chicago State faculty member.

h/t to John Hader for the Chronicle letter pointer

PBFs, KPIs, Data Portals, and Success Metrics

HWFC met with Don and John yesterday, and they (Don and John) are giving the same presentation to the Chairs today, kicking off a conversation about what is likely to be a hugely important project, one that represents a tremendous and short-lived opportunity. There will be strange words and acronyms (see the title) involved, but we’ll run through those over the next week or so, one by one.

In the meantime, this article is a good place to start:

A committee tasked by the Education Department with strengthening how the government measures the success of community colleges last week issued its draft report of recommendations, which will be discussed here today at the committee’s final meeting.

The 20-page report from the Committee on Measures of Student Success calls for community colleges and states to collect and disclose more information about graduation rates, student learning and employment. This reporting should include more voluntarily released data, the committee said, as well as more thorough compliance with current federal disclosure requirements.

“Measures of student success need to more accurately reflect the comprehensive mission of two-year institutions and the diversity of students that these institutions serve,” the report said. “For example, current graduation rates do not adequately reflect these institutions’ multiple missions and diverse populations.”

And this one tells the story (or at least a small part of it) of why the committee’s work both matters to us (like it or not) and represents an opportunity. From the article by Dean Dad:

I’m not naive enough to think that rankings won’t be used in some basically regressive and/or punitive way. But if we at least want to make informed choices, we should try to get the rankings right. Otherwise we’ll wind up rewarding all the wrong things.

He also includes a few suggestions for measures other than graduation:

If the technology and privacy issues could be addressed, I’d like to see a measure that shows how successful cc grads are when they transfer to four-year schools. If the grads of Smith CC do well, and the grads of Jones CC do poorly, then you have a pretty good idea where to start. That would offset the penalty that otherwise accrues to cc’s in areas with vibrant four-year sectors, and it would provide an incentive to keep the grading standards high. If you get your graduation numbers up by passing anyone who can fog a mirror, presumably that will show up in their subsequent poor performance at destination schools. If your grads thrive, then you’re probably doing something right.

Finally, of course, there’s an issue of preparation. The more economically depressed an area, generally speaking, the less academically prepared their entering students will be. If someone who’s barely literate doesn’t graduate, is that because the college didn’t do its job, or because it did? As with the K-12 world, it’s easy for “merit-based” measures to wind up ratifying plutocracy. That would run directly counter to the mission of community colleges, and to my mind, would be a tragic mistake. Any responsible use of performance measures would have to ‘control’ for the economics of the service area. If a college manages to outperform its demographics, it’s doing something right; if it underperforms its demographics, it’s dropping the ball.

The point is, we’ve been bellyaching (rightly and justifiably) about the obtuseness of the measures that have lately been used to judge our performance and “success” (in the media, in the Reinvention scheme, and so on). The next few weeks offer an opportunity to have some say, at the local and state levels at least, and possibly even at the federal level, about what we think a successful engagement with students is and how our “performance” might best be measured, insofar as that is possible.

(And, just in case you’re interested, here’s an article on the outcome of the meeting mentioned in the first article.)

Think, Know, Prove: Data Fest

Back in 2006 or so, I distinctly remember a presentation that Keenan gave to the Chairs about the percentage of HW and CCC students who “achieved a positive outcome.” Students were asked more specific questions about their intent than PeopleSoft asks, and then they were tracked for six years, I think. I remember being astonished by the research and amazed that it wasn’t being hyped. In my fuzzy memory, the report said that close to 80% of the students who came in left with a positive outcome, and I thought I remembered categories like completion, transfer, retention/still going, and those who “got what they came for” if they came for personal interest. I also thought there was a category for those who left or stopped out after completing a successful semester, while in good academic standing (the idea being that their personal circumstances posed some kind of obstacle to their continuing), and a category for those who left after an unsuccessful semester (which would have been the ones who did not achieve a positive outcome).

I’ve been combing my files on and off, in 8-minute bursts here and there, looking for the handout that Keenan gave us, but to no avail. I haven’t been up to ask Keenan for it because, well, I don’t want to put her in a bind, and my description would be so vague that it probably wouldn’t be helpful. And then somebody struck gold.

A friend of mine was poking around the Intranet and ended up in corners that I have not yet visited (please note: that link will only work while you are on campus, connected to the network). She wandered into this and this, and then she sent them to me.

They have some great stuff in there. For example:

Community college student outcomes should not be reported in a fragmented manner. Due to the multiple educational and career goals of these students, the use of multiple and comprehensive measures is essential to document the achievement of these goals.

And then there’s this:

                          DA      HW      KK      MX      OH      TR      WR      CCC
Total Positive Outcomes   65.0%   71.3%   54.9%   55.0%   61.7%   71.1%   73.8%   66.7%

And there’s more, too. And, please note, this is all available (and more!) on the CCC Intranet. Their own research and data show that the Reinvention numbers are but one look at how successful we are at serving students. It is undeniably true, as I’ve said before, that those numbers can and should be improved, but they clearly do not tell the whole story.

So, take a look at this stuff and then tell me: What do you think? What do you know? What can you prove?


Student Motivation Question

As I noted in the Reinvention Forum notes, one of our colleagues, Kristin Bivens, raised an interesting point about student responsibility and motivation needing to be included in the mix of considerations related to student success. About a week before, I read this article by Robert Samuelson in the Washington Post about student motivation, but I couldn’t quite bring myself to post it; something about the tone, or maybe the content, kept me from putting it up, even the day after the forum. (And I also didn’t want to suggest that Kristin’s point was anything like Samuelson’s, even accidentally, just as I don’t want to give the impression now that I am critical of her position because of the next paragraph.)

And then I saw this response to Samuelson’s piece in the Washington Post education blog from Alfie Kohn, titled (at least on his web site) “School Would Be Great If It Weren’t For the Damn Kids”:

Look beyond methods, though, and consider goals. What’s the point of educating students in the first place? Here is where it becomes relevant that Samuelson’s primary area of interest, like that of so many others who hold forth on the subject of education, is not education. His job is to write about economics, and he sees schooling through that lens. As I’ve noted elsewhere, we have reason to worry when schooling is discussed primarily in the context of “global competitiveness” rather than in terms of what children need or what contributes to a democratic culture — and, indeed, when the children themselves are seen mostly as future workers who will someday do their part to increase the profitability of their employers.

And then I came home yesterday and found my beloved’s September issue of English Journal, which is all about “Motivation” and features an article, also by Alfie Kohn, called “How to Create Nonreaders: Reflections on Motivation, Learning, and Sharing Power.”

And I think you should read them all.

And then read this article on another topic altogether. Come back and say something helpful when you get through one or all of them, preferably by tomorrow. (And yes, that was meant to be an attempt at irony by intentionally imposing three of Kohn’s suggestions for “killing motivation.”) I will stop now…

Tuesday Teaching Question

Okay, so last week I asked you for your big failure. It’s only fair that this week I ask about your biggest success this semester. For example, this year I moved my pre-class survey onto Blackboard (instead of collecting a paper-and-pencil version), which gathered the data much faster than I ever did by hand AND allowed me to make a word cloud out of the answers to various open-ended questions (word clouds being one of my favorite online toys ever). I’ve known about Blackboard’s survey capabilities for years, but never put them to use. Silly.

What is something NEW that you tried this semester that worked out really well?