Sunday, August 28, 2011

Last week I talked about my conflicted feelings about my alma mater, the University of Alaska Fairbanks, being included on this year’s Huffington Post "Top 25 Underrated Creative Writing MFA Programs" list. I was thrilled to see UAF acknowledged as underrated, yet at the same time I felt ashamed of my excitement, because I’ve always frowned on the very idea of ranking programs.

Seth Abramson, presumably the same Seth Abramson who wrote the article to which I was responding, commented on my post, but for some reason the comment does not appear on the actual page. I don’t know whether he deleted the comment after making it or whether Blogger is up to its old, bratty tricks again (I have had so much trouble with Blogger’s comments feature lately . . .). At any rate, Abramson, who, I readily admit, is far more of an expert on this stuff than I am, made some excellent points and offered some valuable information, so I thought I’d include some of what he talked about here to give a more complete view of the issue. In case it was Abramson himself and not some Blogger glitch that caused his comment to be deleted, I won’t reproduce his argument word for word.

First of all, Abramson had the foresight to direct me to the Poets & Writers article that details their methodology for putting together their ranking list, which I should have linked to in my blog post last week but didn’t (sorry about that). Much of my frustration over the ranking list last year actually came from reading about their methodology. (For example, I was frustrated by the assumption that any program without a good website offering complete, detailed information must not be providing good funding or be highly selective. I understand the P & W logic here: if a program were highly selective and funded a large percentage of its students, why wouldn’t it say so on its website? But I think that logic fails to grasp that some programs simply have very, very, very bad websites and don’t include information on their sites that they should.)

Still, while the article does admit that “Four of the nine full-residency MFA rankings are based [on] a survey of a large sample of current MFA applicants,” the five remaining ranking categories are based on “hard data”: “funding, selectivity, fellowship placement, job placement, and student-faculty ratio.” Abramson suggested that the P & W ranking list does seem to acknowledge that different applicants have different needs, and that’s a fair point. On top of that, the P & W article freely admits that the data collected to prepare the report represents “publicly known data rather than an ordering of all extant data.” The rankings are most definitely flawed, but P & W knows and admits this. They did the best they could with the information they had.

Perhaps a more important point that Abramson made is that the rankings are valuable partly because they give potential applicants a list of what programs are even out there. Though not all programs make it into the print article, the full list is available online, and anyway, Abramson believes that MFA applicants take the time to do more research than just reviewing one ranking list. This is very true. I knew a lot of people who were applying to MFA programs last year, and while to my knowledge every single one of them closely considered the P & W rankings, all of them also did a substantial amount of additional research before they solidified their lists of where to apply. I will add, though, that most of them, when narrowing down which programs to look into more closely, didn’t include many low-ranked programs, and to my knowledge they didn’t include any programs that didn’t make P & W’s top 100, like UAF.

The strongest point that Abramson made in defense of ranking lists was that the programs themselves use these lists to make a case for more funding from their universities and to improve the areas where they appear to be lacking. I hadn’t thought about this at all, but it’s an excellent point and probably very true. Maybe, for example, a program that provides the majority of its students with full funding and is very selective, but doesn’t have a great website advertising that information, might take its low ranking on the list as a sign that it’s time to put a little more energy into marketing to potential applicants. That wouldn’t mean there was ever anything wrong with the program itself, but such a program might receive more applicants, and thus become even more selective, if it drew up a better marketing plan or garnered more funding from its university.

Anyway, since nobody but me had the benefit of reading Abramson’s response to my post, I wanted to address some of these points and be a bit more even-handed about the issue. Abramson has made me rethink the way I view rankings. I still don’t buy that a program’s placement on a ranking list absolutely correlates with its worth and quality as a program; however, I can now see the value of these sorts of lists. And anyway, as Abramson said to me, P & W’s ranking list is still very much in its infancy, and surely the methodology and the readily available data will improve as the ranking system itself matures.

4 comments:

  1. This is always an interesting conversation, and I can't say I'm going to make it any clearer with my two cents' worth. I always have issues with lists and rankings, but I think one of the unexpected consequences of working in SEO & online marketing is that this bothers me less. You tend to learn that your ranking is never going to be the same in every region of the US, so you accept that search engine ranks are "general estimates" as opposed to hard and fast lists. I don't like #1-100 lists of colleges and graduate programs, but we can probably both agree that some programs are "generally" better than others, so there's some kind of ranking going on even if we don't like it.

    I'm glad their methods look for hard data in addition to the less tangible things, which are still important. But even the hard data can be funny. Based on my income, for example, I would be a prime candidate for the "college increases earnings" argument or the "graduate students make more than college grads" argument. Problem is, I've never used any of my education for my current business - so should I count as one of those stats proving that a college or grad school education makes you more money? I'd argue no - but the plain statistics will have me proving that a Coe College and UAF education leads to higher pay than average.

    That's the hard part with any list, though - in general, graduates' income would be a good indicator and should be in there, but in cases like mine I'd argue it comes down to drive, motivation, and a willingness to master a separate craft, not the college education.

    Definitely an interesting conversation - not sure I've added much in clarity though :)

  2. Good points! You HAVE helped clear things up a bit for me. The issue is so, so, so complicated that only by acknowledging how complicated it is can it ever really be tackled. I like your attitude of seeing rankings as a kind of general estimate and not taking them too seriously. That way they can be seen as useful in the ways that they are useful, while still not mattering a whole lot in the larger picture.

  3. http://www.observer.com/2011/09/creative-writing-profs-dispute-their-ranking-no-the-entire-notion-of-ranking/

    This is a new development.

  4. Ooh! Interesting! Thanks, Jayme!
