Seth Abramson, I assume the Seth Abramson who wrote the article to which I was responding, commented on my post, but for some reason the comment does not appear on the actual page. I’m not sure why. I don’t know if he deleted the comment after making it or if Blogger is up to its old, bratty tricks again (I have had so much trouble with the comments feature of Blogger lately . . .). At any rate, Abramson, who, I readily admit, is far more of an expert on this stuff than I am, made some excellent points and offered some valuable information, so I thought I’d include some of what he talked about here to give a more complete view of the issue. In case it was Abramson himself and not some Blogger glitch that caused his comment to be deleted, I won’t reproduce his argument word-for-word.
First of all, Abramson had the foresight to direct me to the Poets & Writers article that details the methodology behind their ranking list, which I should have linked to in my blog post last week but didn’t (sorry about that). Much of my frustration over the ranking list last year came, actually, from reading about their methodology. (For example, I was frustrated by the assumption that any program without a good website offering complete, detailed information must not be providing good funding or be highly selective. I understand the P & W logic here: if a program were highly selective and funded a large percentage of its students, why wouldn’t it say so on its website? But I think the logic fails to grasp that some programs simply have very, very, very bad websites, and they don’t include information on their sites that they should.)
Still, while the article does admit that “Four of the nine full-residency MFA rankings are based [on] a survey of a large sample of current MFA applicants,” the five remaining ranking categories are actually based on “hard data”: “funding, selectivity, fellowship placement, job placement, and student-faculty ratio.” Abramson suggested that the P & W ranking list does seem to acknowledge that different applicants have different needs, and that’s a fair point. On top of that, the P & W article freely admits that the data collected to prepare the report represents “publicly known data rather than an ordering of all extant data.” The rankings are most definitely flawed, but P & W knows and admits this. They did the best they could with the information they had.
Perhaps a more important point that Abramson made is that the rankings are valuable partly because they give potential applicants a list of what programs are even out there. Though not all programs make it into the print article, the full list is available online, and anyway, Abramson believes that MFA applicants take the time to do more research than just reviewing one ranking list. This is very true. I knew a lot of people who were applying to MFA programs last year, and while to my knowledge every single one of them closely considered the P & W rankings, all of them also did a substantial amount of additional research before they solidified their lists of where to apply. I will add, though, that when narrowing down which programs to look into more closely, most of them didn’t include many low-ranked programs, and to my knowledge none of them included any program that didn’t make P & W’s top 100, like UAF.
The strongest point that Abramson made in defense of ranking lists was that the programs themselves use these lists to make a case for more funding from their universities and, generally, to improve the areas where they appear to be lacking. I hadn’t thought about this at all, but it’s an excellent point and probably very true. Maybe, for example, a program that does provide the majority of its students with full funding and is very selective, but doesn’t have a great website advertising this information, might take its low ranking on the list as a good sign that it’s time to put a little more energy into marketing to potential applicants. It doesn’t mean that there was ever anything wrong with the program itself, but such a program might receive more applications, allowing it to become even more selective, if it drew up a new and better marketing plan or garnered more funding from its university.
Anyway, since nobody but me had the benefit of reading Abramson’s response to my post, I wanted to address some of these points and be a bit more even-handed about the issue. Abramson has made me rethink the way I view rankings. I still don’t buy that a program’s placement on a ranking list absolutely correlates with its worth and quality as a program; however, I can now see the value of these sorts of lists. And anyway, as Abramson said to me, P & W’s ranking list is still very much in its infancy, and surely the methodology and the readily available data will improve as the ranking system itself matures.