So Megan McArdle wrote a long post
attacking Elizabeth Warren as a scholar. What’s surprising is how little “there there” there is to her critique. I would love to see nomination hearings centered on how expansive a definition to use for medical bankruptcies, and to watch Warren rip the face off of Senators when it comes to empirical methods. I doubt it will come to that, but I’ll go ahead and respond. (I’ve been waiting for part two before responding, but I now assume it may never show up.)
Because that isn’t what this is about. It’s about giving the impression that Warren is a weak scholar. Given that Warren is considered
“the leading authority in the country on bankruptcy law,” being called a hack by McArdle, of all people, is something. Especially when we get a gem of a major screwup like this right out the door in the post:
Megan McArdle, blog post: 2. The response rate on their survey was only 20%. Given the deep shame surrounding bankruptcy, you have to worry that they got an unrepresentative sample. And how is that sample most likely to be unrepresentative? Well, one pretty likely way is that people who went bankrupt through no fault of their own–folks who got whacked by large and unpayable medical bills or a business closure–were more likely to respond than the people with drug or alcohol problems, profound depression that left them unable to work, compulsive gambling issues, and so forth….
Katie Porter, comments: Also, I would like to correct the misstatement, I believe of a commentator, that Ms. McCardle reproduces in her article above, that the response rate to the survey was 20%. The response rate was right at 50%, or just under that, depending on the exact metric for response rate used....
I have no idea what to make of this. Megan opens her critique by claiming a massive bias in the data sample, implied by a low response rate of 20%. A commenter politely responds that the response rate is actually 50%. She is remarkably polite about it, considering the 50% figure is right on the front page of the 2009 study.
Megan then says she meant the interview rate.
Nobody is perfect, especially on the blogs. I’ve messed up data on this blog before, I’ve confused terms that I knew but didn’t catch in a proofread, and I’ve used data and terms that I thought meant one thing that turned out to mean another thing. Anytime someone points this out I correct it, or pause and double-check what I thought, or quietly ditch using that information. Humility is usually the best antidote to being a hack.
But notice how Megan just keeps on going. This is one of the major planks of her argument, that the sample is corrupted, and when someone points out that what she stated was factually incorrect, she just changes the terms and keeps on going as if what she wrote wasn’t wrong. How is a reader supposed to read this? Did she mean to say interview rate from the beginning? Does she think a 50% response rate is too low? Is the study useless without a 50% interview rate? Did she know the difference between the two terms at the time of writing? Does she want to reconsider her argument?
(It’s pretty similar to the classic McArdle instance of “It wasn’t a statistic–it was a hypothetical” regarding pharma’s US profits.)
Which is a shame. As in the hypothetical case, there’s no pause, no reflection, so as a reader I just want to assume bad faith and move along.
But I won’t. Let’s continue.