CRAAP has been taking a lot of crap lately. This came through my Twitter feed today:
— Lisa Hinchliffe (@lisalibrarian) March 18, 2017
It refers to the latest among a number of claims that so-called fake news can be validated by using the CRAAP test. I suppose it’s correct to say one can validate such things, but that would require a rather tortuous misapplication of the test. The case in point here refers to one of the president’s tweets about being wiretapped. We can consider the information current, but that’s about it. Whether or not it is relevant depends entirely on what one needs the information for and what one intends to do with it, which is not determined in the example. Accuracy cannot be determined from the tweet at all, and as the article says, the claim has been refuted by many who have knowledge of and familiarity with the situation. The president is well known for having an inverse relationship with truth, so the accuracy can be considered suspect from the start, and for the same reason his authority approaches zero. His purpose is purely political: to misinform, to attack, to put his predecessor in a negative light. We know this because it is something he has been working at for years. The only way the tweet in question could be considered reliable information would be to take the position that you can believe everything you read.
That last point is the problem I have with most criticisms of the CRAAP test. They remove any kind of thought from the process. But why would anyone do that? Why would anyone assert that “authority is a binary,” as the article claims? Certainly one can oversimplify the test, and some organizations do turn it into a bizarre scorecard, but there is no reason why anyone has to do it that way, nor is there any reason that they should.
If we look at CSU Chico’s version (PDF) of it, we can see that it is not as simple as people pretend. Their version lists the five criteria and several example questions for each. There is no implication that the list of questions is exhaustive. Many of them take the form of yes/no questions, but many of them require some critical thinking to reach a good answer. “Does your topic require current information, or will older sources work as well?” This can only be answered in the context of how the information will be used, and what the user is trying to accomplish. “Is the author qualified to write on the topic?” To answer this, one has to consider what makes an author qualified, and what kinds of qualifications there are. “Is the information supported by evidence?” This requires evaluating evidence and logic, and perhaps methodology. “Where does the information come from?” This requires some investigation, which can open up many new complexities. “What is the purpose of the information?” There is nothing simple or binary about this. I wouldn’t even consider purpose to be singular.
I find the CRAAP test a good entry point into a discussion of how and why we evaluate information. We all do it; we just don’t all think about it reflectively and intentionally. And even those of us who are expert and reflective are subject to confirmation bias. So the test provides a model, a list of questions we can ask. All of those supposedly binary questions come with an unspoken corollary: How do we know? All of those questions require us to think about context, both the context of what we are reading and the context of what we are writing or otherwise doing with the information.
I know people misuse CRAAP. Perhaps they misunderstand it. When I survey students informally, about a third of them tell me that .orgs are better than .coms. When I asked a group of librarians where this misconception comes from, about a quarter of them told me that it was true. Some people make the test into a scoresheet. Some people apparently just toss the test at students with minimal to no explanation. I don’t know if librarians actually say “anything that ends in .gov is reliable,” but I do know that this does not come from the CRAAP test. It asks, “Does the URL reveal anything about the author or source?” That does not mean .gov is good. It does mean that we need to talk about how to dissect and decipher URLs, and think about what they tell us, if anything. Perhaps the problem is that people treat CRAAP as an end product, and latch onto it as a simple solution. But that’s wasteful. Using it as fertilizer is productive.