My colleague Matt Stempeck said it best: “Dan, I know that your life has been a tornado wrapped in a hurricane wrapped up in a whole box of tsunamis this week, but you really need to start wearing pants to work.”
It turns out only part of that quote is accurate, but you’ll never know which part for sure! This is why, before I can graduate from MIT, I have to create an automated bullshit detector. The basic premise is that we, as readers, are inherently lazy. It isn’t just that we’ll believe almost anything — remember that time in 1938 when we believed aliens were invading the planet just because someone on the radio said so? Yeah. That happened. The real problem is that we’ll often believe what we want to believe (or disbelieve what we don’t want to believe).
It’s hard to blame us. Just look at the amount of information flying around every which way. Who has time to think carefully about everything? Not me, that’s who’nt. This is why I’m working on a tool called Truth Goggles that will help hone our critical abilities; one that will help us identify pieces of information worth inspecting a little more closely before deciding how they fit into our world views.
Thesis Goggles
When I wrote “before I can graduate from MIT” earlier in this post I wasn’t lying; I have decided to pursue Truth Goggles for my thesis. I’m definitely not the first person to explore this problem space, but there is a lot of room to contribute. New technology has opened up new possibilities, needs have become clearer, and there is a wide variety of possible solutions and unanswered questions just sitting around waiting to be explored.
In November I presented the idea to the Media Lab community in a set of slides.
The feedback I got was mixed, but what can you expect from a day called “Crit Day,” which is short for “Critically Injure Pride, Hopes, and Dreams of Graduating Day.” Here are the main questions that came up:
This doesn’t seem like it will scale, considering PolitiFact only has a few thousand fact-checked claims. Why aren’t you using the crowd to fact check?
My time at MIT will be spent focusing on the interface and user interaction rather than the generation and aggregation of source information. There are enough difficult questions surrounding the interaction layer; I don’t think it is worth complicating things further by trying to create a crowd-based journalism platform (which is essentially what crowdsourced fact checking amounts to).
Isn’t this just a mashup of technologies and data sets? How is what you are doing novel?
It’s true that I’m not inventing new algorithms; I’m applying existing algorithms in novel ways. Credibility layers aren’t robust right now, and they come with their own sets of interesting questions in terms of user experience and system design. My contribution will be to frame those questions, answer some of them, create a prototype, and test that prototype. This won’t be as trivial as throwing more information on a screen and calling it a day; the interface has to be designed with care.
Do you expect to incorporate primary source data?
My initial prototype probably won’t pull from sources other than PolitiFact and other fact-checking services, but I will definitely be thinking about ways to use other sources of data. Primary source content will eventually help with information scalability, since raw footage and raw data could help computers find potentially dubious claims (and help readers make determinations about those claims).
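To make the matching problem concrete, here’s a minimal sketch of what the simplest possible credibility layer might do: compare a sentence against a store of fact-checked claims using fuzzy string similarity. Everything here is a hypothetical placeholder (the claims, the verdicts, the threshold); a real prototype would pull from PolitiFact or a similar service and would need far smarter matching than this.

```python
from difflib import SequenceMatcher

# Hypothetical stand-in for data pulled from a fact-checking service.
FACT_CHECKS = [
    # (claim text, verdict, source URL)
    ("The new bill will create ten million jobs.", "Mostly False",
     "https://example.com/fact-checks/1"),
    ("Unemployment has doubled in the last year.", "False",
     "https://example.com/fact-checks/2"),
]

def find_matches(sentence, threshold=0.7):
    """Return fact checks whose claim text loosely resembles the sentence."""
    matches = []
    for claim, verdict, url in FACT_CHECKS:
        score = SequenceMatcher(None, sentence.lower(), claim.lower()).ratio()
        if score >= threshold:
            matches.append((score, claim, verdict, url))
    return sorted(matches, reverse=True)  # best match first

print(find_matches("He insists the new bill will create ten million jobs."))
```

Naive similarity like this falls apart the moment a claim is paraphrased, which is exactly why the interesting work lives in the matching and the interface rather than in any one algorithm.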
Bullshit, This is Clearly Science Fiction
There are a lot of hard questions lurking behind corners here. In fact, most of them aren’t even trying to hide; they’re just sitting obnoxiously in the middle of the room. Some are technical, some are philosophical, but all of them need to be addressed intelligently for something like Truth Goggles to actually have a chance of working. I’ll rattle off a few of them.
- Who determines the truth? Journalists? Experts? Crowds? Individuals? Algorithms?
- Sometimes there is a right answer and sometimes there is room for debate. Can you tell which is which? How do you reflect the difference?
- How does the tool account for bias in sources?
- How does the tool account for bias in users?
- Will the system actually know enough to be regularly useful?
- This could easily just make consumers lazier. How do you prevent that?
- What happens when the tool is wrong?
- How will this change the way people produce content?
- Where do journalists fit into the picture?
As I’ve pondered these questions I’ve come to the following absolute conclusion: credibility layers need to empower critical ability. I’ve also decided that it’s OK for the system to make mistakes, but it is never allowed to lie. This means the interface should be less focused on telling the reader what to think and much more focused on reminding (and helping) the reader to think at times when thinking is most important.
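In interface terms, “allowed to make mistakes but never allowed to lie” might look something like this sketch: the tool reports its own uncertainty and nudges the reader toward the fact check instead of stamping a sentence true or false. The thresholds and wording are invented purely for illustration.

```python
# A sketch of "never lie" as an interface rule: surface uncertainty,
# prompt the reader to think, and stay silent rather than guess.
# The score thresholds here are arbitrary placeholders.
def describe_match(score, fact_check_url):
    if score >= 0.9:
        return f"This closely matches a fact-checked claim: {fact_check_url}"
    if score >= 0.7:
        return ("This resembles a fact-checked claim and may be worth "
                f"a closer look: {fact_check_url}")
    return None  # too uncertain; saying nothing beats a misleading prompt
```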
I’ve also come up with a list of weaker claims to throw out there for discussion:
- Credibility layers don’t have to speak to everyone, but they need to empower the open-minded.
- Journalists are our best bet for deep analysis and identifying truth that requires lots of time and effort (e.g. investigation and concept synthesis).
- Algorithms are our best bet for identifying contextual evidence (e.g. data, trends, and sources of sound bites).
- Mobs can’t be trusted to decide what is true and false, but they are the key to figuring out what is worth thinking about.
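That last claim lends itself to a quick sketch: readers never vote on truth, they only flag snippets that deserve attention, and the most-flagged snippets rise to the top of a queue for actual fact checkers. The flagging mechanism here is entirely hypothetical.

```python
# A toy attention queue: the crowd flags, journalists fact check.
from collections import Counter

flag_counts = Counter()

def flag(snippet):
    """A reader marks a snippet as suspicious or worth checking."""
    flag_counts[snippet] += 1

def review_queue(n=10):
    """The most-flagged snippets, in priority order for fact checkers."""
    return flag_counts.most_common(n)
```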
Over the coming months I’ll be cranking out interfaces, prototypes, and eventually some good old-fashioned boring academic papers about this idea. In the meantime, if you’re interested in Truth Goggles I’ll be trying to post updates as regularly as possible on my blog, on Twitter (@slifty), and eventually on the newly registered truthgoggl.es.