Truth Goggles identifies fact-checked content on the web, reminding you when it is most important to think carefully. It is a credibility layer designed to increase your ability to reach a well-formed understanding of the world, using journalism to raise red flags.
Truth Goggles was my master’s thesis at the MIT Media Lab. This means that I spent a lot of time exploring the many challenges behind an idea like this, and attempting to solve at least some of them. There are three gigantic hurdles to jump:
- Fact Database – What should be used as ground truth? How do you identify it? Is there enough of it? I ended up realizing that there aren’t many universally believed truths, and I want the system to be accessible to a diverse audience, so it needs to contain as many well-explained and thoughtful verdicts as possible. I decided that, for the first round, a truth source needs to do two things: have a reputation for neutrality (i.e. both sides call them biased, or neither side does), and explain the reasoning behind its verdicts.
- Paraphrase Detection – There are thousands of ways to say the same thing; how do you match known fact-checks against slightly different phrasings? This is a challenging problem because it means computers need to understand language. Luckily there are a lot of smart people exploring this space, so I can use existing tools to get part of the way there.
- Human Brains – Assume we have a perfect system that is able to identify fact-checked phrases 100% of the time. Would you trust it? Would you use it? If it told you that you were wrong, would it change your mind? A system that isn’t usable isn’t worth building.
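To make the paraphrase-detection hurdle above concrete, here is a toy sketch using only Python’s standard-library `difflib`. This is purely illustrative and is not the approach Truth Goggles uses; a real system would need semantic understanding, while a surface-similarity score like this only catches near-identical phrasings:

```python
# Toy illustration of the paraphrase-detection problem (not the real system).
# SequenceMatcher scores surface similarity between two strings from 0 to 1.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity score between two lowercased phrases."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical example claims:
fact_check = "the unemployment rate fell last year"
paraphrase = "unemployment rates fell over the last year"
unrelated = "the mayor opened a new public library"

# A paraphrase should score higher against the fact-check than an
# unrelated sentence does, which is the signal a matcher relies on.
print(similarity(fact_check, paraphrase))
print(similarity(fact_check, unrelated))
```

Even this crude score separates close rephrasings from unrelated text, but it fails as soon as the same claim is expressed with different words, which is exactly why the problem is hard.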
I focused on this third problem (human brains) because I like thinking about people more than algorithms. I tried to design the system to make it easy to swap out better algorithms and data sets down the line, but my experimentation revolved around the user experience.
The prototype can be used anywhere online, but it doesn’t yet do any intelligent paraphrase detection, so chances are it won’t be useful in most places. The study results were promising, indicating that credibility layers could very well help people think more carefully and in more nuanced ways.
There is still a lot to do for Truth Goggles, and the project is not dead! It’s also clear that people love the idea of an automated bullshit detector.
Papers, Posts, and Press
The project got a huge amount of coverage; here are some highlights.
- The first piece by Nieman Lab
- El Tiempo
- CBC Interview
- La Vanguardia
- The Register
- Tech Crunch
- NPR Interview
- The second piece by Nieman Lab
- Boing Boing
- New Scientist (Volume 215, Issue 2882, 15 September 2012, Pages 44–47)
- Wired UK (November 2012)
- Introducing Truth Goggles
- Achievement Unlocked: Thesis
- Truth Goggles Study Results