Remind me to never do that again.
On Friday I officially handed in my thesis, titled “Truth Goggles: Automatic Incorporation of Context and Primary Source for a Critical Media Experience.” For those who don’t know already, it was about Truth Goggles: an automated bullshit detector for the Internet, or, more accurately, an interface to help people think carefully. The final version weighed in at a nice round 145 pages.
I’ll let the dust settle before putting this monstrosity online. I also want to write some more condensed posts about the interesting parts because I know nobody is ever going to read the damn thing. Those will come later. For now I give you a few bullet points.
The Gist
Here’s the basic story of the document:
- I learned about the millions and millions of reasons why my idea could never work.
- Not having a strong sense of self-preservation, I kept going anyway and tried to create “Truth Goggles!”
- I worked really hard to design and implement an interface that people could value even if they didn’t trust the sources behind the tool.
- I ran a user study and learned that the interfaces worked pretty well when it came to protecting people from misinformation, and that almost everyone who participated really wants to be able to trust information again.
The Gems
I’ll give a quick preview of some lessons learned. Each of these points deserves a post of its own, but since this isn’t my thesis I’m going to just put out my own observations and thoughts. The later posts will probably be more “scientific” and “explanatory” (i.e. “boring” and “less quotable”).
- When people consume information, they are struggling hard to maintain their identity. That’s all there is to it. There is plenty of evidence that people consume information with ideological motivations, and those motivations often cause them to accept or reject information based on how well it aligns with what they already believe. I have a theory that if you could just remind someone that there’s nothing to fear, that you aren’t trying to change who they are, you would suddenly be able to actually communicate with them.
- Trying to tell people what to think is a losing battle. When the first round of press for Truth Goggles came out back in 2011, I paid attention to every single comment on every single report about the idea I could find. Lots of people liked it, but a lot of people were instantly dismissive due to concerns about bias. I heard their point, agreed with it, and realized what journalists saw ages ago: there is no way to create a universally respected system that also tells people what to think. I changed course and settled on a system that reminds people when to think instead. I think that is a better mission anyway.
- Credibility breeds respect, and respect breeds open minds. Several participants in the Truth Goggles user study commented that having a credibility layer made them more willing to consider perspectives and messages they might normally have ignored completely. Think about that for a second. It makes sense, right? It is much easier to respect what a person is saying if you can trust them. Usually “respect” and “trust” are like “chicken” and “egg”, but with something like Truth Goggles it is possible to develop trust first and let the respect follow if it ends up being deserved.
This entire experience has given me a lot of hope about information online and the people who consume it. I’ve said before that credibility is the future of journalism, and I’m half tempted to expand that statement and say that credibility could save the world. I’ll probably need to run a few more tests first, though.
As for the next steps for Truth Goggles: to be determined! I’m going to at least keep exploring some of the processes and technologies behind phrase detection, but once I graduate and start my fellowship at the Boston Globe in June, I’ll need an explicit way to keep it alive. Stay tuned.
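For the curious: at its simplest, phrase detection here means matching sentences in an article against a database of previously fact-checked claims. Below is a minimal sketch of that idea in Python. The claim list, similarity threshold, and character-level fuzzy matching are all illustrative assumptions for this sketch, not how Truth Goggles actually works under the hood.

```python
import difflib

# Hypothetical stand-in for a database of fact-checked claims.
KNOWN_CLAIMS = [
    "the stimulus created millions of jobs",
    "unemployment has doubled since the recession began",
]

def detect_phrases(text, claims=KNOWN_CLAIMS, threshold=0.75):
    """Return (sentence, claim, score) triples for sentences in `text`
    that loosely match a known fact-checked claim."""
    # Naive sentence split; a real system would use proper tokenization.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    matches = []
    for sentence in sentences:
        for claim in claims:
            # ratio() gives a character-level similarity score in [0, 1].
            score = difflib.SequenceMatcher(
                None, sentence.lower(), claim.lower()
            ).ratio()
            if score >= threshold:
                matches.append((sentence, claim, score))
    return matches

article = "Supporters insist the stimulus created millions of jobs. Critics disagree."
for sentence, claim, score in detect_phrases(article):
    print(f"{score:.2f}  {sentence!r}  ~  {claim!r}")
```

The hard parts, of course, are everything this sketch waves away: paraphrase, negation, and claims that are spread across several sentences, which is exactly why the detection side is worth continuing to explore.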