Achievement Unlocked: Thesis

Remind me to never do that again.

On Friday I officially handed in my thesis, titled “Truth Goggles: Automatic Incorporation of Context and Primary Source for a Critical Media Experience.” For those who don’t know already, it was about an automated bullshit detector for the Internet / an interface to help people think carefully called Truth Goggles. The final version weighed in at a nice round 145 pages.

I’ll let the dust settle before putting this monstrosity online. I also want to write some more condensed posts about the interesting parts because I know nobody is ever going to read the damn thing. Those will come later. For now I give you a few bullet points.

The Gist

Here’s the basic story of the document:

  • I learned about the millions and millions of reasons why my idea could never work.
  • Not having a strong sense of self-preservation, I kept going anyway and tried to create “Truth Goggles!”
  • I worked really hard to design and implement an interface that people could value even if they didn’t trust the sources behind the tool.
  • I ran a user study and learned that the interfaces worked pretty well when it came to protecting people from misinformation, and that almost everyone who took the study really wants to be able to trust information again.

The Gems

I’ll give a quick preview of some lessons learned. Each of these points deserves a post of its own but since this isn’t my thesis I’m going to just put out my own observations and thoughts. The posts later will probably be more “scientific” and “explanatory” (i.e. “boring” and “less quotable”).

  • When people consume information they are struggling hard to maintain their identity. That’s all there is to it. There is plenty of evidence that people consume information with ideological motivations. Those motivations often cause them to accept or reject information based on how well it aligns with what they already believe. I have a theory that if you could just remind someone that there’s nothing to fear — that you aren’t trying to change who they are — you will suddenly be able to actually communicate with them.
  • Trying to tell people what to think is a losing battle. When the first round of press for Truth Goggles came out back in 2011 I paid attention to every single comment on every single report about the idea I could find. Lots of people liked it, but a lot of people were instantly dismissive due to concerns about bias. I heard their point, agreed with it, and realized what journalists saw ages ago: there is no way to create a universally respected system that also tells people what to think. I changed course and settled for a system that would remind people when to think instead. I think that is a better mission anyway.
  • Credibility breeds respect, and respect breeds open minds. Several participants in the Truth Goggles user study commented that having a credibility layer made them more willing to consider perspectives and messages that they might have normally ignored completely. Think about that for a second. It makes sense, right? It is much easier to respect what a person is saying if you can trust them. Usually “respect” and “trust” are like “chicken” and “egg”, but if you’re using something like Truth Goggles it is possible to develop trust and let the respect follow if it ends up being deserved.

This entire experience has given me a lot of hope about information online and the people who consume it. I’ve said before that credibility was the future of journalism and I’m half tempted to expand that statement to say that credibility could save the world. I’ll probably need to run a few more tests though.

As for the next steps for Truth Goggles, that is to be determined! I’m going to at least keep exploring some of the processes and technologies behind phrase detection, but once I graduate and start my fellowship at the Boston Globe in June I’ll need an explicit way to keep it alive. Stay tuned.


  • mstem

    “I have a theory that if you could just remind someone that there’s nothing to fear — that you aren’t trying to change who they are — you will suddenly be able to actually communicate with them.” There’s a study cited in the Debunk Handbook that found that affirming people’s existing identities did indeed make them more likely to accept new information.

    • Thanks for pointing that out — I am going to write about all that stuff soon, and that Nyhan/Reifler study is the motivation behind that claim (didn’t mean to make it sound like it was totally my invention, but hey, what are blogs for if not ambiguously originating theories). Personal affirmation isn’t quite the same thing, although it’s hitting very similar buttons. Unfortunately I don’t think that the mechanism they tested could ever exist in the real world in that format (you aren’t going to write a letter to yourself every time you read an article!).

      • Yup, the Nyhan/Reifler study is really interesting. It highlights all the hidden influences on reasoning. It’s been misinterpreted in some really weird ways, though; see here for example.

  • mstem

    add me to your blogroll, bro

    • Who are you? I don’t even know what an mstem is. Sounds lame.

  • imppress

    I’m wondering about a mechanism that marks questionable passages more intensely as more people find them to be inaccurate.

    The worst lies/myths might be exposed more quickly that way?

    Don’t mind me.  Just suggesting ways to make this massive project much harder.  You can thank me later.