Sponsor

Massachusetts Institute of Technology (MIT)

Team

Sarah Lukachko, Allie Miller, Nick Sireci, Sandra Romero

Challenge

In this project, a team of four dealt with the abstract concept of a 'learning' app.

Part of the requirements was to assume our app would be the front end of a decision-making engine created by MIT, called Justify.

When dealing with complex subjects, humans effortlessly remember only the last two or three items in a sequence: the last three ingredients of a recipe, say, or the last three points of an argument. Anything beyond that requires deliberate cognitive retrieval and prioritization of the extra information. Our app aimed to bridge this gap.

For our app, we picked the scenario of two people having a debate. To make it interesting, we decided the topic could be any current political or social event that the two people would choose to debate based on their personal beliefs and principles.

The app would pair two individuals from a pool of willing participants, or two people could agree to pair with each other. We had long brainstorming sessions: the visualization of two people agreeing or disagreeing took different shapes, but none was easy to explain.

I suggested showing two circles drifting apart or closer together as argument points were added to our interface, an abstract picture of what two people might experience if they had the argument in person.
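To make that idea concrete, here is a minimal sketch of how the circle positions might be driven by the debate. Everything in it is an assumption: the names, the constants, and the idea of using each side's accumulated point score (the weighting behind those scores is described further down).

```typescript
// Minimal sketch: position two circles so they drift apart as the
// two sides' accumulated scores diverge. All names and constants
// are hypothetical; the real interface stayed a design concept.

interface CirclePositions {
  leftX: number;  // center x of the first debater's circle
  rightX: number; // center x of the second debater's circle
}

const CANVAS_WIDTH = 800;
const MIN_GAP = 80;  // circles almost touch when the sides agree
const MAX_GAP = 640; // cap so strong disagreement still fits on screen

function circlePositions(scoreA: number, scoreB: number): CirclePositions {
  // Treat the gap between the sides' scores as "disagreement" and
  // squash it into [0, 1]; the divisor 20 is an arbitrary scale.
  const disagreement = Math.min(Math.abs(scoreA - scoreB) / 20, 1);
  const gap = MIN_GAP + disagreement * (MAX_GAP - MIN_GAP);
  const center = CANVAS_WIDTH / 2;
  return { leftX: center - gap / 2, rightX: center + gap / 2 };
}
```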

So an argument would start by choosing a topic and optionally indicating which principles personally mattered the most; then people would start dragging supporting points onto each side of the argument.

argument.intro.jpg
argument.intro.2.jpg
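The data behind a screen like this could be sketched as a handful of types. This is a minimal sketch; every type and field name is hypothetical, since the project never went past concept and wireframes.

```typescript
// Hypothetical data model for one debate session.

type Side = 'A' | 'B';

interface Principle {
  name: string;   // e.g. "personal freedom"
  weight: number; // how much this principle matters to this debater
}

interface ArgumentPoint {
  side: Side;           // which debater dragged this point in
  text: string;
  factChecked: boolean; // set once the fact-check lookup resolves
  score: number;        // weighted value; see the scoring sketch below
}

interface Debate {
  topic: string;
  principles: { A: Principle[]; B: Principle[] };
  points: ArgumentPoint[];
}
```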

By the time people had added enough argument points, some of them backed by factual information checked by the app (fact-checking) and weighted by each person's most important principles, the visualization of an argument might look like this:

debate.in.progress.jpg

To support arguments with more validity than others, we set up a structure of weights and criteria that prioritized arguments we found to be backed by facts (from our fictitious library), an approach also supported by the Justify engine.

The system raises a point's value if it is verified by the Vote Smart API (our fictitious library of facts); conversely, the value is lowered if the point is successfully refuted.

Possible weight structure:

  • API unbiased: +4 points
  • Known fact: +3 points
  • Individual expert: +2 points
  • Community: +3 points
  • Other trusted source: +2 points
  • Successfully refuted: -3 points
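As a sketch of how those weights could be applied, scoring a single argument point might look like this. The numbers are the ones listed above; the function and type names are hypothetical.

```typescript
// Hypothetical scoring of one argument point using the weights above.

type SourceKind =
  | 'api-unbiased'      // verified through the (fictitious) Vote Smart API
  | 'known-fact'
  | 'individual-expert'
  | 'community'
  | 'other-trusted';

const SOURCE_WEIGHTS: Record<SourceKind, number> = {
  'api-unbiased': 4,
  'known-fact': 3,
  'individual-expert': 2,
  'community': 3,
  'other-trusted': 2,
};

const REFUTED_PENALTY = -3;

function scorePoint(sources: SourceKind[], refuted: boolean): number {
  // Sum the weight of every source backing the point...
  const base = sources.reduce((sum, s) => sum + SOURCE_WEIGHTS[s], 0);
  // ...and apply the penalty if the point was successfully refuted.
  return base + (refuted ? REFUTED_PENALTY : 0);
}

// Example: a point backed by a known fact and a community source,
// later refuted: 3 + 3 - 3 = 3.
console.log(scorePoint(['known-fact', 'community'], true)); // 3
```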

But all of this needed to make sense in the Justify back end. It was my personal task to ensure our concept mapped onto the Justify way of processing information, so that in our final presentation we could defend it as, in theory, a viable application of the MIT Justify decision-making tool. The Justify language was not easy to navigate, and I communicated personally with the sponsoring MIT professor. At first, the tool seemed to match only half of our concept, but I was determined to understand what was missing, and I iterated until we got it!

justify.screen.jpg

Here are two more samples of how we demonstrated that our app matched the MIT Justify engine:

side.to.side.pic.2.jpg
side.to.side.pic.1.jpg

Conclusion

Although this project was done over two years ago, the concept found a real application during the 2016 United States election: several news outlets offered fact-checking engines where people could go to see how true the claims they were hearing actually were.