Battling misinformation is not about providing the best facts. That this approach is ineffective became clear, at the latest, after the recent elections all over the world.
Yet this is mostly how classic approaches to fact-checking and countering misinformation work: provide facts and try to distribute them widely.
Whether by building a fact-checking algorithm (Factmata), showing related content (Facebook) or fact-checking statements and content (Snopes), those efforts try to establish their own view of the facts, relying on top-down models of deciding what is true and what is not.
Having such quality markers is important, but several problems make them either not scalable or ineffective:
- They’re not scalable: to be effective, you need quality and context data for the majority of content people consume.
- They’re not complete: a singular view is inherently incomplete, and hard to accept for people who do not share the same bias.
- They’re not precise: creating factually and contextually correct information requires a lot of manual effort; it is very hard, if not impossible, to do with algorithms and machines.
- They lack comparability: it is easy to mislead, or be misled by, information without alternative views. All of the quality markers above are consumed in isolation.
Truth is emergent. It’s born from warm data, not hard facts.
Sustainable, shared or “objective” truth emerges from comparing many different subjective perspectives and from the ensuing dialogue between people.
That’s also how scientific facts emerged historically. It was not a single person deciding on those facts. It was a process in which experiments were shared, compared, scrutinised, repeated, confirmed and then established as the most likely facts. New discoveries along the way change the understanding of previously accepted facts. Of course that process is not perfect, but hard, essentially unchangeable facts do emerge from it, such as the fact that the Earth is a sphere. However, it does not matter what the hard fact is if a person subjectively does not believe in it, or in its sender.
Context is Queen: A system to increase shared truths
It’s not about creating an algorithm or a system that produces only perfectly “correct” information and reaches everyone. Besides being authoritarian, it would be an impossible task.
We see the only scalable and sustainable way to decrease the effects of misinformation in giving people the ability to share their subjective view on content, facts and events, to contextualise information, and to access and effectively compare the perspectives of others.
It’s about creating a mental space for shared truths to emerge, and letting people accept new truths on their own terms, even if their views are initially still objectively false. This approach will be a long and iterative process toward solving many misinformation and polarisation problems, but one that is potentially sustainable and scalable.
How would we get there? Solving a human problem.
Online research is time-consuming for individuals and groups.
When consuming content online, we always get just one view of the subject. Researching and comparing multiple perspectives takes time, or is sometimes impossible. Thus it is easy to be misinformed.
But we all do some level of online research and fact-checking on a daily basis. By doing so we gain valuable knowledge about the usefulness of content and find background information. We document the insights gained along the way in the form of notes and comments.
This nuanced knowledge is hard to document, and ultimately to share, because it is scattered and disconnected across all the applications we use. We have all had hundreds of open tabs, bookmarks we never visit again, good content lost in a social thread, or notes scattered across many different apps. So if we can’t even keep this knowledge for ourselves, how could we share it?
Our solution: Memex
Instead of trying to figure out what is true and what is not, our approach is to build software that allows people to effortlessly share their subjective view of the things they see online, and to access, collaborate on and compare the views of others. Right now there is no software that lets you compare what people from different bubbles have read when searching for the same terms, or see the notes and background information different people found for the same article. This can be a powerful antidote to the formation of bubbles.
The basis for sharing this knowledge is Memex. It’s an open-source tool for people to organise and document their online research for their personal benefit. The data produced as a byproduct can then be used to reduce the work of sharing insights.
Memex Single User features:
- Full-text and associative search across web history and bookmarks (e.g. the article I read on nytimes.com that I liked on Twitter, with the words “Climate Change” in the text)
- Web annotations and notes
- Organise with tags, favourites and collections
- Full privacy: all data is stored locally and sync is end-to-end encrypted.
- [Very Soon] apps for iOS and Android and sync between devices
- [Later] (Auto-)save and search content from Slack, Telegram, Email, Twitter, Facebook
- [Later] Tracking browsing paths to connect related content
- [Later] Archive web pages for offline reading
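To make the “associative search” idea concrete, here is a minimal sketch of searching saved pages by full text plus contextual filters (source domain, where the page was liked). All names and fields are illustrative assumptions, not Memex’s actual schema or API:

```python
# Minimal sketch of associative full-text search over saved pages.
# The data shape (url, text, liked_on) is a hypothetical example,
# not Memex's real data model.

def tokenize(text):
    return [w.strip('.,!?').lower() for w in text.split()]

class PageIndex:
    def __init__(self):
        self.pages = []

    def add(self, url, text, liked_on=()):
        self.pages.append({"url": url, "text": text, "liked_on": set(liked_on)})

    def search(self, query, domain=None, liked_on=None):
        """Match all query terms in the page text, optionally filtered by
        source domain and by the service where the page was liked."""
        terms = tokenize(query)
        results = []
        for page in self.pages:
            words = set(tokenize(page["text"]))
            if not all(t in words for t in terms):
                continue
            if domain and domain not in page["url"]:
                continue
            if liked_on and liked_on not in page["liked_on"]:
                continue
            results.append(page["url"])
        return results

index = PageIndex()
index.add("https://www.nytimes.com/climate-article",
          "New report on climate change impacts", liked_on={"twitter"})
index.add("https://example.com/other",
          "Unrelated climate change blog post")

# "The article I read on nytimes.com, that I liked on Twitter,
#  with the words 'climate change' in the text":
print(index.search("climate change", domain="nytimes.com", liked_on="twitter"))
```

The point of the sketch is that the query combines free text with personal context (where I read it, where I liked it), which is exactly the kind of association a plain search engine cannot reproduce.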
Memex Collaboration Features 
- Share collections of websites, papers, annotations, and annotated versions of content
- Follow collections of others
- Discussion threads about annotations and notes
- Co-curating collections with teams
- Do web searches inside the collections of others, and compare results/notes.
- [Later] Custom, subjective quality ratings based on interaction data from full-text search
- [Later] Custom fields of data attached to content (e.g. a bias meter for journalists, a pH value for chemists)
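The last two items boil down to letting each user or group attach their own structured metadata to content instead of relying on one central rating. A minimal sketch of what that could look like, with all field names and the class itself being hypothetical illustrations:

```python
# Illustrative sketch: user-defined custom fields attached to saved
# content. The class and field names are assumptions for illustration,
# not Memex's actual API.

class ContentItem:
    def __init__(self, url):
        self.url = url
        self.custom_fields = {}

    def set_field(self, name, value):
        self.custom_fields[name] = value

article = ContentItem("https://example.com/news-piece")
article.set_field("bias_rating", 0.3)  # a journalist's subjective bias meter
article.set_field("ph_value", 7.4)     # a chemist annotating lab-related content

print(article.custom_fields)
```

Because every field is owned by the person or group who defines it, ratings stay subjective and comparable side by side, rather than being imposed by a single central authority.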
A network of Memexes to create thought diversity
Memex’s purpose is to capture your individual workflows as faithfully as possible, to reduce the friction of sharing your personal truth and views. However, every person has different workflows, so we won’t reach the necessary diversity by trying to build a one-size-fits-all tool. Instead, there is a need for thousands of different knowledge tools that can communicate via the same data protocol: like email, an open protocol served by many clients, not like Evernote, a closed app.
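To illustrate what such a shared data protocol could carry, here is one plausible shape for an interoperable annotation record, loosely inspired by the W3C Web Annotation Data Model. This is a sketch of the idea, not Memex’s actual format:

```python
# A hypothetical interoperable annotation record, loosely inspired by
# the W3C Web Annotation Data Model. Any tool speaking the same schema
# could read it, just as any mail client can read a standard email.
import json

annotation = {
    "type": "Annotation",
    "creator": "user@example.org",
    "target": {
        "source": "https://example.com/article",
        "selector": {"type": "TextQuoteSelector", "exact": "the quoted passage"},
    },
    "body": {"type": "TextualBody", "value": "My note with context and sources."},
}

# Round-trip through plain JSON: nothing tool-specific survives,
# which is what makes the record portable between knowledge tools.
serialized = json.dumps(annotation)
restored = json.loads(serialized)
print(restored["target"]["source"])
```

The design point is that the value lives in the open schema, not in any one application: a thousand different tools can produce and consume the same records, the way email clients interoperate over one protocol.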
For more info, read our other post: “Why WorldBrain.io does not take Venture Capital”
Countering the economics of misinformation
When a company like Facebook or Twitter rewards investors through the growth of its share value, the only thing that matters is growth, so it becomes vital to keep people and their attention locked into the service.
Therefore the algorithms always favour emotional, low-quality information that keeps people engaged and reinforces echo chambers. Those algorithms are a powerful ally for those who try to manipulate people.
Memex’s business model is built on providing you services, not on capitalising on your attention. Our steward-ownership economics allow for a more open data ecosystem and interoperability. This opens the possibility of features like custom algorithms for search, discovery and news feeds, which are harder to manipulate because there is no single algorithm to game.
Additionally, there is no single platform with a central algorithm, as with Facebook, where people, trolls or bots can influence people they have no explicit trust relationship with (in the form of “I follow you” or “I share my content with you”). Sharing content happens entirely between people and groups.