The Global Brain Algorithm
About
I Introduction
1 Rationale
2 The Problem with Conventional Discussion Threads
2.1 A Novel UI to Navigate Discussion Threads
3 Key Concepts and Assumptions
3.1 Establishing Causality
3.2 Distributed Reasoning
Example: Did an Earthquake Just Happen?
3.3 Optimizing for Information Value
3.4 Reducing Cognitive Dissonance
3.5 Cognitive Dissonance as Relative Entropy
3.6 The Causal Model
II Identifying Top Replies
4 Driving Informative Discussions
4.1 An Informal Argument Model
4.2 What is the “Best” Reply?
5 Modeling Upvote Probability
5.1 The Beta-Binomial Model
Example
5.2 Naive Point Estimate
5.3 The Bayesian Average
5.4 The Bayesian Hierarchical Model
5.4.1 Approximating the Mean Upvote Probability
6 The Informed Upvote Probability
6.1 Definition of Informed Upvote Probability
6.2 Extrapolating Informed Upvote Probability
6.3 Uninformed, Informed, and Overall Tallies
6.4 Adding Informed Probability to the Hierarchical Model
6.5 The Reversion Parameter
6.6 Full Model
6.7 Calculating the Informed/Uninformed Upvote Probabilities
7 The Fully-Informed Upvote Probability
III Concepts
8 Cognitive Dissonance
8.1 Key Concepts from Information Theory
8.2 Surprisal as a Measure of Error
8.3 Total Cross Entropy
8.4 Total Relative Entropy = Cognitive Dissonance
8.5 Detailed Example
8.6 Discussion
8.6.1 Parallel to Machine Learning
8.6.2 Subtle Point 1
8.6.3 Subtle Point 2
9 Information Value
9.1 The Information Value of a Vote
9.2 Information Value of Changed Votes
9.3 Information Value of New Votes
9.4 What Does a Vote Mean?
9.5 Example 1: A Storm in Madrid
9.6 Example 2: A Typhoon in Oslo
9.7 Desired Properties of Information Value Formula
9.8 Upvote-Only Relative Entropy
9.9 Example Charts
Appendix
A Primer on Information Theory I
A.1 Intuitions about Key Concepts
A.2 Surprisal
What about the Base of the Logarithm?
A.3 Entropy
References