In today's post, I am looking at the Maximum Entropy principle, a brainchild of the eminent physicist E. T. Jaynes. The idea is based on Claude Shannon's Information Theory. The Maximum Entropy principle (an extension of the Principle of Insufficient Reason) is the ideal epistemic stance: loosely put, we should model only what is known, and we should assign maximum uncertainty to what is unknown.

To explain this further, let's look at the example of a coin toss. If we don't know anything about the coin, our prior assumption should be that heads and tails are equally likely. If we instead assumed that the coin was loaded, we would be "loading" our model with assumptions and claiming a certainty we don't have.

Entropy is a measure proposed by Claude Shannon as part of his information theory. Low entropy messages have low information content, or low surprise; high entropy messages have high information content, or high surprise. The information content of an individual event falls as its probability rises: Shannon measured it as −log₂ Pr(event), so low probability events carry high information content. For example, an unlikely defeat of a reigning sports team generates more surprise than a likely win.

Entropy is the average level of information when we consider all of the probabilities: H = −Σᵢ pᵢ log₂ pᵢ. In the case of the coin toss, the entropy is the average level of information over the probabilities of heads and tails. For discrete events, the entropy is maximum when all events are equally likely, in other words for the uniform distribution. Thus, when we say that the probability of heads or tails is 0.5, we are assuming a maximum entropy model. For the uniform distribution, the maximum entropy model also coincides with Laplace's principle of insufficient reason.

If the coin always landed on heads, we would have a zero entropy case, because each toss brings no new information. If it is a loaded coin that makes one side more likely to occur, then the entropy is lower than for a fair coin. This is shown in the plot below, where the X-axis is the probability of heads and the Y-axis is the information entropy. We can see that Pr(heads) = 0 (never heads) and Pr(heads) = 1 (always heads) both have zero entropy, and the highest value of entropy occurs when the probability of heads is 0.5, or 50%.

For those who are interested, John von Neumann had a great idea for making a loaded coin fair.
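Before getting to that trick, here is a minimal Python sketch of the entropy curve just described; the function name binary_entropy is mine, and the code simply evaluates Shannon's formula at a few values of Pr(heads):

```python
import math

def binary_entropy(p):
    """Shannon entropy (in bits) of a coin with Pr(heads) = p."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome carries no information
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in [0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0]:
    print(f"Pr(heads) = {p:.1f}  ->  H = {binary_entropy(p):.3f} bits")
```

Running this shows entropy climbing from 0 bits at Pr(heads) = 0 up to its maximum of 1 bit at Pr(heads) = 0.5, then falling back to 0 at Pr(heads) = 1, exactly the curve in the plot.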
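As for von Neumann's trick itself: flip the loaded coin twice; if the pair comes up heads-tails call the result heads, if tails-heads call it tails, and if the two flips match, discard them and flip again. Since Pr(HT) = Pr(TH) = p(1−p) whatever the bias p, the two accepted outcomes are equally likely. A minimal sketch, with biased_flip and fair_flip as hypothetical helper names and an arbitrary bias of 0.7:

```python
import random

def biased_flip(p_heads=0.7):
    """A loaded coin: returns 'H' with probability p_heads."""
    return 'H' if random.random() < p_heads else 'T'

def fair_flip(p_heads=0.7):
    """Von Neumann's trick: flip twice; HT -> heads, TH -> tails,
    HH or TT -> discard the pair and try again."""
    while True:
        a, b = biased_flip(p_heads), biased_flip(p_heads)
        if a != b:
            return a  # 'H' for HT, 'T' for TH; each occurs with prob p(1-p)

flips = [fair_flip() for _ in range(100_000)]
print("fraction of heads:", flips.count('H') / len(flips))  # close to 0.5
```

The price of fairness is throwing away matched pairs, so the more biased the coin, the more flips the procedure consumes per fair bit.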