Hello readers,
I know it's been a while since I have posted anything on here. I have been, for lack of a better term, preoccupied. If you didn't know, I am currently located on a different parallel than my last posts. But this is beside the point. Today's topic is Shannon entropy, a concept from information theory.
I have had the pleasure of reading Robert J. Sawyer's book, Wake. It is a great science fiction book. If you have the chance to read it, do yourself a favour and do it! In the book they talk about this phenomenon, so I decided to look into it further.
As some of you may know, entropy is the measurement of chaos or disorder. This could be the arrangement of molecules, or any other area where disorder can show up (there isn't a standard for the chaos on your desk). There can even be disorder observed in the tossing of a coin!
A fair coin has one side heads and one side tails. If we exclude the possibility of the coin landing on its side (neither heads nor tails), then the probability of the coin landing on either side is 50%; that is, there is a 50% chance of heads and a 50% chance of tails. This maps neatly onto a binary system, a system of 1's and 0's. In this example, the entropy is one bit per toss: because the coin is fair, both outcomes are equally likely, and no single toss can be predicted any better than a guess.
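If you want to put a number on that, Shannon's formula works out to H = -(0.5·log2 0.5 + 0.5·log2 0.5) = 1 bit for a fair coin. Here is a minimal Python sketch of that calculation; the name shannon_entropy is just my own label for the function, not anything standard.

```python
import math

def shannon_entropy(probabilities):
    # H = -sum(p * log2(p)), summed over outcomes with non-zero probability
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(shannon_entropy([0.5, 0.5]))  # fair coin: 1.0 bit per toss
print(shannon_entropy([0.9, 0.1]))  # heavily biased coin: about 0.47 bits, much more predictable
```

Notice that the biased coin scores lower: the more predictable the outcome, the less entropy (and the less information) each toss carries.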
As we all know, our computers transmit data via the binary system; everything you see and hear is the result of many 1's and 0's. So, Claude Shannon (1916 - 2001) proposed a mathematical theory of communication in which entropy measures how hard it is to predict the next event in a chain. For example, predicting the next letter in a word (think Wheel of Fortune with half the letters already filled in). Shannon estimated the entropy of English at roughly 1 to 1.5 bits per letter. That is, there is some randomness, though English follows rules, such as 'u' coming after 'q' on most occasions, or 'e' being the most common letter. When we look at entire words, the entropy rises to between 10 and 11 bits per word. That is, the next word in a sentence is significantly harder to predict than the next letter in a series of letters.
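You can get a rough feel for this with a little Python; this sketch only looks at single-letter frequencies, which for long English texts gives about 4 bits per letter, and it is only by using longer context (the preceding letters and words) that Shannon's estimate drops to the 1 to 1.5 bits quoted above.

```python
import math
from collections import Counter

def letter_entropy(text):
    # Bits per letter, estimated from single-letter frequencies only
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

sample = "the quick brown fox jumps over the lazy dog"
print(letter_entropy(sample))  # about 4.5 bits per letter for this short sample
```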
This plays into information theory well because we as humans are always looking for ways to save space. We seem to do well, what with our chat rooms: we have broken common phrases down quite a bit, often saying ttyl (talk to you later) or brb (be right back), thus sending smaller messages. But the theory says that you cannot compress all messages. Some messages must stay the same size or get bigger in order to convey their meaning. For example, when I have to write an essay for a liberal arts course, I start off with an outline of what I want to write about, and there is no way I can compress the finished essay down to that outline without losing what it actually says. Thus, not all information can be compressed; some of it must be spelled out in full.
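One way to convince yourself of this is to hand a compressor something redundant and something random. This is just a rough sketch using Python's built-in zlib; the exact byte counts will vary, but the pattern holds.

```python
import os
import zlib

repetitive = b"talk to you later " * 50  # 900 bytes of one repeated phrase
random_data = os.urandom(900)            # 900 bytes of high-entropy noise

print(len(zlib.compress(repetitive)))    # a few dozen bytes: the redundancy squeezes right out
print(len(zlib.compress(random_data)))   # roughly 900 bytes or a little more: nothing to squeeze
```

The low-entropy message shrinks dramatically, while the high-entropy one stays the same size or even grows slightly, which is exactly what Shannon's theory predicts.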
So that is Shannon entropy. If you want to see the predictability for yourself, you just need to read a book. Before you turn the page, predict the next word. You'll be surprised how often you are right or wrong!