Question
Why does a deck of cards, fresh out of the box, have a lower entropy than the same deck after it is shuffled?
Answer
It is more useful to think of this in terms of Kolmogorov complexity, which finesses the problem Anon User points out: defining macrostates is ambiguous. The question to ask is: what is the shortest computer program that could print out the cards in a given sequence? Shorter programs mean lower "entropy" (in a sense related to the physics sense, though the details of the connection are subtle and frankly I don't understand them well myself).
For a fresh pack, ordered by suit and rank, you could write a simple nested-loop program, here in Python:

```python
suits = ["hearts", "diamonds", "spades", "clubs"]
for suit in suits:
    print(suit, "Ace")
    for i in range(2, 11):  # the nine numbered cards, 2 through 10
        print(suit, i)
    print(suit, "Jack")
    print(suit, "Queen")
    print(suit, "King")
```
For an arbitrarily shuffled deck, there is no such structure to exploit. To print the order of the cards, you'd have to define a data structure containing the entire 52-element sequence and loop over it, printing one card at a time. That program would contain more bits than the fresh-deck program, because the bulk of it is the literal sequence itself (something like seq=[3H 2C KD 5H....]).
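A minimal sketch of that literal-data program (the particular cards shown are hypothetical, and the list is truncated for space; a complete version must spell out all 52 entries):

```python
# Unlike the fresh-deck program, this one has no structure to exploit:
# the order itself is the data.  (Hypothetical order, truncated here;
# a real program must embed all 52 entries.)
seq = ["3H", "2C", "KD", "5H", "9S", "QC", "7D", "AS"]  # ...and 44 more
for card in seq:
    print(card)
```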
The length difference may not be apparent with this example of 52 cards in 4 suits, with each suit containing 9 numbered cards and 4 exception cards.
If you had a deck containing a thousand or a million cards, the difference would become much more visible, assuming, of course, that the larger deck had a similar structure. More specifically, with n suits, m cards per suit, and k "special" cards per suit, the structured program's length is roughly of order n + k plus a constant, whereas the literal-list program for a fully shuffled deck scales as n*m.
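Kolmogorov complexity itself is uncomputable, but any off-the-shelf compressor gives a computable upper bound on description length, so the scaling is easy to probe experimentally. A sketch using a hypothetical 4,000-card deck (4 suits, 1,000 ranks each), with zlib standing in for "shortest program":

```python
import random
import zlib

# Hypothetical large deck: 4 suits ("HDSC"), 1000 ranks per suit.
fresh = " ".join(f"{s}{r}" for s in "HDSC" for r in range(1, 1001))

cards = fresh.split()
random.Random(0).shuffle(cards)      # fix one arbitrary shuffled order
shuffled = " ".join(cards)

# Compressed size is a computable upper bound on description length:
# the compressor can model the ordered listing, but not the shuffle.
fresh_len = len(zlib.compress(fresh.encode(), 9))
shuffled_len = len(zlib.compress(shuffled.encode(), 9))
print(fresh_len, shuffled_len)
```

The ordered listing should compress substantially better, and the gap widens as the deck grows, which is exactly the n + k versus n*m scaling above.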
Don't let the specifics of the programming language etc. confuse you. Switching languages only changes both program lengths by bounded constant overheads (in the formal theory, this is the invariance theorem), so the comparison survives.
The key insight here is that Kolmogorov complexity measures the internal structure of a string of bits in terms of how well we can model it. It is weirdly subjective in a sense (unlike Shannon entropy, which can be estimated mechanically with statistical techniques): the complexity you can demonstrate depends on how smart you are and on the shortest program you manage to write. But you could argue, philosophically, that there is an "objective" complexity out there that limits how clever any program can be at generating a given output.
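A concrete case of that subjectivity: if a "shuffled" order secretly came from a pseudorandom generator, a programmer who notices that can replace the 52-element literal with a few lines that regenerate the order from the seed. A sketch (seed 7 is an arbitrary choice for illustration):

```python
import random

RANKS = ["A"] + [str(i) for i in range(2, 11)] + ["J", "Q", "K"]
deck = [r + s for s in "HDSC" for r in RANKS]

# The "dumb" description: one particular shuffled order, which in
# general must be carried as 52 literal entries.
literal = random.Random(7).sample(deck, len(deck))

# The "smart" description: spot that the order came from seed 7 and
# regenerate it -- same output, far less embedded data.
regenerated = random.Random(7).sample(deck, len(deck))
print(regenerated == literal)
```

The string hasn't changed, but the shortest program you can exhibit for it shrinks the moment you find a better model of it.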
Turns out, this objective complexity limit is uncomputable...
If this stuff interests you, read Gregory Chaitin's entertaining book, Meta Math.