The Veil of Scale
There's an old Soviet-era joke about communist notions of sharing. Two party workers, let's call them Boris and Ivan, are chatting:
Boris: If you had two houses, would you give one to your comrade?
Ivan: Of course!
Boris: If you had two cars, would you give one to your comrade?
Ivan: Without a doubt!
Boris: If you had two shirts, would you give one to your comrade?
Ivan: You're crazy, I couldn't do that!
Boris: Why not?
Ivan: I have two shirts!
There are two things going on here. One is, of course, the skin-in-the-game effect. The other is what I call the veil of scale: we choose small-and-local behaviors differently depending on how we think those behaviors will have emergent scaled consequences. The joke depends on going from large-scale to small-scale questions, surprising Ivan with a question that's real for him.

The veil of scale is about thought experiments of the form: how would you act in a situation if you didn't know the extent to which your actions were going to be scaled?

The Veil of Ignorance

The veil of scale is a naturally occurring relative of a well-known artificially constructed idea called the veil of ignorance. In its traditional form, the veil of ignorance is a device that helps you correct biases that result from knowing certain things. The classic illustration is this thought experiment: if you could choose before you were born, what sort of society would you choose to be born into if you didn't know what race or gender you'd be born as? In this case, the veil of ignorance argument suggests that you should choose a society that treats its underprivileged well, whatever "well" means.

A more practical illustration is the idea of the corporate veil. If a society wants its entrepreneurs to take risks, limiting liability for failures is potentially a good idea. One way to realize the idea ("limited liability" is not by itself a usable idea) is to hide corporate non-personhood behind a veil, allowing all the implications of limited liability to be worked out via the metaphor of corporate personhood. We pretend ignorance of the fact that corporations are just bunches of people acting together in certain ways.

The corporate veil illustrates two features of the veil of ignorance in practice. First, you can't just make up an abstract veil. That makes pretending not to know too hard, like trying not to think of an elephant.
Instead, you have to create a practical fiction in front of the veil. Otherwise you can't operationalize it. So our veil of ignorance in the case of corporations is an anthropomorphic interface. As a result of this particular form, corporations have names and identities (brands), can enjoy genuine human goodwill, lend themselves to being discussed in terms of birth, growth and death, and so forth. The veil of ignorance is, to coin a term, a painted veil.

The painted veil makes it possible to lead corporations and direct their energies in relatively simple ways, by co-opting and overloading natural human social behaviors. One cost is that we are vulnerable to emotional appeals to "save" the "life" of corporations. We respond very differently when such appeals are made in more direct, unveiled terms: "save our jobs and the lifestyles we've become used to" or "too big to fail" (which are unveiled labor and unveiled shareholder narratives respectively, and are more or less appealing than "save this great company," depending on your politics). Thanks to the painted corporate veil, we are also prone to overestimating how much power leaders actually have, since it tempts us into viewing the CEO as the personification of the company.

The second point follows from the first. A veil of ignorance in everyday use, with a practical fiction painted onto it, is an amoral device. When we engineer a particular veil of ignorance and paint it a certain way, we hope it will do more good than harm in the particular situation. Where it doesn't work, we need to override the decisions suggested by the veil. In the case of corporations, that's the idea of piercing the veil. While it's a legal idea, we use it personally every day. We might accept it when a low-level employee says s/he is helplessly bound by rules and unable to help us, but we informally pierce the veil when we ask to speak to managers and demand exceptional treatment.
We might accept a company's stock tanking and costing us a bundle, but we are less likely to accept the veil when it hides extreme environmental damage.

There are other veils of ignorance around us: the "Market," "The Law," and "Nature" are other major ones. In most cases (but not always), the fiction we project onto the veil is an anthropomorphic one. All organizational metaphors are also veils of ignorance by definition, since they provide simplified interfaces to complex realities by highlighting some aspects over others.

So a veil of ignorance is really not a very complex idea: it is a user interface, a societal equivalent of the everyday engineering idea of a functional abstraction. You hide complexities beneath a wrapper, paint on an interface based on a fiction, and hope the abstraction doesn't leak much in the practical situations it's been designed for. Where it leaks, you do some messy leak-patching. In user-experience terms, any veil of ignorance translates the complex behaviors of an unfamiliar entity into the simple behaviors of a familiar entity. It is a kind of manufactured normalcy.

The Veil of Scale

The Veil of Scale is a related idea: imagine a situation or problem that is manageable or solvable at the scale of one person, and ask what your natural problem-solving behavior would be. Now imagine that your particular ad hoc solution is going to be scaled to an arbitrary size, in an arbitrary organizational context whose nature you cannot know before it actually emerges. Does it still work under arbitrary possible scalings?

The Veil of Scale is simply the obscurity of scaled versions of everyday things, designed or not. It exists naturally, as opposed to being constructed legalistically.

Let's take a simple example, a restaurant check-splitting situation I was in recently. The scaled version is all the checks being split across the world at any given time.
This larger slice of civilizational behavior exists whether you want it to or not, and whether or not you attempt to organize it at any given scale.

Let's say you and a half-dozen friends are at dinner. To keep things simple, you offer to pay, do a rough calculation to split the check evenly, and arrive at a suggested contribution. Others pay you what you come up with. Some promise to pay later. Some pay a little less or a little more, due to the difficulty of handling change or because they guess they ordered more or less than the average. You don't keep precise tabs or police contributions. You expect to come out either a little ahead or a little short, and you assume that each individual instance of being the check-splitter will neither bankrupt you nor suddenly enrich you. You also expect that over time, with the same group of friends, it will all balance out in the end.

This is what I usually do or suggest at gatherings, and things work exactly that way when you're talking 5-6 people. With a dozen people, things get more volatile, with bigger surpluses and shortfalls and more people feeling unfairly treated. Recently, I did this with a group of 20+, came up wildly short, and was forced to call for additional contributions to make up the deficit, since it was larger than I was prepared to cover on my own.

Upon reflection, I realized why you are more likely to run deficits with informal check-splitting in larger groups: a larger fraction of people are likely to feel unfairly treated and adjust accordingly. There are multiple aspects of the transaction that don't scale well, causing both actual and perceived unfairness. My guess is that scaling fails for the following reasons:

- People ordering drinks, appetizers, and differently priced entrees in uncoordinated ways stress the "what I owe for what I ordered" assessment beyond what people think is a reasonable range. Drinks cause particular stress, since they are expensive and many don't drink, while others drink a lot.
- Small miscalculations can add up: I forgot to add in the tip before doing the division, and a $5 individual deficit added up to a $100 group deficit, which was a big chunk of the total deficit.
- Shared items get "unfairly" distributed: with 3-4 people sitting around a table, an appetizer plate is within reach for all. With a larger group, there are multiple local clusters of conversations, each of which operates as a shared-ordering unit.
- Rounding precision matters. If a split check comes out to $18/head, you're likely to get a lot of $20s and come out ahead. If it comes to $22, you're likely to end up short. This is of course a simple function of the denomination distribution of cash in typical wallets.
- With larger and more open groups, there is also less synchronization, with people arriving late (and ordering less) or leaving early (often asking a friend to cover for them). This also contributes to Item 1. It becomes harder even to tell whether somebody has just dropped by for a few minutes or has been part of the group.
- Larger groups also mean more variation in relationship histories and expectations. As a rule of thumb, you could say that two people who have been on N dinners together will expect to go on N more dinners together. So newcomers might expect to never see the group again, while old-timers might expect the group to continue forever. This affects how much slop/variation people are willing to accept. For the game theorists among you, this is iterated prisoner's dilemma with a wide distribution of expectations about how long the interactions will continue.
- Larger groups also are likely to have greater income diversity (as in the Friends episode when the 3 poor friends get upset with the 3 rich friends). A model of even check-splitting penalizes those who order frugally, expecting a more equitable check-splitting process.
- And then of course, there is the free-rider problem. The larger and more open the group, and the more varied the relationship history lengths, the more likely it is that free riders will join the party.
- Larger groups also make deferred settlement more complex, if people simply forget to pay later.
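Two of these failure modes are simple enough to put numbers on. The sketch below works through the forgotten-tip arithmetic and the denomination-rounding effect; the $550 bill and the "wallets hold only $20s" payment model are my illustrative assumptions, not the essay's.

```python
def tip_shortfall(food_total, tip_rate, n_diners):
    """Per-head and group deficit when the organizer divides the
    pre-tip total instead of the tipped total."""
    per_head = food_total * tip_rate / n_diners
    return per_head, per_head * n_diners

def cash_payment(share, bill=20):
    """Toy payment model (my assumption): wallets hold only $20 bills,
    so each diner hands over the nearest whole number of bills,
    minimum one."""
    return bill * max(1, round(share / bill))

# Forgotten 18% tip on a hypothetical $550 bill split 20 ways:
per_head, group = tip_shortfall(550, 0.18, 20)
print(round(per_head, 2), round(group, 2))  # 4.95 99.0 -- the ~$5/$100 case

# Denomination rounding: an $18 share collects $20 bills (per-head
# surplus), while a $22 share also collects $20 bills (per-head deficit):
print(cash_payment(18) - 18, cash_payment(22) - 22)  # 2 -2
```

The asymmetry in the last two lines is the whole point: rounding error doesn't average out, because everyone rounds toward the same convenient denomination.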
Behind the veil of scale, then, you can only guess at how your small-and-local behaviors will scale. Several distinct patterns of guessing suggest themselves:

1. Degeneracy: you guess that your behaviors don't scale at all beyond a point, and that other phenomena will kick in. In this case, you see the scaled regime as fundamentally disconnected from the small-and-local regime. This leads to (for instance) not voting, being mostly expedient in everyday decisions, and apathy. In the check-splitting situation, you're likely to say, "whatever."

2. Scaling by Pricelessness: you guess that all scaling will depend on scaling the values that helped solve your problems locally. You also guess that if you get the values right, the scaled solution will preserve the characteristics of the unscaled one. This leads to everything from manifestos to universalist religious proselytizing. In the check-splitting situation, you might explicitly articulate a value, as in "pay whatever you think is fair," or, as I once saw someone do, announce "I am just going to pay for lunch; if anybody wants to give me a twenty, you're welcome to."

3. Scaling by Feature Creep: you guess that all scaling will depend on adding features to the mechanisms that helped solve your problems locally. So scaling becomes the problem of having sufficient foresight to add features before they are needed. This sometimes leads to good engineered solutions and more often to awful over-engineered ones. In the check-splitting case, you might adjust the basic even-splitting approach by splitting the alcohol and food checks and doing two calculations instead of one: "if you ordered drinks, give me $20; if you didn't drink, give me $15."

4. Impossibility: you guess that the emergent impacts of your behaviors are so complex and unpredictable that there's no point thinking about them. The traditional form is religious fatalism. The modern form is complex-systems fatalism (butterfly-effect resignation, where everything is a strange attractor beyond the reach of meaningful influence). You accept the outcome but are not necessarily indifferent to it. In the check-splitting situation, you might remark on the outcome and what surprises you about it, and attempt hindsight explanations. For instance, "wow, that turned out to be more/less per head than I expected."

5. Scaling by Scaling: here you assume that problems simply change character as they scale and must be solved anew at each scale (of both size and time), by actually trying to scale them through progressively more complex and long-lasting regimes. It is seemingly the most reasonable approach, except that there is no guarantee that complexity will increase smoothly with scale. Sometimes it is easy to solve the N=10 and N=1000 cases, but nearly impossible to solve the N=100 case. Sometimes it is easier to build very transient and very enduring things than to build things with a design lifespan somewhere in between. In the check-splitting situation, if the group is larger than the largest one you've coordinated before, you might try to build on the old solution in better ways than just adding features. You might even remove features.

6. Scaling by First Principles: here you simply pose and solve the problem at a specific scale as best you can, assuming that it has no correlation to lower or higher scales. In the check-splitting situation, you might make up entirely new ideas, for example deciding that a certain group size merits pre-catering and a fixed cover charge for joining the group.

7. Assumption of Evil: here you expect that whatever the form of the scaled solution, it will be evil simply by virtue of being a scaled solution. This is a hugely popular guess right now. One response to this guess is to actively work against scaling, attempting to preserve anarchy at higher scales. This unfortunately does not work, since nature is full of things-that-scale whether we want them to or not. Preventing scaled structures from emerging, ironically, requires scaled effort. In the check-splitting situation, you might decide to limit the party size to the known capabilities of the best check-splitting method you know, and decide that larger groups are actually bad because they require more complex methods.
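The feature-creep and first-principles responses lend themselves to quick sketches. Below, the two-tier rates ($20 with drinks, $15 without) are the essay's own example; the pre-catering cover charge, including the 10% buffer against no-shows, is my hypothetical illustration.

```python
def two_tier_split(ordered_drinks):
    """Feature creep: the essay's two-rate rule --
    $20 if you ordered drinks, $15 if you didn't."""
    return 20 if ordered_drinks else 15

def cover_charge(n_guests, catering_cost, buffer=1.10):
    """First principles: pre-cater and charge a fixed cover.
    The 10% buffer against no-shows is my assumption."""
    return round(catering_cost * buffer / n_guests, 2)

print(two_tier_split(True), two_tier_split(False))  # 20 15
print(cover_charge(25, 500.0))                      # 22.0
```

Note the trade-off: the first function adds a feature to the existing mechanism and inherits its assumptions, while the second abandons the splitting mechanism entirely and re-poses the problem at the new scale.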
8 Comments
Venkat, I love your nonstandard analyses to death, but don't you think this claim warrants a bit more than a bald assertion: "The restaurant check-splitting problem, when scaled sufficiently, is actually the problem of managing an entire macroeconomy. But the interesting thing is that you actually get to nearly every key feature of an entire country’s economy with a group as small as 20-30."
What's the analogue for economic growth or inflation or monetary policy in a 20-30 person check splitting group? You might say that's just one feature but doesn't that illustrate how the 20-30 person check splitting scenario operates parallel to the lump of labor fallacy (i.e. static supply of stuff to be distributed)?
I had a fairly detailed mapping worked out. Monetary policy shows up as the goodwill (or lack thereof) of other groups/individuals dining in the same location, which can result in people appreciating you for creating a lively atmosphere or asking management that you be thrown out. Of course, this means the bond market is all goodwill and the domestic economy is cash, and it's not clear how much more the group would be willing to spend on the basis of increased external goodwill. A more direct cash monetary policy (rare these days) is bar owners extending credit to regulars and allowing them to run up a tab that they may have difficulty paying for.
Inflation does happen. Defined as too much money chasing too few things, it describes tourist trap pricing strategy well, especially ones that cater to tourist groups from richer regions.
Of course, to get to some of these extended effects, you have to model the boundary conditions more clearly: restaurant owner, other diners, waitstaff...
Just because you can construct a mapping does not mean it's isomorphic.
http://bit.ly/1v1uyCw
I think this could be an example of pattern #3, but if you expand the context, it could be a #2, the priceless value being democracy.
When you say:
"When we engineer a particular veil of ignorance and paint it a certain way, we hope it will do more harm than good in the particular situation. "
Did you mean "more good than harm"?
Fixed, thanks. Sociopath Freudian slip :)
Seems reminiscent of the categorical imperative
When engineers split the check but don't want to let it get unduly difficult, do they ever invoke the value of time? I mean saying something like "that last dollar takes too long to figure out, let's just split it."