The RAND Blog published a post about decision-making at the Paris climate talks. The author, Steven Popper, points out that the talks emphasized responding to change. Rather than taking the data we have now, deciding on the most likely future, and making a plan as if that future were the only possible one, he points out that the talks allowed for changes in our knowledge and for a future we didn’t anticipate. This flexibility is achieved through decision-making rules: if X happens, then we will do something like Y to achieve Z.
This reminded me of John Rawls’s “Justice as Fairness”, an influential piece of 20th-century political philosophy. Rawls says —forgive my ham-fisted mashing— that if people were to somehow choose the principles of their society without knowing who they would be in that society, then the resulting society would be fair. This idea should not be unfamiliar. It’s mentioned in the basic negotiation classic Getting to Yes, and it even appeared at my own family’s dining room table. When my brother and I had to split food —say, a piece of bacon— one of us would divide and the other would choose. The principle was fair, so it didn’t matter which side we ended up with.
As a scientist (and as just a regular citizen who tries to plan things), I often try to ask, “What will happen if X occurs?” Usually the answer is either “We’ll cross that bridge when we come to it” (i.e., let’s wait for more information) or “Let’s not worry about it” (i.e., I don’t believe that any other outcome but the planned one will happen).
This RAND post encouraged me to think about these questions in terms of _process_ as well as substance. Another question is “If X happens (or doesn’t), then how will we decide what to do next?”