One Way Door Decisions

30 Mar 2024

Dan Shipper wrote an interesting post on how to use ChatGPT or Claude to “simulate” one-way door decisions: essentially, to feed in journal entries, identify issues, and then write a “draft” journal entry as if the decision had been made in a specific way.

It doesn’t tell you which way the decision should be made, but it’s an interesting approach to showing you how you might feel having made the decision. Yet, on reading the article, I felt almost queasy.

On the one hand, this approach isn’t very far off one that I’ve been known to use: if a decision isn’t very clear (and perhaps isn’t very critical), just toss a coin. If, when the result comes up, you feel a sense of “Rats! I wish it had gone the other way” - that’s an indicator that, intuitively, you know reasons to make the other choice. You’re not committed to the coin toss (it’s not a vow), so either (a) make the decision the other way or (b) at least investigate why you might feel that way. Reading two ChatGPT “written as if” journal entries - one for decision (A) and one for decision (B) - can help crystallize in your mind what you are thinking. And that can be helpful.

On the other hand, our current thinking and intuition are always limited pieces of information on which to base decisions. Judging the “correct” decision is often exceptionally difficult - especially since in many cases there may not be an obvious “correct” decision at all.

Even trying to ascertain in hindsight whether a decision was “correct” is difficult. What I see of the results and how I feel about them in the hours and days after a major decision might not be how I feel about them weeks, months, or years down the road. Every decision - reversible or not - has ripple effects. The decisions made in the wake of a decision likewise have ripple effects. And how we feel about those ripples will differ across days, months, and years.

Every decision has an execution cost and an opportunity cost. Just because a decision is costly and difficult doesn’t necessarily make it the wrong one. Easy decisions may be the lazy road. A decision may on the surface look like a very bad one, and yet in the long run may send our lives down a path of development, formation, and even new opportunities that we couldn’t have otherwise expected. God promises to “work all things together for good.” As Lewis once noted, “we have no doubt about God’s desire to do good, we just wonder how painful that good will be.” I suppose sometimes God sends us through costly routes in order to form our inward spirit.

Elijah is an interesting example. He prayed, and there was no rain for years. (I wonder how he felt about that? Was that a good path, in his view?) He came back and won an enormous victory over the prophets of Baal - and then prayed, and there was rain. On the surface, this seems like a tremendous triumph. But less than 24 hours later he was on the run for his life. I cannot imagine how he felt about all of it.

We romanticize “taking the road less traveled” and the like - but even when we look back on something, we can’t know what the other road would have been like. We have no way in our humanness to know. Obviously, Christians - like myself - also incorporate listening to the Spirit into decision-making. But even this is not necessarily a silver bullet. We will not always be able to look back and say “that’s why the Spirit led me that way.” We just have to trust God’s care and power, and follow as best we can.

Back to the question of AI - I have found AI to be tremendously useful in many ways, especially for brainstorming options and thinking up questions that ought to be asked about an issue. But using it to predict how I might feel about a decision as a way of informing the right decision … doesn’t feel right.