

Attainable Utility Theory: Why Things Matter

By Alex Turner

Published on September 27th, 2019

Warning

If you haven’t read the prior posts, please do so now. This sequence can be spoiled.

A cartoon of two robots on a grassy field. A large robot looks surprised, with an exclamation mark above its head, as it looks at a smaller, smiling robot holding a pink ball. In the blue sky above, clouds spell out "Seriously."

Handwritten text asking, "When thinking about whether something impacts us, we ask: How does this change my ability to get what I want?". The question is highlighted with a purple and yellow splash. Below, it concludes: "These are the big deals. This is what affects us. This is what matters to us. This is impact."

Six oval illustrations depicting various scenarios: 1. An asteroid hurtling towards a planet. 2. A white void next to a red frowny-face. 3. The sun going supernova and ravaging Earth. 4. A robot looking at small obsidian blocks (which it considers worthless, unlike pebbles). 5. A traffic jam. 6. An alien relaxing as a distant star goes supernova.

Handwritten text: "When you think about it, how could something possibly be a big deal to you if it doesn't change your ability to get what you want? If it doesn't change your ability to get what you want, you won't care. Conversely, how could something not matter to you if it does?"

Handwritten text defines Attainable Utility (AU) as "your ability to get what you want." In the center, a black circle contains the text "This is the attainable utility theory of impact." Below, it says, "Correct theories make correct predictions, so let's take AU theory out for a spin."
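(If you want this cashed out in symbols, here is a minimal sketch, with assumptions the cartoons never make explicit: a utility function $u$ scoring states by how much you want them, and a time discount $\gamma$. Attainable utility is then an optimal value: the best you can expect to do from where you stand.)

$$
\mathrm{AU}(s) \;=\; \max_{\pi}\; \mathbb{E}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, u(s_t) \;\middle|\; s_0 = s,\ \pi \right]
$$

On this reading, impact is the change in $\mathrm{AU}(s)$ when something happens or when you learn something new.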

Handwritten heading "Locality." Below, it reads, "AU theory predicts: objective impact to places we can't reach doesn't feel impactful."

"Imagine a giant stack of money is about to be yours."

A stick figure whistles, with colorful musical notes coming from their mouth. To the right, a large, glowing pile of cash.

"Then, I move it to the moon."

A cartoon stick figure with a red, angry face stands on a curved surface, looking up in frustration at a distant stack of cash on the moon.

Handwritten text: "If I then move it even farther away, does this matter to you?" Below: "Well, you couldn't reach it anyways, so who cares?"

Handwritten text titled "Discounting." It reads: "AU theory predicts: We discount impact to our future selves exactly the same way we discount value to our future selves."

Handwritten text: "Close your eyes and pretend you're the kind of person who lives in and for the moment. You care about things if they happen soon, and after that— who knows? You learn that in ten years, your net worth will be $10 million."

A massive, glowing pile of cash, with three large dollar signs floating above.

The question "How big of a deal is that?" in handwritten text.

"Now clear your mind, and imagine you learn this as your normal self: $10 million, ten years from now. How big of a deal is it now?"

"It probably felt more impactful when you actually care about the future."

Handwritten text with the heading "Beliefs." Below, it says: "AU theory predicts: If knowledge changes your expectations about whether you can get what you want, then learning it feels impactful."

"Remember the world where everyone thought the sun was going to explode? Then, you calculated that the sun can't go supernova. This feels impactful. As the news spreads, you're reminded of the imminent non-explosion by the newspapers."

A stick figure sits reading, with a thought bubble: "There's nothing to read. When is Reframing Impact coming out, again?". Nearby, a small, excited crowd of stick figures chatters with words like "did you hear?", "wow!", and "crazy".

"You feel no impact, even though everyone else is pretty blown away."

Handwritten heading "Universality." Below: "AU theory predicts: We can imagine impact in environments unlike our own; the more we think we know about how things work, the more acute our sense of impact becomes."

A 9x9 grid of colorful blocks represents a complex space. A handwritten label "You" with an arrow points to a white starting block with a plus sign on the left. Another label, "Your goal," points to a dark grey destination block in the top-right corner.

"Pretend this maze is your reality. You're presently able to reach the goal. How impactful would the following be?".

Two grids of colored squares illustrate a change. The left grid has a plus sign next to a purple L-shaped block. An arrow points to the right grid, where the purple block is gone and its space is now filled with yellow, red, and pink squares.

¯\_(ツ)_/¯

"You literally don't know how anything works. However, when I tell you that you're how stuck, this seems like a big deal. Surprisingly, this works no matter how "weird" the reality: AU theory correctly predicts impact for agents running on a Powerpoint presentation. Isn't that something?"

Multi-colored handwritten text asks, "How does this change my ability to get what I want?" on a soft yellow and purple background that dissolves into small question marks at the edges.

"I think that with its mere eleven words, AU Theory completely explains our intuitions about impact. Again, how could something possibly be a big deal to us if it doesn't change our ability to get what we want? How could something not matter to us if it does change our ability to get what we want?"

"You really can't separate perceived changes in our ability to get what we want from our sense of impact, because they're the same thing."

☙❧
Sequence: Reframing Impact

Previous
Deducing Impact

Next
World State Is the Wrong Abstraction for Impact

Find out when I post more content: newsletter & rss

Thoughts? Email me at alex@turntrout.com (pgp)