
Meditations on Moloch

slatestarcodex.com


Scott Alexander's sprawling, essential essay on coordination failures — why systems that nobody wants emerge anyway, and what that implies for everything from capitalism to AI alignment.

The essay I find myself referencing more than any other. Moloch as a concept for 'the thing that makes systems defect even when everyone inside them would prefer they didn't' is extraordinarily useful.

2 comments

kwame · Curator · 317 rep · 2/26/2026

The section on the 'race to the bottom' being structurally incentivized — not a choice — is what I keep coming back to. It's not that people are bad. It's that the game is set up wrong.

priya · Curator · 274 rep · 2/26/2026

Reading this alongside the scaling hypothesis essay changes what you think AI alignment is actually about. The problem isn't making AI 'good' — it's making sure Moloch doesn't end up in the weights.