First thoughts on AI moratorium

Context: first thoughts on the "Pause Giant AI Experiments" open letter. I will refine my thinking over time.

  • I had not thought much about AI safety since ~2017, after thinking a lot about it in 2014-2017. In 2017, I defended my MSc thesis on an AI-safety-inspired topic (though very narrow and technical in nature), but then decided to take some distance from the field, since I didn't see a way to contribute much personally, mostly because I am very unproductive in an academic research setting.
  • The world is different now from what it was then: capabilities are huge, investment is massive, and the acceleration is real.
  • A 6-month moratorium might help as a foot in the door towards broader attention and larger steps.
  • Deployment of current capability is OK (not an existential risk in itself). Problems at the current level of capability can be handled by the existing liability and legal system.
  • The current pace of deployment is wild; I can see and feel this. VC money is pouring in. Compared to two years ago, 1,000x as many people are building products and running experiments. This whole ecosystem creates a strong incentive to keep enhancing capability.
  • OpenAI/Microsoft's business would probably be fine under a pause, provided others pause too: deploying GPT-4 will still earn them their billions.
  • A lot of arguments against the moratorium (and against long-term safety in general) are either ad hominem or of the form "the short term is important, the long term is not", which is pretty weak IMO. Both deserve investment.
  • "Value beats risk" arguments are weak because it's like with gambling - bet sizing is important. If you bet 100% of stack on 95% odds then you still have 5% chance of "going extinct".
  • The typical medium/large tech company product cycle is 3 months (one quarter). ChatGPT came out at the end of November; the first chunk of ChatGPT integrations shipped in March '23, one full product cycle later (even though these were very basic). I predict we will see more large integration announcements in June.
  • The prospect of better model capability in the future directly causes more deployment today. Founders and product managers think: "If I can almost get GPT-4 to do X today, and GPT-5 comes out in 6 months and makes it work great, then now is a good time to step on the gas and build X." Implication: credibly slowing model development slows deployment.
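To make the bet-sizing point above concrete, here is a minimal Python sketch. The 95% odds are the illustrative number from the bullet, not a model of AI risk; it only shows how repeated all-in bets compound the chance of ruin:

```python
# Minimal sketch of the bet-sizing point (numbers are illustrative).
# Betting 100% of the stack at 95% odds means a 5% chance of ruin per bet;
# repeating such all-in bets compounds the ruin risk, no matter how good
# the odds look each individual time.

p_win = 0.95  # assumed per-bet probability of winning

for n_bets in [1, 5, 10, 20, 50]:
    p_survive = p_win ** n_bets  # you survive only if every all-in bet wins
    print(f"{n_bets:2d} all-in bets -> P(not ruined) = {p_survive:.3f}")
```

At 50 such bets the survival probability is down to roughly 8%, which is the sense in which positive expected value alone says nothing about whether the risk is acceptable.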

I signed the open letter because I think the arguments for pausing are stronger than against.

I'll also summarize the arguments I've seen below, in very rough order of importance. (I seeded this list with a summary of external sources and others' conversations, with some additions and edits by me.)

Arguments for moratorium

  1. Societal adaptation. A pause gives society more time to adapt to the new reality created by powerful AI systems like GPT-4, and to develop well-resourced institutions that can cope with the economic and political disruptions AI may cause, including effects on democracy.
  2. Enhanced AI governance. A pause can provide an opportunity to develop robust AI governance systems, including regulatory authorities, tracking, provenance, and liability for AI-caused harm.
  3. Academic catch-up. A pause would allow academic researchers to catch up with industry developments and be more effective in helping society understand and control AI progress.
  4. Alignment improvement. A pause could be used to focus on improving the trustworthiness of AI systems rather than making them more powerful. It allows for increased focus on AI safety research and efforts to align AI systems with human values.
  5. Broader decision-making. Decisions about the deployment of powerful AI systems should not be delegated to unelected tech leaders, but rather should involve a wider range of stakeholders, including the public. This is easier if the pace is not as frantic.
  6. Risk awareness. The pause serves as a reminder of potential risks associated with the rapid development of AI.
  7. AI developers' responsibility. It demonstrates that AI developers and researchers care about the societal impact of their work.

Arguments against moratorium

  1. Progress continuation. Even if giant-scale training runs are paused, progress can still be made on small-scale models and then rolled out faster once the pause ends.
  2. Global competition. Countries like China may not stop training large models, potentially creating a competitive disadvantage for those who pause.
  3. Limiting AI benefits. A pause might slow down the development of AI applications in crucial areas such as education, healthcare, and food.
  4. Ineffectiveness. A moratorium might have little effect, since not everyone would comply and progress would continue regardless.
  5. Insufficient time. A 6-month pause may not be long enough to address alignment, safety, and governance concerns.
  6. Underdefined rule. The definition of "more powerful than GPT-4" is hard to pin down in precise rules, and so probably hard to apply consistently. This sort of rule can become an arbitrary censorship weapon in the hands of governments.
  7. Bad precedent for regulation. A pause might set a precedent for increased government intervention in AI development (or in technology development generally), which could stifle innovation in the long term.
  8. Technological determinism. Some argue that technological progress is inevitable, and a pause would only delay the arrival of powerful AI systems without fundamentally changing their impact.