The complements to AI will increase in value

Here's a model I use to think about how AI will change work.

Some parts of every activity will cost 1,000x less than they do when a human does them. For engineers that might mean reading and typing code; for radiologists, reading X-ray images; and so on.

But those parts are not 100% of the time a human spends on the job. An engineer also has meetings, designs the domain model, reviews feedback and bug reports, debugs, and architects solutions. A doctor speaks to patients, reads from and writes into the EHR system, and much more.

When technology (in this case AI) makes 60% of a job 1,000x faster, the other 40% remains. Output per human increases, but is now bottlenecked by that 40%, which becomes the new whole of the job. The things AI cannot do remain.
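This is the same arithmetic as Amdahl's law. A minimal sketch with the numbers above (the function name is just illustrative):

```python
# Amdahl's-law-style arithmetic for partial automation, using the
# illustrative numbers from the text: AI speeds up 60% of a job by 1,000x.
def output_multiplier(automated_fraction: float, speedup: float) -> float:
    """Overall output per human when only part of the job is sped up."""
    remaining = 1 - automated_fraction
    return 1 / (remaining + automated_fraction / speedup)

# 60% of the job sped up 1,000x yields only ~2.5x overall output,
# because the remaining 40% becomes the bottleneck.
print(round(output_multiplier(0.6, 1000), 2))  # ~2.5
```

Note that even an infinite speedup on the automated 60% caps overall output at 1/0.4 = 2.5x, which is why the residual human work dominates.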

In other words, what becomes valuable is the complement to AI capabilities: what AI cannot do (yet). Let me give some examples (while noting AI capabilities are increasing over time):

  1. Agency: AI does not choose to start a project, attack a problem, create a startup. You make that happen. You continue to believe in it, continue to work.
  2. Choosing the right problem: obvious startup ideas are usually bad because they face heavy competition. True value comes from finding the Secret -- something that is true but few believe. Less drastically, LLMs are great generators but bad pruners. There are many problems you could solve; identifying the most useful one to point the AIs at is valuable.
  3. Charisma: AI does not have a warm handshake, a reassuring voice, strong eye contact. This matters in person especially, but even virtually -- the slop and blandness detract from it.
  4. Customer service: even if AI knows how the job should be done, a human touch can make an objectively worse interaction feel better.
  5. Conviction: AI is easily dissuaded by a barrage of counterarguments. Convincing yourself and others to align behind something big combines many things AI is bad at right now.
  6. Managing the AIs: finding the right context to work on a task; describing the goals and constraints; evaluating the outputs. This is standard management work with people, now applied to AI.

Much of this can be summarized as "non-technical founder" skills.

If you want to think a few years into the future, embodiment is the biggest unsolved bottleneck to automating large tasks. AI is virtual-only right now, and robotics is far from general-purpose robots that can operate in human environments (today's robots are either single-purpose, like self-driving cars, or operate in a standardized environment, like Amazon warehouse robots).

Perhaps a step before embodiment (AI acting in the world) is digitization (AI perceiving the world). Even if you cannot act, it is valuable to collect information about what is going on, adding it to context for the AI. Think security drones, safety inspection cameras, etc.