Sheridan’s levels of autonomy

AI can easily take over the generative stage of creative work because that stage leaves a lot of room for mistakes. In fact, you could say that mistakes are encouraged there: sometimes an AI hallucination is the best thing that could happen to an idea.

However, at some point in the creative work, there's much less room for mistakes. Perhaps it's in presenting to a decision-maker, in fine-tuning and polishing some of the work, or in deciding which concept to move forward with. That's where taste, empathy, and an understanding of context come in. You wouldn't want to get that stage wrong, so you probably wouldn't want to automate it.

AI can still help, of course, but you can't be completely hands-off at that stage. This is where a framework like Sheridan's levels of autonomy comes in. Sheridan (via Francois Legras) suggests these levels of AI autonomy, from 1 (low autonomy) to 10 (high):

1. The computer offers no assistance, human must do it all.

2. The computer offers a complete set of action alternatives, and

3. narrows the selection down to a few, or

4. suggests one, and

5. executes that suggestion if the human approves, or

6. allows the human a restricted time to veto before automatic execution, or

7. executes automatically, then necessarily informs the human, or

8. informs him after execution only if he asks, or

9. informs him after execution if it, the computer, decides to.

10. The computer decides everything and acts autonomously, ignoring the human.
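
To make the scale easier to reference, here is a minimal sketch of it as a Python enum. The level names are my own informal shorthand for Sheridan's descriptions above, not his terminology:

```python
from enum import IntEnum

class SheridanLevel(IntEnum):
    """Sheridan's 10 levels of autonomy, from low (1) to high (10).
    Names are informal shorthand for the descriptions above."""
    NO_ASSISTANCE = 1          # the human must do it all
    OFFERS_ALTERNATIVES = 2    # a complete set of action alternatives
    NARROWS_SELECTION = 3      # narrows the selection down to a few
    SUGGESTS_ONE = 4           # suggests a single action
    EXECUTES_IF_APPROVED = 5   # executes the suggestion if the human approves
    VETO_WINDOW = 6            # restricted time to veto before execution
    EXECUTES_THEN_INFORMS = 7  # executes, then necessarily informs the human
    INFORMS_IF_ASKED = 8       # informs after execution only if asked
    INFORMS_IF_IT_DECIDES = 9  # informs after execution if it decides to
    FULL_AUTONOMY = 10         # decides and acts, ignoring the human
```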

For example, when you feed an LLM a set of full transcripts and ask it to summarize them, you are entrusting it with narrowing the selection of points down to a few (level 3). You will probably still review the output and make your own judgment and decision, and if the summary sounds reductive or provocative, you may go back in and do a manual review of the transcripts yourself.
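
As a concrete sketch of that level-3 division of labor: the model narrows a transcript down to a few candidate points, and the judgment step stays with a person. The `call_llm` function here is a hypothetical stand-in for whatever LLM client you actually use:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM client call."""
    raise NotImplementedError("wire up your LLM provider here")

def narrow_to_key_points(transcript: str, n: int = 5) -> str:
    # Level 3: the computer narrows the selection down to a few.
    prompt = (
        f"Extract the {n} most important points from this transcript:\n\n"
        f"{transcript}"
    )
    return call_llm(prompt)

# The judgment step stays human: read the candidate points, and if they
# feel reductive, go back and review the full transcript yourself.
```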

In the present and near future, there are plenty of tasks where you and I won't yet trust AI even with levels 3 and 4. Eventually we may trust it more, as it earns that trust, and as the algorithms get better at narrowing and executing than people are.

Even then, I have a hunch that as we trust AI with greater autonomy, the approach will remain increasingly joint: at levels 6 and 7, a person or team will still be held accountable for the actions that an AI (their AI!) took. At the end of the day, trust is what binds people together, and trust will be what pays the bills.
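
Level 6 in particular maps onto a familiar engineering pattern: execute automatically unless a person vetoes within a window. A minimal sketch in Python, assuming a blocking console prompt stands in for whatever veto channel a real system would use:

```python
import threading

def run_with_veto_window(action, timeout_s: float = 10.0) -> None:
    """Level 6: give the human a restricted time to veto, then execute."""
    vetoed = threading.Event()

    def wait_for_veto():
        input(f"Press Enter within {timeout_s:.0f}s to veto... ")
        vetoed.set()

    watcher = threading.Thread(target=wait_for_veto, daemon=True)
    watcher.start()
    watcher.join(timeout=timeout_s)

    if vetoed.is_set():
        print("Vetoed by human; action skipped.")
    else:
        action()
        # Informing the human after execution is level 7 behavior.
        print("Executed automatically; human informed.")
```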

When you and I decide that AI is more likely to do the right thing than a person is, then AI will start gaining permission for full autonomy.
