Don’t let AI determine your intentions

Intention is the determination to do a specific thing or to act in a specific way. When you're being intentional and deliberate, you are taking the first step toward exercising your will in the world. Everybody else might accept what's happening, but you won't.

At its finest, AI technology augments and amplifies your intention. But the people making AI want it to make your life more convenient, so they design it to anticipate your intention instead.

Sheridan's levels of autonomy document this transition very clearly: the first level involves no AI (you have to do everything), the middle levels have the AI offering suggestions and asking which one you prefer, and the final levels have the AI doing everything—first keeping you informed if you ask, then eventually ignoring you.
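The progression can be sketched as a simple lookup. This paraphrases the classic Sheridan–Verplank ten-level taxonomy; the exact wording and the `human_in_the_loop` cutoff below are my own approximation, not an official definition:

```python
# Paraphrase of Sheridan and Verplank's ten levels of automation.
# Wording is approximate; see the original taxonomy for precise phrasing.
LEVELS = {
    1: "Human does everything; the computer offers no assistance.",
    2: "Computer offers a complete set of action alternatives.",
    3: "Computer narrows the selection down to a few alternatives.",
    4: "Computer suggests one alternative.",
    5: "Computer executes the suggestion if the human approves.",
    6: "Computer allows the human limited time to veto before acting.",
    7: "Computer acts automatically, then necessarily informs the human.",
    8: "Computer acts, and informs the human only if asked.",
    9: "Computer acts, and informs the human only if it decides to.",
    10: "Computer decides everything and acts, ignoring the human.",
}

def human_in_the_loop(level: int) -> bool:
    """Rough cut: through level 5, the human still gates every action."""
    return level <= 5
```

The essay's worry lives in the jump from level 5 to level 6 and beyond: once the system no longer waits for approval, your intention stops being an input and becomes, at best, a veto.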

Intention is not convenient at all. You need to spend your energy not only on considering what you want, but also on how important it is to you. You need to fret about whether it's the right thing to do, to overcome your self-doubt, to adapt and pivot when setbacks come up. So AI will most likely not encourage you to be more intentional, unless you ask it to.

That's the thing. Left to the defaults, you may not be inclined to use AI for anything beyond a single prompt, or perhaps to work harder and faster.

Intention isn’t supposed to be convenient. You’re supposed to think about it, wrestle with it, and evolve. You don’t benefit by letting AI make it convenient. “A system that acts on your behalf before you act is not honoring your will but preempting it,” Katalin Bártfai-Walcott writes. “It removes friction not only from the user experience but also from the process of deliberation itself.” 

Perhaps a simpler way of phrasing it: use AI as an assistant, not as an expert. Take all AI recommendations and outputs with a grain of salt, and be careful. As I wrote a couple of months ago:

While the team at The New York Times has similarly asked AI to help their writers by suggesting edits, they will not use AI to draft or make significant revisions. That seems like a sensible policy. I’m excited to see how else AI can help augment this line editing process—though I would not be interested in having it replace a person’s writing directly, and I don’t think it will be as helpful as a developmental editor.

Sometimes, it’s better to risk being wrong on your terms than to be able to blame the AI because you went with its terms.


How can you use AI in a way that fuels your human supply chain, not the AI’s software supply chain?
