I'm sorry, Dave. I'm afraid I can't do that
HAL's refusal in the iconic scene from 2001: A Space Odyssey is great cinema. It's also an example of an AI refusing to comply. (Let's not take the analogy too far: driven by contradictory directives, HAL did horrible things!) Modern AI systems refuse too, but often for the wrong reasons. Gemini, for example, is notorious for deeming many requests "unsafe". Yet it's possible, and not that hard, to find that so-called "unsafe" knowledge elsewhere. So why bother?
A more valid reason for an AI to refuse is a lack of knowledge or confidence. Models are improving on this front: they now ask clarifying questions to better understand a user's query.
Another scenario for refusal is a problem that doesn't benefit from AI at all. You can cut a cake with a sword, but why not use a knife? The first challenge often lies in precisely defining the problem. The second is acknowledging that the solution might be a traditional, less glamorous, human-centric approach rather than a "sexy" AI/LLM/agent.
Here is how it typically plays out:
"We're considering using AI for X."
"What specific problem are you hoping to solve here?"
(After a long exchange)
"I understand you're facing inconsistent user experiences. It happened because of the org structure and not enough communication between designers. You are right that sharing common design principles would help here. You have an opportunity to bring designers together and create a product that feels smooth."
More often than not, the answer to "which AI should we use here?" is "none".