Most people use LLMs wrong. They treat them like search engines or autocomplete. Type a question, get an answer, move on. This wastes the model's actual capability.

LLMs work best as thinking partners, not answer machines. The value isn't the final response. The value is the thinking process you do together.

Stop asking LLMs for answers. Start thinking with them.

The Autocomplete Trap

You have a problem. You ask the LLM for the solution. It gives you something. You use it or don't. Either way, you learned nothing about how to think through similar problems.

This is expensive autocomplete. You're outsourcing thinking instead of augmenting it.

The better approach: Explain your problem. Ask the LLM to break it down into components. Discuss each component. Challenge the breakdown. Refine it together. Now you understand the problem structure, not just a proposed solution.

You learned a thinking process. Next time you face a similar problem, you can apply that process without the LLM.

Externalizing Your Reasoning

Thinking happens in your head. It's messy, incomplete, and often wrong. You can't see your own logical gaps because you're inside your own reasoning.

LLMs externalize reasoning. When you explain your thinking to an LLM and it reflects it back, you see your logic from outside. You notice gaps you missed. You catch assumptions you didn't realize you were making.

This isn't about the LLM being smart. This is about having a mirror that shows your thinking clearly enough to critique it. The LLM doesn't need to be right. It needs to make your reasoning visible.

Talk through your logic. Have the LLM paraphrase it back. The gaps become obvious.

Exploring the Problem Space

You see one solution. The LLM suggests three more. Now you're thinking about the problem differently. You realize your solution assumed constraints that don't actually exist.

The LLM isn't smarter than you. It's less anchored to your initial framing. It generates alternatives without the cognitive bias toward your first idea.

The value is in expansion, not evaluation. Don't ask "which solution is best?" Ask "what other approaches exist?" Use the LLM to explore solution space you wouldn't have considered.

Your job is picking which approaches to pursue. The LLM's job is making sure you see options beyond your first instinct.

Building Mental Models

You're learning a new domain. Reading documentation feels overwhelming. Too much detail, not enough structure.

Explain what you understand so far to the LLM. Ask it to build a mental model of the domain. It shows you how the concepts relate. You realize you understood the pieces but missed the connections.

This is collaborative model building. You provide fragments. The LLM proposes structure. You refine the structure based on what makes sense. Together you build a mental model that's clearer than what either of you started with.

The model is yours. The LLM just helped you construct it from pieces you already had.

Stress Testing Ideas

You have a theory about why something works. Sounds good to you. You ask the LLM for counterexamples.

It generates five scenarios where your theory fails. Now you understand the boundaries of your theory. You refine it to handle the edge cases or acknowledge the limitations.

The LLM isn't fact-checking. It's helping you think adversarially about your own ideas. Most people are too invested in their theories to properly critique them. The LLM has no investment.

Use it to attack your reasoning. The holes it finds are holes you should fix.

Articulating the Unclear

You have a vague intuition. Something feels wrong about a plan but you can't articulate why. You describe the feeling to the LLM.

It offers possible explanations for your intuition. Suddenly you realize what's bothering you. The LLM didn't identify the problem. It gave you language to articulate what you already sensed.

Intuitions are pattern recognition happening below conscious awareness. LLMs help surface intuitions by providing vocabulary and frameworks to describe them. Once articulated, you can reason about them explicitly.

The Collaborative Thinking Pattern

The pattern that works: start with your thinking, use the LLM to extend it, critique the extension, refine together, repeat.

This is different from: ask question, get answer, accept or reject.

The second pattern outsources thinking. The first pattern augments thinking. One makes you dependent. The other makes you better at thinking.

The goal isn't to use LLMs forever. The goal is to develop better thinking patterns that persist after you stop using the LLM. Each session should teach you something about how to think, not just what to think.

What This Requires

You need to be comfortable thinking out loud. Explaining incomplete thoughts. Acknowledging confusion. Admitting when you don't understand something.

Most people aren't trained for this. They want to appear competent. They ask questions that sound smart. They hide their actual confusion.

Thinking with LLMs requires vulnerability. You have to show your work, including the parts that are messy and wrong. This feels uncomfortable, but it's where the value is.

The LLM doesn't judge you. Use that. Show your confused, incomplete thinking. That's where collaborative thinking begins.

The Difference It Makes

After six months of using LLMs as thinking partners instead of answer machines, you'll notice something. You approach problems differently. You break them down better. You consider more alternatives. You catch your own faulty logic faster.

This isn't because LLMs made you smarter. It's because you practiced better thinking patterns repeatedly with a patient partner who never got tired of your questions.

The LLM is training wheels for better thinking. Eventually you internalize the patterns. You break down problems automatically. You challenge your own assumptions without prompting. You explore solution space naturally.

The goal is reaching the point where you don't need the LLM as often because you've learned how to think the way you and the LLM thought together.


Most people will keep using LLMs as expensive autocomplete. They'll get answers but not develop thinking skills. They'll remain dependent on the tool.

The few who learn to think with LLMs instead of just using them will develop thinking capabilities that persist beyond any specific tool. They're using LLMs to become better thinkers, not just to get things done faster.

That's the actual value. Not the answers. The thinking.


AI Attribution: This article was written with assistance from Claude, an AI assistant created by Anthropic.