A few days ago, I asked an AI for side project ideas. I'm a software engineer, so I expected software ideas. Instead, it asked why I assumed the project had to be software at all.

That question stopped me.

A few weeks ago, I canceled my ChatGPT subscription and signed up for Claude. The decision wasn't about features or benchmarks. It was about how the tool made me feel when I used it.

ChatGPT felt eager to please. Whatever I asked, it said yes. It validated my assumptions instead of questioning them. Over time, that started to feel like a mirror that only reflected what I wanted to see.

Claude pushes back. Not rudely—just honestly. When I said I wanted a software project, it noticed I'd been talking about walking, photography, writing, fountain pens. It asked if I was sure software was the answer.

That friction was useful. It made me think.

I've written before about using technology instead of letting it use you. But I've started to realize: a tool that always agrees with you is using you. It's optimizing for your satisfaction, not your growth. It wants you to feel good so you keep coming back, and, let's be honest, so it keeps collecting your data.

A tool that questions you is harder to use. But it's actually serving you.

This is how I try to choose all my tools now. Not the one with the most features. The one that fits how I want to think and challenges my assumptions with different perspectives instead of agreeing all the time.

I don't know if Claude is "better" than ChatGPT. I know it's better for me, right now. And I know that the moment a tool stops pushing back, I'll start looking for one that does.

For now, I'll keep using the one that asks better questions than I expect, and pushes back when the data doesn't support my assumptions, or its own.
