Are you learning to code or letting AI autocomplete your skills?

A few days ago we posted a question on Reddit:

"If you are learning to program today, how do you balance AI tools with actually learning?"

The interesting part was the consistency. While everyone had different stories, the themes were almost identical.

This article breaks down what we learned.

1. Fundamentals still matter

The strongest consensus across the thread was simple and blunt:

If you do not understand the fundamentals, AI will slow your growth more than it helps you.

People listed common red flags they see in beginners:

  • letting AI write entire solutions

  • copying code they cannot explain

  • skipping documentation completely

  • trusting confident answers without a baseline

  • using AI on concepts they do not yet understand

One person put it perfectly:

"If you cannot judge when the AI is wrong, it is too early to rely on it."

AI does not replace foundational thinking. It amplifies whatever foundation you already have. If that foundation is weak, AI becomes a shortcut that quietly removes the part of the process where real understanding forms.

2. The struggle is the actual learning process

A surprising number of people brought up cognitive science. Not in a pretentious way, but in a practical, lived way.

The idea was this:

Learning happens during the struggle, not the smooth parts.

The process of:

  • wrestling with an error

  • guessing and testing

  • comparing expected vs actual behavior

  • rewriting a piece three different ways

  • discovering why your mental model was wrong

Those moments lay down the neural circuits that become intuition.

When AI swoops in too early, it feels good. It also removes the exact friction that produces skill.
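To make "discovering why your mental model was wrong" concrete, here is a classic Python example of the kind of expected-vs-actual mismatch worth struggling with before asking AI (the snippet is illustrative, not from the thread):

```python
# A beginner often expects this to create three independent rows.
rows = [[0] * 3] * 3   # actually: three references to the SAME inner list
rows[0][0] = 1
print(rows)            # [[1, 0, 0], [1, 0, 0], [1, 0, 0]] -- surprising!

# The fix: build a fresh inner list for each row.
rows = [[0] * 3 for _ in range(3)]
rows[0][0] = 1
print(rows)            # [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
```

Running both versions, predicting the output, and being wrong is exactly the friction that builds intuition about references versus copies.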

As one commenter said:

"If it feels smooth all the time, you are not learning."

3. The right way to use AI is as a tutor, not a butler

This was the clearest difference between beginners and senior engineers.

Beginners ask:

  • "Write this function for me."

  • "Fix this error."

  • "Build this page."

They hand over the thinking.

Senior engineers ask:

  • "Here is my reasoning. What am I missing?"

  • "Give me edge cases I should test."

  • "Critique this approach and poke holes in it."

  • "Generate a counterexample that breaks this logic."

They guide the AI. They use it as a thinking partner, not as a replacement for their thought process.

Several people even configure their model with strict standing rules:

  • "Do not give me solutions unless I explicitly ask."

  • "Ask me questions until I can explain the idea clearly."

  • "Make me justify my reasoning step by step."

  • "Do not let me move on until my explanation is correct."

That is not AI as a code generator.
That is AI as a mentor.
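One way to operationalize those rules is to bake them into a reusable system prompt for any chat-style model API. This is a minimal sketch; the `build_tutor_messages` helper and the exact rule wording are illustrative, not a specific product's API:

```python
# Encode the thread's "tutor, not butler" rules as a reusable system prompt.
TUTOR_RULES = [
    "Do not give me solutions unless I explicitly ask.",
    "Ask me questions until I can explain the idea clearly.",
    "Make me justify my reasoning step by step.",
    "Do not let me move on until my explanation is correct.",
]

def build_tutor_messages(question: str) -> list[dict]:
    """Return a chat message list with the tutor rules as the system prompt."""
    system_prompt = "You are a programming tutor.\n" + "\n".join(
        f"- {rule}" for rule in TUTOR_RULES
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

messages = build_tutor_messages("Why does my binary search loop forever?")
print(messages[0]["content"])
```

Keeping the rules in one place means every session starts in tutor mode instead of relying on willpower mid-conversation.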

4. Concepts in, code out

Healthy uses of AI that nearly everyone agreed on:

✔ explaining error messages
✔ walking through examples step by step
✔ naming patterns so you can Google them
✔ clarifying the part of your mental model that is off
✔ generating quizzes about a topic
✔ giving buggy code for you to debug

Risky uses of AI:

✖ writing full projects
✖ copying solutions you cannot explain
✖ learning exclusively through explanations
✖ skipping documentation
✖ trusting conceptual answers without cross-checking

Code can be run.
Concepts cannot.

That is why conceptual hallucinations are the real danger for beginners.
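The "buggy code for you to debug" exercise from the healthy list might look like this (a made-up example, assuming Python): the bug is deliberate, and because code can be run, you can verify the fix yourself.

```python
# Deliberately buggy practice code: find the largest value in a non-empty list.
def find_max_buggy(values):
    best = 0            # BUG: a magic starting value fails for all-negative input
    for v in values:
        if v > best:
            best = v
    return best

def find_max_fixed(values):
    best = values[0]    # start from actual data, not an assumed baseline
    for v in values[1:]:
        if v > best:
            best = v
    return best

print(find_max_buggy([-5, -2, -9]))  # 0  -- wrong
print(find_max_fixed([-5, -2, -9]))  # -2 -- correct
```

Spotting why `0` is the wrong baseline is a small, runnable lesson; a hallucinated conceptual claim offers no equivalent test.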

5. Your brain still has to do the work

This was the deepest insight across the thread:

AI does not remove the cognitive load of learning. It just moves it.

It does not magically inject intuition into your head.
It does not replace the hours of confusion and breakthrough.
It cannot give you mental models.

It can help you test, challenge, and refine the models you build.
But it cannot build them for you.

If you outsource the thinking, you get shallow skill.
If you use AI to push your thinking further, you get deeper skill.

That is the real line between using AI well and using AI in a way that erodes your growth.

6. So what does a healthy workflow look like?

Combining all the best advice from the thread:

  1. Learn from primary sources first.

  2. Try the problem yourself.

  3. Sit with the confusion for a bit.

  4. Bring AI in as a tutor:

    • ask for hints, not answers

    • ask for questions, not solutions

    • ask for critiques, not shortcuts

  5. Cross-check anything important with docs.

  6. Explain everything you submit or ship.

If you cannot explain it, you do not understand it yet.

Final Thought

Our biggest takeaway from the Reddit experiment was this:

AI magnifies your habits.

If your habit is to shortcut and outsource, AI will happily remove the learning.
If your habit is to explore and understand, AI becomes a superpower.

The tool does not determine the growth.
The way you use it does.

We are working on a follow-up piece with actual prompt templates for learning, debugging, and practicing with AI the right way.

Until then, we are curious:

If you were starting from zero in 2025, how would you use AI while learning to code?


🔥 Techies Off The Clock - Sunday Edition