Conversation

LLMs can maintain context across multiple turns of conversation. Each message builds on previous ones, enabling follow-up questions and iterative refinement.

Under the hood, the model itself is stateless: the full conversation history is sent with each request, and that is what makes follow-ups work.
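A minimal sketch of this pattern, using a hypothetical `call_model` function as a stand-in for a real LLM API call (the function name and message format are assumptions, modeled on common chat APIs):

```python
# Minimal sketch of multi-turn conversation state.
# `call_model` is a hypothetical placeholder for a real LLM API call;
# the key point is that the *entire* message list is sent every turn.

def call_model(messages):
    # Placeholder: a real implementation would send `messages` to an
    # LLM API and return the assistant's reply text.
    return f"(reply to: {messages[-1]['content']})"

def send(history, user_text):
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)  # full history goes with every request
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
send(history, "What are the three laws of thermodynamics?")
send(history, "Can you explain the second one with a simple analogy?")
print(len(history))  # 4 messages: two turns, one user + one assistant each
```

Because `history` keeps growing, the second request already contains the first question and answer, which is how the model can resolve "the second one".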

Turn 1 — User
What are the three laws of thermodynamics?

The model provides an initial answer.

Turn 1 — Assistant
The three laws of thermodynamics are:

1. Energy cannot be created or destroyed, only
   transformed (conservation of energy).

2. The total entropy of an isolated system never
   decreases; spontaneous processes increase it.

3. As temperature approaches absolute zero, the entropy
   of a system approaches a minimum value.

Now we ask a follow-up. The model knows "the second one" refers to the second law from its previous answer.

Turn 2 — User
Can you explain the second one with a simple analogy?

The model uses context from the conversation to give a relevant, connected answer.

Turn 2 — Assistant
Think of a hot cup of coffee in a cold room. The heat
always flows from the coffee to the room — never the
other way around. The coffee cools, the room warms
slightly, and the energy spreads out.

That "spreading out" is entropy increasing. You can
never un-spread it without adding external energy
(like a refrigerator, which uses electricity).

You can keep refining. Here we ask the model to adjust its previous answer.

Turn 3 — User
Now explain it to a 5-year-old.

The model remembers the full context — it knows "it" refers to the second law — and simplifies further.

Turn 3 — Assistant
You know how when you drop a bit of paint in water, it
spreads out and makes the water colorful? It never
un-spreads back into a dot by itself.

Things in the world like to spread out and mix up.
That's just how nature works!

Key takeaway: Multi-turn conversation is powerful for exploration and refinement. Be aware that the entire history counts toward the model's context window — very long conversations may need to be summarized or trimmed.

Tips:
- Use follow-ups to drill deeper: "tell me more"
- Refine outputs: "make it shorter", "add examples"
- Reference earlier answers: "going back to point 2"
- For apps: manage conversation history to stay within
  the model's context window limit
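The last tip above can be sketched as a simple trimming strategy. This is an illustration only: token counts are approximated by word counts here, whereas a real app would use the model's tokenizer, and the message format is an assumption modeled on common chat APIs:

```python
# Minimal sketch of trimming conversation history to fit a context budget.
# Keeps the most recent messages that fit, preserving chronological order.

def estimate_tokens(message):
    # Crude approximation; a real app would use the model's tokenizer.
    return len(message["content"].split())

def trim_history(history, max_tokens):
    kept, total = [], 0
    for message in reversed(history):   # walk from the newest message back
        cost = estimate_tokens(message)
        if total + cost > max_tokens:
            break
        kept.append(message)
        total += cost
    return list(reversed(kept))         # restore chronological order

history = [
    {"role": "user", "content": "What are the three laws of thermodynamics?"},
    {"role": "assistant", "content": "Energy is conserved; entropy never "
                                     "decreases; entropy approaches a minimum "
                                     "at absolute zero."},
    {"role": "user", "content": "Explain the second one with a simple analogy."},
]
trimmed = trim_history(history, max_tokens=25)
```

Dropping the oldest messages is the simplest policy; a production app might instead summarize older turns, or always pin a system message before trimming the rest.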