
Code Generation

LLMs can generate working code from natural language descriptions. Describe what you want, and the model writes the implementation.

Prompt
Write a Python function that takes a list of numbers
and returns the two numbers that add up to a given
target. Return their indices.

The model produces a complete, runnable function with clear variable names and comments.

Response
def two_sum(nums, target):
    seen = {}  # maps each value to the index where it was seen
    for i, num in enumerate(nums):
        complement = target - num
        if complement in seen:
            return [seen[complement], i]  # found the pair
        seen[num] = i
    return []  # no two numbers add up to target

# Example usage:
print(two_sum([2, 7, 11, 15], 9))  # [0, 1]
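A quick sanity check of the function above on a duplicate-value input and a no-solution input shows why the hash-map approach works: the lookup happens before the insert, so a pair of equal values still resolves to two distinct indices. (The function is reproduced from the response above; the test inputs are illustrative.)

```python
def two_sum(nums, target):
    seen = {}
    for i, num in enumerate(nums):
        complement = target - num
        if complement in seen:
            return [seen[complement], i]
        seen[num] = i
    return []

# Duplicates work because we check `seen` before inserting the current value
print(two_sum([3, 3], 6))       # [0, 1]
# When no pair sums to the target, we get an empty list
print(two_sum([1, 2, 3], 100))  # []
```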

Being specific about the language, constraints, and edge cases improves the output. Here we explicitly request error handling and the async/await style.

Prompt
Write a JavaScript function that fetches data from a
URL and returns the parsed JSON. Handle network errors
and invalid JSON gracefully. Use async/await.

The model includes try/catch, meaningful error messages, and follows the async/await pattern as requested.

Response
async function fetchJSON(url) {
  try {
    const response = await fetch(url);
    if (!response.ok) {
      // fetch() does not reject on HTTP error statuses, so raise explicitly
      throw new Error(`HTTP ${response.status}`);
    }
    return await response.json(); // throws if the body is not valid JSON
  } catch (error) {
    // Covers network failures, HTTP error statuses, and JSON parse errors
    console.error(`Failed to fetch ${url}:`, error.message);
    return null;
  }
}

Key takeaway: The more context you provide — language, framework, error handling, style — the better the generated code. Always review and test generated code before using it in production.

Tips:
- Specify the language and version
- Mention frameworks: "using Express", "using React"
- Ask for tests: "include unit tests"
- Request explanations: "add comments explaining each step"
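As an illustration of the "include unit tests" tip, prompting for the earlier two_sum function together with tests might yield something along these lines (a sketch of plausible output, not a verbatim model response):

```python
import unittest

def two_sum(nums, target):
    seen = {}
    for i, num in enumerate(nums):
        complement = target - num
        if complement in seen:
            return [seen[complement], i]
        seen[num] = i
    return []

class TestTwoSum(unittest.TestCase):
    def test_basic_pair(self):
        self.assertEqual(two_sum([2, 7, 11, 15], 9), [0, 1])

    def test_duplicate_values(self):
        self.assertEqual(two_sum([3, 3], 6), [0, 1])

    def test_no_solution(self):
        self.assertEqual(two_sum([1, 2], 100), [])

if __name__ == "__main__":
    unittest.main(exit=False)  # exit=False so the script can continue after the run
```

Tests like these double as a specification: if the generated code fails them, you have a concrete, repeatable signal to feed back into the next prompt.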