
Anatomy of an Effective Prompt and Continuous Iteration

For years, we've treated the prompt as “the magic phrase” that unlocks a great response. With practice, you realize something simpler and more useful: a prompt is not magic, it's a process. It starts with a clear intention, continues with a “good enough” first output, and improves through small, measurable tweaks. No anxiety, just method.

In this article, we bring together two mutually necessary pieces: the anatomy of an effective prompt and continuous iteration. The goal is for you to stop relying on luck and start working with a reproducible workflow.

What will you learn?

  • Identify the 6 parts of an effective prompt in a given case.
  • Write a prompt with a fixed output format and verifiable criteria.
  • Apply an iteration loop (≥3 cycles) documenting what you change and why.
  • Measure improvement with a short rubric (0–3 per criterion) and apply self-check.

Prerequisites

Nothing special. If you use Cursor or another editor with integrated AI, you can apply everything instantly. Just knowing how to paste prompts and read outputs is enough.

Anatomy of an Effective Prompt

Think of your prompt as a brief: a short, concrete written assignment that explains what you need, within what limits, and in what format you want the delivery. If you complete these pieces, the AI will have the same information a colleague would, and you'll avoid unnecessary back-and-forth:

  • Objective — What exactly you need (a deliverable, not an activity). Example: “A function isPrime(n) and 8 unit tests.”

  • Context — Stack, version, audience, scope (what's included/excluded). Example: “Node 20 + Jest, Spanish, no external dependencies.”

  • Constraints — Limits and conventions (performance, style, language). Example: “O(√n) complexity, Spanish names, brief comments.”

  • Output format — Files, blocks, JSON, section order.

  • Minimal examples — One or two samples that clarify the expected standard. Example: a mini input/output with 1–2 cases.

  • Quality criteria — How you'll know it's right (tests, rules). Example: “Must pass 5 test cases, no linting errors.”

You won't always use all six. The more ambiguous the task, the more you should complete them.

Practical Template (with delimiters and “Don'ts”)

Copy and fill in the {placeholders}. Delimiters help the model respect the format.

[OBJECTIVE]
I want: {concrete_result}

[CONTEXT]
Stack/environment: {technology_and_version}
Audience/use: {for_whom}
Scope: {includes} / {excludes}

[CONSTRAINTS]
- Limit 1: {...}
- Limit 2: {...}
- Conventions: {names_language_style}

[OUTPUT FORMAT]
Return it EXACTLY in this format:
--- file: {file_name_1}
{content_1}
--- file: {file_name_2}
{content_2}

[EXAMPLES (optional)]
Mini input:
{example_input}
Mini output:
{example_output}

[QUALITY CRITERIA]
- Must pass: {tests_or_cases}
- Not allowed: {forbidden_libraries/verbosity}

[DON'T]
- Do not include explanations outside the blocks.
- Do not change the indicated format.

Iteration and Continuous Improvement

A good prompt is just the beginning; quality appears when you go through a short, conscious loop. Think of the flow like this:

(Flow diagram: BRIEF → DRAFT → REVIEW → REFINE → REPEAT)

  • BRIEF: define the intention (use the template).
  • DRAFT: request the first draft (don't seek perfection).
  • REVIEW: compare output vs. criteria. Note what's missing.
  • REFINE: modify one or two things in the prompt (not everything at once).
  • REPEAT: repeat until criteria are met.

Golden rule: small changes per iteration. If you change everything, you won't know what worked.

Practical Example: from idea to code in 3 iterations

The task is to implement a function that detects if a number is prime and accompany it with unit tests. We'll work in Node 20 using Jest, and the solution must maintain O(√n) efficiency, use Spanish names and comments, and not depend on external libraries.

Iteration 1 — Draft

Prompt (summary):

I want a function `isPrime(n)` in JavaScript that returns true/false.
Stack: Node 20, Jest.
Constraints: O(√n), no external dependencies, Spanish names.
Format: two code blocks, prime.js and prime.test.js.
Criteria: 5 test cases (include 1, 2, 17, 18, 9973).
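
To make the loop concrete, here is a plausible shape for that first draft (a hypothetical sketch, not the model's actual output; the file name prime.js follows the requested format):

```javascript
// prime.js — plausible first draft (hypothetical reconstruction)
function isPrime(n) {
  // Trial division up to √n: O(√n), as the brief requires
  for (let i = 2; i * i <= n; i++) {
    if (n % i === 0) return false; // found a divisor
  }
  // Anything ≤ 1 (including negatives) falls through to false here,
  // but a non-integer like 4.5 slips past the loop and returns true —
  // the gap the REVIEW step catches.
  return n > 1;
}

module.exports = { isPrime };
```

It passes the five requested cases (1, 2, 17, 18, 9973), yet the missing precondition only shows up when you probe edge inputs, which is exactly what the review step is for.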

Observation (REVIEW):

  • Generates the function and tests, but doesn't consider negative integers or non-integers.

Refine:

  • Add a clear precondition: “if n is not a positive integer, return false and document the behavior.”

Iteration 2 — Preconditions and Coverage

Prompt (adjustment):

Add validations: if n is not a positive integer, return false.
Increase coverage to 8 tests (negatives, 0, 1, evens, large primes).
Keep O(√n).
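
A second pass might come back like this (again a hypothetical sketch): the validation is in, but the loop regressed to checking every divisor, which is what the next review flags:

```javascript
// prime.js — hypothetical second pass: input validation added
function isPrime(n) {
  // Precondition: only positive integers ≥ 2 can be prime;
  // negatives, 0, 1 and non-integers all return false
  if (!Number.isInteger(n) || n < 2) return false;
  // Suboptimal: tests every candidate divisor up to n - 1 (O(n)),
  // losing the O(√n) bound the brief asked to keep
  for (let i = 2; i < n; i++) {
    if (n % i === 0) return false;
  }
  return true;
}

module.exports = { isPrime };
```

Note that all eight test cases still pass, so a criteria-only check would miss the regression; the review has to compare the output against the constraints too.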

Observation (REVIEW):

  • Now it validates, but it uses a suboptimal loop (it tests every divisor up to n). It's missing the shortcut for 2 and its multiples.

Refine:

  • Request optimization: handle evens and loop from 3 with step 2.

Iteration 3 — Optimization

Prompt (adjustment):

Optimize: handle n === 2 and other even numbers up front; iterate i from 3 to √n with step 2.
Document complexity and preconditions in comments.

Result: More efficient function, correct validations, 8 passing tests. Criteria met.
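
Put together, the final version might look like this (a hypothetical reconstruction consistent with the three iterations; comments are kept in English here for readability, though the brief asked for Spanish names and comments):

```javascript
// prime.js — hypothetical final version after three iterations
// Complexity: O(√n). Precondition: n must be a positive integer;
// any other input returns false.
function isPrime(n) {
  if (!Number.isInteger(n) || n < 2) return false;
  if (n === 2) return true;        // the only even prime
  if (n % 2 === 0) return false;   // discard the other evens fast
  // Only odd candidates from 3 up to √n
  for (let i = 3; i * i <= n; i += 2) {
    if (n % i === 0) return false;
  }
  return true;
}

module.exports = { isPrime };
```

A matching prime.test.js would then cover the eight cases (e.g. -5, 0, 1, 2, 4, 17, 18, 9973) with Jest's test/expect.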

Notice how in three small rounds you got a clear, efficient, and tested solution. It wasn't luck, it was process.

Good Habits When Working with Prompts

  • Ask for less, more often. Break the task into steps with small deliverables.
  • Fix the format. Save time by requesting output with file names or sections.
  • Reuse prompts. Save your best briefs as templates.
  • Explain the “why.” If you request a constraint (e.g., O(√n)), state the reason.
  • Evaluate with data. When possible, accompany with tests or verifiable examples.

Common Mistakes and How to Avoid Them

  • Vagueness: “Make it better.” You need to say what “better” means—faster, more readable, uses less memory.
  • Multi-asking: three big tasks in one prompt. Don't do it! Break into subtasks.
  • No format: prose output when you wanted files. Ask for blocks and names.
  • Changing everything at once: if you do, you won't know what helped. Make small adjustments per iteration.
  • No criteria: how will you know if it's right? You must define tests, rules, or conditions.

Quality in AI doesn't depend on “inspiration” with the perfect phrase, but on clarity + process. Write briefs with intention, set formats and criteria, and improve in small cycles. If you adopt this discipline, good outputs stop being accidental and become consistent and repeatable.