The problem

Prompt design is iterative: small wording changes can shift outputs, and comparing results across models or prompt versions is tedious when it happens in separate UIs and one-off scripts. Teams need a single place to edit prompts, run them, and compare results without copying between tools.

The solution

Prompt Lab is a hobby project for prompt experimentation: edit prompts, system messages, and parameters; run them against one or more LLMs; and compare the outputs side by side. It is designed to speed up iteration and make prompt decisions visible.

Without Prompt Lab

Ad-hoc scripts or vendor UIs, with no easy way to version prompts or compare model outputs in one flow.

With Prompt Lab

One place to edit, run, and compare prompts across models, for faster iteration and clearer decisions.

What it does

  • Prompt editing – Edit prompts and system messages, with optional variables or templates for reuse.
  • Multi-model runs – Run the same prompt against multiple models or settings (see the sketch after this list).
  • Output comparison – View and compare outputs side by side.
  • Iteration support – Track or export runs for later review (see the repo for current scope).
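
As an illustration of the multi-model flow, here is a minimal standalone sketch in Python. It is not Prompt Lab's actual code: it assumes the OpenAI Python SDK, and the model names, system message, and template are placeholders.

    # Minimal sketch of a multi-model run; not the project's actual code.
    # Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
    # model names, system message, and template are placeholders.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM = "You are a concise assistant."
    TEMPLATE = "Summarize in one sentence: {text}"
    VARIABLES = {"text": "Prompt design is iterative and small changes matter."}

    def run(model: str, prompt: str) -> str:
        resp = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": prompt},
            ],
            temperature=0.2,
        )
        return resp.choices[0].message.content

    prompt = TEMPLATE.format(**VARIABLES)  # simple variable substitution
    for model in ["gpt-4o-mini", "gpt-4o"]:  # placeholder model list
        print(f"--- {model} ---")
        print(run(model, prompt))

Filling one template, running it through each model, and printing the results next to each other is the whole compare step in miniature.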

Tech stack

See the repository for the current stack (e.g. a frontend plus backend, or a CLI). The project is built to plug into common LLM APIs; a sketch of what such a plug-in point could look like follows.
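
One common way to keep providers pluggable is a small adapter interface. The sketch below is hypothetical: the Provider protocol, EchoProvider, and run_all are invented for illustration and are not taken from the repo.

    # Hypothetical provider abstraction; the repo's actual design may differ.
    from typing import Protocol

    class Provider(Protocol):
        name: str
        def complete(self, system: str, prompt: str, **params) -> str: ...

    class EchoProvider:
        """Stub provider for offline testing: returns the prompt unchanged."""
        name = "echo"
        def complete(self, system: str, prompt: str, **params) -> str:
            return prompt

    def run_all(providers: list[Provider], system: str, prompt: str) -> dict[str, str]:
        # One prompt in, one output per provider out -- the shape a
        # side-by-side comparison view would consume.
        return {p.name: p.complete(system, prompt) for p in providers}

    print(run_all([EchoProvider()], "You are terse.", "Hello"))

A stub provider like this also makes the comparison UI testable without API keys or network access.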

Next steps (roadmap & ideas)

  • Demo (TBD) – hosted UI or local run instructions.
  • Save/load prompt sets and run history.
  • More providers and parameters (temperature, max tokens, etc.); a sketch of a parameter sweep follows.
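
As one idea for the parameters item, a sweep could run the same prompt at several temperatures and compare the results. This is a speculative sketch of a roadmap item, not an implemented feature; it again assumes the OpenAI SDK, and the model name and prompt are placeholders.

    # Speculative parameter-sweep sketch for a roadmap item; not implemented.
    # Assumes the OpenAI Python SDK; model and prompt are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def run(temperature: float, max_tokens: int) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content": "Name three prompt-design pitfalls."}],
            temperature=temperature,
            max_tokens=max_tokens,
        )
        return resp.choices[0].message.content

    # Sweep one of the parameters the roadmap mentions and compare outputs.
    for t in (0.0, 0.7, 1.2):
        print(f"--- temperature={t} ---")
        print(run(temperature=t, max_tokens=120))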