The problem
Single-LLM answers can be wrong or overconfident on hard questions. Combining multiple models, each proposing an answer and then judging or refining the others', often yields better and more consistent results. But building that workflow (prompts, rounds, consolidation logic) is custom work every time.
The solution
Council of LLMs implements a deliberation workflow: generate candidate answers from multiple LLMs, run judging rounds (e.g. “who is best?” or “synthesize”), and consolidate into a final answer. Providers and prompts are configurable, so you can plug in your own models.
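The core loop can be sketched in a few lines. This is an illustrative sketch, not the project's actual API: `deliberate`, `judge`, and the callable "models" below are hypothetical stand-ins for real async LLM clients.

```python
# Minimal sketch of the deliberation workflow (hypothetical names, not
# the project's real interface). Each "model" is just a callable here;
# in practice it would wrap a real LLM client.

def deliberate(question, models, judge):
    """Collect candidate answers from every model, then let a judge
    consolidate them into one final answer."""
    # 1. Multi-LLM completion: every model answers the same question.
    candidates = [model(question) for model in models]
    # 2. Judging + consolidation: the judge picks or synthesizes.
    return judge(question, candidates)

# Toy stand-ins for real LLM clients and a judging prompt round.
model_a = lambda q: "Paris"
model_b = lambda q: "Paris, France"
pick_shortest = lambda q, answers: min(answers, key=len)

final = deliberate("Capital of France?", [model_a, model_b], pick_shortest)
print(final)  # -> Paris
```

A real judge would itself be an LLM call comparing the candidates; the structure (fan out, then consolidate) stays the same.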
Without a council
Single-model answers; no structured way to compare or agree across models.
With Council of LLMs
Multi-model completion → judging → consolidation; better robustness and transparency for hard queries.
What it does
- Multi-LLM completion – Several models answer the same question.
- Judging – Customizable prompts to compare or rank responses.
- Consolidation – Single final answer from the council (e.g. best response or synthesis).
- Pluggable providers – Support for multiple LLM backends and prompt templates.
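To make the "customizable prompts" point concrete, here is one way a judging prompt template might look. The template text and names (`Provider`, `render_judge_prompt`) are illustrative assumptions, not taken from the repository.

```python
# Hedged sketch of a pluggable-provider setup (illustrative names, not
# the project's real API): providers share one interface, and prompts
# are plain templates filled in per judging round.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Provider:
    name: str
    complete: Callable[[str], str]  # wraps a real LLM backend in practice

JUDGE_PROMPT = (
    "Question: {question}\n"
    "Candidate answers:\n{candidates}\n"
    "Reply with the number of the best answer."
)

def render_judge_prompt(question, candidates):
    # Number the candidates so the judge can refer to them by index.
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    return JUDGE_PROMPT.format(question=question, candidates=numbered)

prompt = render_judge_prompt("2+2?", ["4", "four"])
print(prompt)
```

Swapping the template (e.g. a "synthesize" prompt instead of a ranking prompt) changes the consolidation behavior without touching the provider code.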
Tech stack
See the repository for implementation details (e.g. Python, async LLM clients, config-driven prompts).