The problem
Design and product teams rely on personas to stay user-centred, but creating them is slow and inconsistent. Research lives in scattered docs, interview notes, and PDFs. Manually turning that into structured personas is tedious, and different people produce different formats—so personas often sit unused or get outdated.
The solution
PEP is a backend API that ingests your context documents and interview transcripts, stores them in a vector database for semantic search, and uses an LLM to generate and expand persona sets. You get structured personas (with optional AI-generated images) and prompt completion grounded in your own research.
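The "semantic search" step above boils down to ranking stored vectors by similarity to a query vector. A minimal sketch, assuming cosine similarity and toy 3-d vectors in place of real OpenAI embeddings and ChromaDB (the document IDs and dimensions here are illustrative, not PEP's actual schema):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "embeddings": in PEP these would come from an embedding model
# and live in ChromaDB; the 3-d vectors here are stand-ins.
store = {
    "interview_1": [0.9, 0.1, 0.0],
    "interview_2": [0.1, 0.8, 0.3],
    "context_doc": [0.0, 0.2, 0.9],
}

def search(query_vec, k=2):
    """Return the k document IDs most similar to the query vector."""
    ranked = sorted(store.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]
```

A vector database does exactly this ranking, just with approximate-nearest-neighbour indexes so it stays fast at scale.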
Without PEP
Manual persona creation, inconsistent formats, research locked in long documents, and no single source of truth for “what we know about users.”
With PEP
Upload docs → process and embed → generate persona sets → expand profiles → optional images. Personas stay aligned with your research and are queryable via the API.
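The stage order above can be sketched as plain function composition. This is an in-memory mock, not PEP's API: the stubbed functions stand in for chunking, ChromaDB embedding, and the LLM calls, and every name and shape here is an illustrative assumption.

```python
# Stubbed stand-ins for the real pipeline stages (names are assumptions).

def process(docs):
    """Chunk raw documents; here, naive sentence splitting."""
    return [c for d in docs for c in d.split(". ") if c]

def embed(chunks):
    """Stand-in index: the real system stores model embeddings in ChromaDB."""
    return {c: [float(len(c))] for c in chunks}

def generate_personas(index, n=2):
    """Stand-in for the LLM generating a persona set with basic fields."""
    return [{"name": f"Persona {i + 1}", "profile": None} for i in range(n)]

def expand(personas, index):
    """Stand-in for expanding each persona into a full, research-grounded profile."""
    return [{**p, "profile": f"grounded in {len(index)} chunks"} for p in personas]

docs = ["User A likes speed. User A avoids manuals.",
        "User B values accuracy."]
index = embed(process(docs))
personas = expand(generate_personas(index), index)
```

The point is the ordering: every downstream stage reads from the embedded index, which is why personas stay tied to the uploaded research rather than to the model's priors.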
What it does
- Document processing – Upload context and interview files (txt, md, pdf, docx); chunk and embed for retrieval.
- Vector search – ChromaDB for semantic search over your research when generating personas or completing prompts.
- Multi-step persona generation – (1) Generate a set with basic demographics, (2) Expand each into full profiles, (3) Optionally generate AI images per persona.
- Prompt completion – Complete prompts using retrieved context from your documents.
- REST API – Clear FastAPI endpoints for documents, persona sets, and completion; Swagger/ReDoc included.
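The "chunk and embed" step in the feature list is worth unpacking: before embedding, documents are split into overlapping windows so that context isn't lost at chunk boundaries. A minimal character-based sketch (the sizes are illustrative assumptions; production chunkers typically split on tokens or sentences instead):

```python
def chunk(text, size=200, overlap=40):
    """Split text into overlapping character windows for embedding.

    Each window is `size` chars; consecutive windows share `overlap`
    chars so a sentence cut at a boundary still appears whole somewhere.
    """
    chunks = []
    step = size - overlap
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + size])
    return chunks
```

The overlap is what keeps retrieval robust: a query matching text near a boundary will still find at least one chunk containing the full passage.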
Tech stack
FastAPI, PostgreSQL, ChromaDB, OpenAI (embeddings + chat), Docker Compose for local development or deployment. Designed to scale (e.g. add Redis, Celery, S3) when needed.