Guides

Step-by-step guides for SourcePrep's advanced features.

Built-in Embeddings

SourcePrep ships with a built-in embedding model (nomic-embed-text). No Ollama required. Learn how to use it, switch providers, and pre-download the model.

Model Configuration

Configure local LLMs for analysis, reasoning, and compression. Learn about the recommended Ministral 3 stack and model slots.

Dynamic Model Loading

How SourcePrep manages VRAM by loading and unloading local models on demand. Covers Ollama vs LM Studio, MLX performance, persistent models, and recommended setups.

Smart Context Compression

Two built-in engines: structural compression for code (3–20×) and language-aware compression for docs. No GPU or sidecar needed.

Path Weights

Boost or suppress specific files and folders in search results. Hierarchical weights let you tune relevance without rebuilding.
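To make "hierarchical weights" concrete, here is a minimal sketch of one plausible resolution scheme: the most specific (longest) configured path prefix wins. The function name, the weight values, and the longest-prefix-match semantics are illustrative assumptions, not SourcePrep's actual API; see the guide for the real configuration format.

```python
# Hypothetical sketch: hierarchical path-weight resolution where the
# longest matching prefix determines a file's relevance multiplier.
# Names and semantics are illustrative, not SourcePrep's actual API.

def resolve_weight(path, weights, default=1.0):
    """Return the weight of the longest configured prefix matching `path`."""
    best_prefix, best_weight = "", default
    for prefix, weight in weights.items():
        if path.startswith(prefix) and len(prefix) > len(best_prefix):
            best_prefix, best_weight = prefix, weight
    return best_weight

weights = {
    "src/": 1.0,
    "src/core/": 2.0,   # boost core modules above the rest of src/
    "vendor/": 0.1,     # suppress vendored code
}

print(resolve_weight("src/core/engine.py", weights))  # 2.0 (most specific prefix)
print(resolve_weight("vendor/lib/x.js", weights))     # 0.1
print(resolve_weight("README.md", weights))           # 1.0 (no prefix matches)
```

Because more specific prefixes override broader ones, you can suppress a whole directory while re-boosting one subfolder inside it, without rebuilding the index.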

Team Sync

Set up headless CI/CD indexing so your team shares a single, pre-built trace graph. Supports CPU + BYOK (GitHub Actions) or GPU + local LLM (RunPod, Modal).

Model Setup Advisor

Interactive tool: pick your GPU, choose Local / Hybrid / Cloud, and get personalized model recommendations with VRAM calculations.

Codebase Audit

Autonomous health analysis: architecture reports, gap analysis, tech debt summaries, and actionable recommendations — all generated from the trace graph.

Audit Enrichment

Pipe lint and static-analysis findings through SourcePrep to get structural context on every result — dependent count, hub status, related concepts, and risk score. Accepts simple JSON or SARIF round-trip.
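As a rough illustration of what an enriched finding might look like, here is a hypothetical JSON shape using the fields named above (dependent count, hub status, related concepts, risk score). The field names and values are assumptions for illustration only; consult the guide for the actual input and output schemas.

```json
{
  "rule": "no-unused-vars",
  "file": "src/core/engine.py",
  "line": 42,
  "enrichment": {
    "dependent_count": 17,
    "is_hub": true,
    "related_concepts": ["caching", "invalidation"],
    "risk_score": 0.82
  }
}
```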

Smart Search

How prep_search routes your query to the right backend — symbol lookup, semantic search, concepts, or trace graph — based on whether you asked where something is, how it works, why it exists, or who calls it.
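The where/how/why/who-calls routing described above can be sketched as a simple intent classifier. The backend names and matching rules below are illustrative assumptions about how such routing might work, not prep_search's actual internals.

```python
# Hypothetical sketch: route a query to a backend by its leading
# question word, per the where/how/why/"who calls" distinction.
# Backend names are illustrative, not prep_search's real internals.

def route_query(query):
    q = query.strip().lower()
    if q.startswith("who calls") or "callers of" in q:
        return "trace_graph"      # caller/callee questions
    if q.startswith("where"):
        return "symbol_lookup"    # location questions
    if q.startswith("how"):
        return "semantic_search"  # implementation questions
    if q.startswith("why"):
        return "concepts"         # rationale/design questions
    return "semantic_search"      # reasonable default fallback

print(route_query("where is parse_config defined?"))  # symbol_lookup
print(route_query("who calls flush_cache?"))          # trace_graph
```

The point of the sketch: one query surface, several specialized backends, with intent inferred from phrasing rather than requiring you to pick a backend yourself.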