
Dashboard Guide

A comprehensive tour of the SourcePrep desktop interface panels and controls.

Overview

The dashboard uses a two-pane architecture that separates concerns:

  1. Graph Scope (Panel A): manages the inventory of files (what enters the graph).
  2. Knowledge Pipeline (Panel B): orchestrates the knowledge pipeline that learns how your code connects (how it’s processed).

This layout streamlines the workflow: you define the scope, and the AI engine handles the heavy lifting of tracing, indexing, and enriching your codebase.


The SourcePrep modular dashboard — drag, resize, and arrange panels to your workflow


1. Graph Scope (Panel A)

The Graph Scope panel (left pane) is your inventory control. It defines exactly what code and documentation SourcePrep is allowed to see.

Header & Health

The header displays the total file count tracked by the system and a Health Indicator (e.g., "97% Traced").

  • Green Bar: High coverage. Most files are successfully parsed and indexed.
  • Yellow/Red: Low coverage. You may need to check the Queue or Excluded tabs.
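The indicator is just bucketed coverage. A minimal sketch of how it might be derived; the 90% and 70% cutoffs are illustrative assumptions, not SourcePrep's actual thresholds:

```python
# Hypothetical health-indicator bucketing; thresholds are assumptions.
def health_color(traced: int, total: int) -> str:
    """Bucket graph coverage into an indicator color."""
    if total == 0:
        return "green"  # nothing to trace counts as fully covered
    coverage = traced / total
    if coverage >= 0.90:
        return "green"
    if coverage >= 0.70:
        return "yellow"
    return "red"

print(health_color(97, 100))  # "97% Traced" reads as green
```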

Management Tabs

Queue Tab

Lists files that have been detected by the file watcher but are not yet fully integrated into the graph.

  • Untraced: New files waiting for analysis.
  • Stale: Modified files that need re-parsing.
  • Action: Click Trace Selected or Trace All to hand these off to the Engine.
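The Untraced/Stale split comes down to comparing a file's modification time against when it was last parsed. A sketch under that assumption; the `last_traced` bookkeeping shown here is hypothetical:

```python
# Hypothetical queue triage: watcher-detected files not yet in the graph.
def queue_state(path: str, mtime: float, last_traced: dict[str, float]) -> str:
    if path not in last_traced:
        return "untraced"   # new file waiting for first analysis
    if mtime > last_traced[path]:
        return "stale"      # modified since last parse, needs re-parsing
    return "traced"

traced_at = {"src/app.py": 100.0}
print(queue_state("src/new.py", 120.0, traced_at))  # untraced
print(queue_state("src/app.py", 150.0, traced_at))  # stale
```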

Excluded Tab

Manage files that are intentionally ignored.

  • View active exclusion patterns (e.g., `**/*.min.js`).
  • Un-ignore: Select files to remove them from the blocklist and add them to the Queue.
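Exclusion works by matching each path against the active glob patterns. A minimal sketch using Python's `fnmatch` as a stand-in for whatever glob engine SourcePrep actually uses; its `**` handling is approximate, and the pattern list is illustrative:

```python
import fnmatch

# Illustrative exclusion patterns; fnmatch approximates real glob semantics.
EXCLUDE = ["**/*.min.js", "node_modules/*", "*.lock"]

def is_excluded(path: str) -> bool:
    """True if the path matches any active exclusion pattern."""
    return any(
        fnmatch.fnmatch(path, pat) or fnmatch.fnmatch(path.split("/")[-1], pat)
        for pat in EXCLUDE
    )

print(is_excluded("dist/app.min.js"))  # True
print(is_excluded("src/main.rs"))      # False
```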

Graph Scope panel — file coverage and queue management


2. Knowledge Pipeline (Panel B)

The Knowledge Pipeline panel (right pane) is the “Factory”. It visualizes each step of the process that turns your files (Scope) into context your AI can actually reason over — parsing the structure, reasoning about meaning, and packaging the guides and safeguards your tools consume.

Controls

  • Auto-Pilot: Master toggle. When ON, the engine automatically advances files through the pipeline stages as resources allow.
  • Budget Info: Real-time tracking of token usage (e.g., "12k / 50k tokens") to ensure no surprise costs.
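Conceptually, Budget Info is a running counter against a hard cap. A minimal sketch, assuming the engine refuses further stages once the cap would be exceeded (the 50k cap and refusal behavior are assumptions):

```python
# Hypothetical budget tracker with a hard token cap.
class TokenBudget:
    def __init__(self, cap: int):
        self.cap = cap
        self.used = 0

    def try_spend(self, tokens: int) -> bool:
        """Record usage if it fits under the cap; otherwise refuse the stage."""
        if self.used + tokens > self.cap:
            return False
        self.used += tokens
        return True

    def status(self) -> str:
        return f"{self.used // 1000}k / {self.cap // 1000}k tokens"

budget = TokenBudget(cap=50_000)
budget.try_spend(12_000)
print(budget.status())  # 12k / 50k tokens
```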

What each step does

The pipeline has two modes: Fast Sync runs on every file save to keep the structural map current, and Deep Enrichment runs on idle or a schedule to layer in meaning and relationships. The individual steps below are the detail for the curious — day to day, you just let it run.
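The two-mode split can be sketched as a simple event-driven scheduler. The event names and the mapping of stages to modes below are illustrative assumptions, not SourcePrep's internal API:

```python
# Hypothetical scheduler: which stages run for which trigger.
def pick_stages(event: str) -> list[str]:
    fast_sync = ["structural_graph", "fast_catalogue"]
    deep = ["deep_reasoning", "module_synthesis", "continuous_deepening"]
    if event == "file_saved":
        return fast_sync           # keep the structural map current
    if event in ("idle", "scheduled"):
        return deep                # layer in meaning and relationships
    return []

print(pick_stages("file_saved"))
```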

1. Structural Graph (Rust)
Tree-sitter AST parsing: symbols, imports, call edges, Markdown section extraction.
2. Fast Catalogue (3b LLM)
Rapid triage — classifies each file’s role and produces an initial summary.
3. Relationship Validation (Rust)
LLM-hypothesized relationships validated against the filesystem. Hallucinations discarded.
4. Knowledge Embedding (Embeddings)
Validated nodes embedded for semantic search. Makes catalogue immediately searchable.
5. Deep Reasoning (14b LLM)
Epistemic enrichment. A larger model reasons about each node in graph context — adding domain tags, architecture layers, design patterns, and computing an understanding score (0.0–1.0).
6. Module Synthesis (14b LLM)
Cluster synthesis. Groups enriched nodes by domain into subsystem modules with navigable summaries.
7. Codebase Atlas (Routing)
Builds a pre-retrieval routing index from synthesized modules. Scopes queries to the right subsystem.
8. Continuous Deepening (Loop)
Convergence loop. Re-enriches nodes with decayed understanding scores until the graph stabilizes. Inspired by belief propagation.
9. Deep Knowledge Embedding (Embeddings)
Re-embeds all enriched knowledge, module summaries, and refined connections for maximum retrieval accuracy.
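The Continuous Deepening step (8) is the easiest to picture in code: nodes whose understanding score has decayed below a threshold get re-enriched until none remain, at which point the graph has stabilized. A minimal sketch, assuming a 0.6 threshold and a mock enrichment pass in place of the real 14b model call:

```python
# Hypothetical convergence loop; threshold and score lift are illustrative.
def converge(scores: dict[str, float], threshold: float = 0.6) -> dict[str, float]:
    """Re-enrich low-scoring nodes until all meet the threshold."""
    scores = dict(scores)
    while True:
        stale = [n for n, s in scores.items() if s < threshold]
        if not stale:
            return scores  # graph has stabilized
        for node in stale:
            # a real pass would re-reason about the node in graph context;
            # here re-enrichment just lifts the score toward 1.0
            scores[node] = min(1.0, scores[node] + 0.3)

result = converge({"auth.rs": 0.9, "parser.rs": 0.2})
print(result)
```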

Knowledge Pipeline in action — every step visualized with live progress


3. Global Settings

The Engine Room, where you configure the behavior of the AI engine.

  • Model Selection: Toggle between efficiency (3b models) and depth (14b+ models) for the enrichment stages.
  • Budget Limits: Set hard caps on tokens or processing time.
  • Schedule: Configure auto-save triggers and background processing intervals.
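Taken together, the three groups above amount to a small configuration payload. A hypothetical example; the key names and values are illustrative, not SourcePrep's actual settings schema:

```python
# Illustrative settings payload mirroring the three Global Settings groups.
settings = {
    "models": {
        "fast_catalogue": "3b",    # efficiency for triage
        "deep_reasoning": "14b",   # depth for enrichment
    },
    "budget": {
        "max_tokens": 50_000,      # hard cap on token spend
        "max_minutes": 30,         # hard cap on processing time
    },
    "schedule": {
        "trace_on_save": True,
        "enrich_interval_minutes": 15,
    },
}

print(settings["budget"]["max_tokens"])
```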

4. Search & Context

(Legacy View) The search interface remains available for direct queries against the graph.
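A direct query against the graph boils down to ranking embedded node summaries by similarity to the query embedding. A minimal sketch using cosine similarity, with toy 3-dimensional vectors standing in for real embedding output:

```python
import math

# Toy semantic index: node path -> embedding (real embeddings are much larger).
def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

index = {
    "auth/login.rs": [0.9, 0.1, 0.0],
    "db/schema.rs":  [0.1, 0.9, 0.2],
}

query = [0.8, 0.2, 0.1]  # e.g. an embedded "where is sign-in handled?"
best = max(index, key=lambda node: cosine(query, index[node]))
print(best)  # auth/login.rs
```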


Semantic search — find code by meaning with context assembly