Version: 0.2.3

Basic Configuration

This guide covers the essential configuration options for Sercha: choosing a search mode and configuring AI providers. For advanced configuration like scheduling and pipeline customisation, see Advanced Configuration.

Configuration Methods

Sercha can be configured through:

  1. Settings Wizard - Interactive CLI guide (sercha settings wizard)
  2. Individual Commands - Configure specific settings (sercha settings mode)
  3. Terminal UI - Visual settings panel
  4. Config File - Direct editing of ~/.sercha/config.toml

Search Modes

Sercha supports four search modes, each with different capabilities and requirements:

| Mode | Description | Requirements |
| --- | --- | --- |
| text_only | Keyword search using BM25 | None |
| hybrid | Keyword + semantic vector search | Embedding provider |
| llm_assisted | Keyword + LLM query expansion | LLM provider |
| full | Keyword + semantic + LLM | Both providers |

Choosing a Search Mode

Text Only is the default and works immediately without setup. It's fast and effective for exact phrase matching and code search.

Hybrid adds semantic understanding, finding conceptually related documents even when they don't contain the exact keywords.

LLM Assisted uses an LLM to expand and rewrite your queries, improving results for complex questions.

Full combines all techniques for maximum recall and relevance.
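The provider requirements behind each mode can be read as a simple lookup. As a hypothetical sketch (not Sercha's actual validation code), the check a tool performs before enabling a mode might look like:

```python
# Providers each search mode depends on, mirroring the table above.
MODE_REQUIREMENTS = {
    "text_only": set(),
    "hybrid": {"embedding"},
    "llm_assisted": {"llm"},
    "full": {"embedding", "llm"},
}

def missing_providers(mode, configured):
    """Return the providers a mode needs that are not yet configured."""
    return MODE_REQUIREMENTS[mode] - set(configured)

print(missing_providers("full", ["embedding"]))  # {'llm'}
print(missing_providers("text_only", []))        # set()
```

This is why text_only works out of the box: its requirement set is empty, so no provider setup can block it.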

Setting the Search Mode

# Interactive selection
sercha settings mode

# Or use the wizard for guided setup
sercha settings wizard

Embedding Provider

An embedding provider is required for hybrid and full search modes. Embeddings convert text into numerical vectors for semantic similarity search.
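To illustrate the idea, semantic search typically ranks documents by cosine similarity between their embedding vectors. The sketch below uses hand-picked three-dimensional toy vectors; real models emit hundreds or thousands of dimensions (see the model table below):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: near 1.0 for similar directions, near 0.0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embeddings.
query = [0.9, 0.1, 0.0]      # "laptop battery drains fast"
related = [0.8, 0.2, 0.1]    # "notebook power consumption"
unrelated = [0.0, 0.1, 0.9]  # "pasta recipes"

# The conceptually related text scores higher despite sharing no keywords.
print(cosine_similarity(query, related) > cosine_similarity(query, unrelated))  # True
```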

Supported Providers

| Provider | Local/Cloud | API Key | Default Model |
| --- | --- | --- | --- |
| Ollama | Local | No | nomic-embed-text |
| OpenAI | Cloud | Yes | text-embedding-3-small |

Configuring Embeddings

# Interactive configuration
sercha settings embedding

The command will prompt for:

  1. Provider selection (Ollama or OpenAI)
  2. Model name (or accept the default)
  3. API key (for OpenAI)

Using Ollama

Ollama runs AI models locally on your machine.

  1. Install Ollama from https://ollama.com
  2. Pull an embedding model:
    ollama pull nomic-embed-text
  3. Configure Sercha to use Ollama

Embedding Model Options

| Model | Provider | Dimensions | Notes |
| --- | --- | --- | --- |
| nomic-embed-text | Ollama | 768 | Good balance of quality and speed |
| mxbai-embed-large | Ollama | 1024 | Higher quality, larger vectors |
| all-minilm | Ollama | 384 | Fastest, smallest vectors |
| text-embedding-3-small | OpenAI | 1536 | Good quality, cost-effective |
| text-embedding-3-large | OpenAI | 3072 | Highest quality |

Vector dimensions are automatically detected and configured when you select a model.
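One way such auto-detection can work (a hypothetical sketch, not Sercha's actual implementation) is to embed a short probe string and measure the length of the returned vector:

```python
def detect_dimensions(embed):
    """Infer index dimensions by embedding a probe text and counting components."""
    return len(embed("dimension probe"))

# Stand-in for a real provider call; nomic-embed-text would return 768 floats.
fake_embed = lambda text: [0.0] * 768

print(detect_dimensions(fake_embed))  # 768
```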

LLM Provider

An LLM provider is required for llm_assisted and full search modes. The LLM expands and rewrites queries to improve search results.
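The exact prompt and result fusion are internal to Sercha, but the general technique can be sketched with stubbed LLM and search functions: generate query variants, search each one, and merge the results while deduplicating:

```python
def expand_query(query, llm):
    """Ask an LLM for rephrasings, keeping the original query first."""
    return [query] + llm(f"Rewrite this search query three different ways: {query}")

def search_expanded(query, llm, search):
    """Search every query variant and merge results, deduplicating by document id."""
    seen, merged = set(), []
    for variant in expand_query(query, llm):
        for doc in search(variant):
            if doc not in seen:
                seen.add(doc)
                merged.append(doc)
    return merged

# Stubs standing in for a real LLM and search index.
stub_llm = lambda prompt: ["reset forgotten password", "recover account access"]
stub_search = lambda q: {
    "how do I log in": ["doc1"],
    "reset forgotten password": ["doc2", "doc1"],
    "recover account access": ["doc3"],
}.get(q, [])

print(search_expanded("how do I log in", stub_llm, stub_search))  # ['doc1', 'doc2', 'doc3']
```

The payoff is recall: documents matched only by a rephrasing (doc2, doc3 above) still surface for the original query.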

Supported Providers

| Provider | Local/Cloud | API Key | Default Model |
| --- | --- | --- | --- |
| Ollama | Local | No | llama3.2 |
| OpenAI | Cloud | Yes | gpt-4o-mini |
| Anthropic | Cloud | Yes | claude-3-5-sonnet-latest |

Configuring LLM

# Interactive configuration
sercha settings llm

The command will prompt for:

  1. Provider selection
  2. Model name (or accept the default)
  3. API key (for cloud providers)

Settings Wizard

The settings wizard guides you through complete configuration in one go:

sercha settings wizard

It walks through:

  1. Search Mode - Select how searches should work
  2. Embedding Provider - Configure if required by your mode
  3. LLM Provider - Configure if required by your mode

After each step, the configuration is validated and saved.

Viewing Current Settings

Check your current configuration:

sercha settings

Example output:

Current Settings
================

[Search]
Mode: Full (text + semantic + LLM)

[Embedding]
Provider: OpenAI (cloud)
Model: text-embedding-3-small
API Key: sk-p...96KoA
Status: configured

[LLM]
Provider: OpenAI (cloud)
Model: gpt-4o-mini
API Key: sk-p...96KoA
Status: configured

[Vector Index]
Enabled: yes
Dimensions: 1536

Configuration is valid.

Configuration File Location

Settings are stored in TOML format at:

~/.sercha/config.toml

A complete example configuration:

[search]
mode = "full"

[embedding]
provider = "openai"
model = "text-embedding-3-small"
api_key = "sk-your-api-key"

[llm]
provider = "openai"
model = "gpt-4o-mini"
api_key = "sk-your-api-key"

[vector_index]
enabled = true
dimensions = 1536
precision = "float16"

Next Steps