
sercha settings

Configure search modes, AI providers, and other application settings.

Usage

sercha settings [command]

Subcommands

Command     Description
show        Display current settings
wizard      Interactive setup wizard
mode        Set search mode
embedding   Configure embedding provider
llm         Configure LLM provider

Running sercha settings without a subcommand is equivalent to sercha settings show.


sercha settings show

Display the current settings, including the search mode and AI provider configuration.

Usage

sercha settings
sercha settings show

Output

Current Settings
================

[Search]
Mode: Text + LLM Query Expansion (requires LLM provider)

[Embedding]
Provider: OpenAI
Model: text-embedding-3-small
API Key: sk-p...96KoA
Status: configured

[LLM]
Provider: OpenAI
Model: gpt-4o-mini
API Key: sk-p...96KoA
Status: configured

[Vector Index]
Enabled: no

Configuration is valid.

sercha settings wizard

Run an interactive wizard to configure all settings step by step.

Usage

sercha settings wizard

What It Configures

  1. Search mode - Choose how search works
  2. Embedding provider - Configured only if required by the selected mode
  3. LLM provider - Configured only if required by the selected mode

The wizard validates your configuration and shows any warnings or errors.


sercha settings mode

Set the search mode interactively.

Usage

sercha settings mode

Available Modes

Mode           Description                       Requirements
text_only      Keyword search only (fastest)     None
hybrid         Text + semantic vector search     Embedding provider
llm_assisted   Text + LLM query expansion        LLM provider
full           Text + semantic + LLM             Embedding + LLM providers

Mode Details

Text Only (text_only)

  • Uses BM25 keyword matching
  • Fastest search, no AI required
  • Best for: exact phrase matching, code search

Hybrid (hybrid)

  • Combines keyword and semantic search
  • Requires an embedding model to generate vectors
  • Best for: natural language queries

LLM Assisted (llm_assisted)

  • Uses an LLM to expand and rewrite queries
  • Better understanding of query intent
  • Best for: complex questions

Full (full)

  • Combines all techniques
  • Most comprehensive but requires both providers
  • Best for: maximum recall and relevance
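
The mode is normally chosen interactively with sercha settings mode, but it maps to the search_mode key in $HOME/.sercha.yaml (see Configuration File below), so you can also set it by editing the file. A minimal sketch, assuming you want hybrid mode and already have an embedding provider configured:

# One of: text_only, hybrid, llm_assisted, full
search_mode: hybrid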

sercha settings embedding

Configure the embedding provider interactively.

Usage

sercha settings embedding

Prompts

  • Provider: Ollama (local) or OpenAI (cloud)
  • Model: Embedding model name (defaults provided)
  • API Key: Required for cloud providers (OpenAI)
  • Base URL: For Ollama, defaults to http://localhost:11434

Supported Providers

Provider   Models                                           API Key Required
Ollama     nomic-embed-text, all-minilm, etc.               No
OpenAI     text-embedding-3-small, text-embedding-3-large   Yes
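
If you would rather edit $HOME/.sercha.yaml than answer the prompts, an embedding section for Ollama might look like the sketch below. The provider value ollama and the base_url key are assumptions inferred from the prompts above (the Configuration File example only shows an OpenAI setup), so compare against what the prompts actually write:

embedding:
  provider: ollama                   # assumed identifier for the local provider
  model: nomic-embed-text
  base_url: http://localhost:11434   # assumed key name for the Base URL prompt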

sercha settings llm

Configure the LLM provider interactively.

Usage

sercha settings llm

Prompts

  • Provider: Ollama (local), OpenAI, or Anthropic (cloud)
  • Model: LLM model name (defaults provided)
  • API Key: Required for cloud providers

Supported Providers

Provider    Models                           API Key Required
Ollama      llama3, mistral, etc.            No
OpenAI      gpt-4o-mini, gpt-4o, etc.        Yes
Anthropic   claude-3-5-sonnet-latest, etc.   Yes
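
Similarly, a hand-edited llm section for Anthropic might look like the following sketch; the provider value anthropic is an assumption mirroring the openai identifier used in the Configuration File example:

llm:
  provider: anthropic                # assumed identifier for the Anthropic provider
  model: claude-3-5-sonnet-latest
  api_key: ...                       # your Anthropic API key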

Configuration File

Settings are stored in $HOME/.sercha.yaml. You can also edit this file directly:

search_mode: full
embedding:
  provider: openai
  model: text-embedding-3-small
  api_key: sk-...
llm:
  provider: openai
  model: gpt-4o-mini
  api_key: sk-...
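
For comparison, a fully local setup using Ollama for both providers might look like the sketch below. The ollama identifiers and the base_url key are assumptions inferred from the interactive prompts rather than a confirmed schema; after saving, run sercha settings show to check that the configuration is valid.

search_mode: full
embedding:
  provider: ollama                   # assumed identifier
  model: nomic-embed-text
  base_url: http://localhost:11434   # assumed key name
llm:
  provider: ollama                   # assumed identifier
  model: llama3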