AI Settings
AI settings are no longer just a single API URL + API Key + Model form.
MemoFlow now manages AI as services -> models -> default uses, which works better for multiple providers, local models, and custom integrations.
1. Current structure
My Profile
- Optional background information about your role, topics, or preferred analysis style.
- This is used as supporting context during AI analysis.
- AI Summary still works even if you leave it empty.
Services
- Each service stores an isolated config set: service name, Base URL, API Key, extra headers, and validation status.
- One service can contain multiple models.
- From service detail, you can check connectivity, duplicate the service, delete it, and manage models.
Model capabilities
- Chat: used for AI Summary, analysis reports, quick prompts, and other generation tasks.
- Embedding: used for vector retrieval and evidence recall; optional but recommended.
- When adding or editing a model, you can enable Chat, Embedding, or both.
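The services → models → default uses relationship described above can be pictured as a single configuration tree. A minimal sketch, where every field name and model key is hypothetical and only illustrates the shape, not MemoFlow's actual storage format:

```python
# Hypothetical sketch of the services -> models -> default-uses structure.
# Field names and model keys are illustrative only.
service = {
    "name": "Example OpenAI-compatible service",
    "base_url": "https://api.example.com/v1",
    "api_key": "sk-example",        # optional for some services
    "extra_headers": {},            # for gateways, proxies, or special auth
    "models": [
        {"key": "chat-model-1", "capabilities": ["chat"], "enabled": True},
        {"key": "embed-model-1", "capabilities": ["embedding"], "enabled": True},
    ],
}

# Default uses bind specific models to tasks:
defaults = {
    "generation": "chat-model-1",   # AI Summary, analysis reports, quick prompts
    "embedding": "embed-model-1",   # vector retrieval and evidence recall
}
```

Because each service carries its own Base URL, API Key, and headers, two providers never share credentials, which is why one provider per service instance is easier to troubleshoot.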
2. Three-step service setup flow
Step 1: Choose a template
- Pick a built-in provider template.
- Or choose a custom protocol integration.
- Current custom protocol types are:
- OpenAI-compatible
- Anthropic / Claude-style
- Gemini / Google AI-style
Step 2: Configure the service
Fill in or confirm:
- Service name
- Base URL
- API Key (optional for some services)
- Extra Headers for gateways, proxies, or special auth rules
You can also open the provider's documentation from this step.
Step 3: Configure models
- Add at least one model.
- Each model can be marked with Chat and/or Embedding capability.
- A model can also be marked as:
- Generation Default: used by AI Summary / analysis report / quick prompt
- Embedding Default: used by vector retrieval
3. What model management supports
Model management under each service supports:
- Adding custom models
- Adding from built-in model presets
- Syncing models for providers that support model discovery
- Searching, filtering, sorting, enabling, and disabling models
- Deleting models, with automatic unbinding or warning if a default use depends on them
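The delete-with-unbinding behavior in the last bullet can be sketched as a small function. This is assumed logic for illustration only; `delete_model` and its return shape are hypothetical, not MemoFlow's API:

```python
# Sketch: deleting a model unbinds any default use that points at it
# and reports a warning (hypothetical logic, for illustration only).
def delete_model(models: dict, defaults: dict, key: str) -> list[str]:
    warnings = []
    for use, bound_key in defaults.items():
        if bound_key == key:
            defaults[use] = None
            warnings.append(f"'{use}' default was bound to '{key}' and is now unset")
    models.pop(key, None)
    return warnings
```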
4. How default uses work
The two most important default-use bindings are:
- Generation Default: used for AI Summary, Analysis Report, and Quick Prompt style chat tasks.
- Embedding Default: used for vector retrieval and evidence recall.
Practical interpretation:
- Without a Chat model, AI Summary cannot start.
- Without an Embedding model, AI Summary can still run, but it falls back to direct reading mode, and analysis accuracy is usually lower.
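Put together, the gating rules read like a small decision function. The sketch below restates the behavior described above; it is not MemoFlow source code, and the mode names are invented labels:

```python
# How the two default uses gate AI Summary (restatement of the rules above).
def summary_mode(has_chat_default: bool, has_embedding_default: bool) -> str:
    if not has_chat_default:
        return "blocked"            # AI Summary cannot start
    if not has_embedding_default:
        return "direct-reading"     # runs, but skips vector retrieval
    return "retrieval-augmented"    # full evidence recall
```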
5. How this relates to AI Summary
If you only want the shortest setup path, use this order:
- Configure one working Chat model first.
- Add one Embedding model to improve retrieval quality.
- Then refine My Profile and any custom templates if needed.
That is the biggest difference in the reworked AI flow:
configure services and models first, then choose an insight template to run analysis.
6. Configuration tips
- Prefer HTTPS for cloud providers.
- For local model services, confirm the current device can actually reach the host.
- If a gateway requires custom auth headers, add them in Extra Headers.
- For custom models, make sure the Model Key matches the actual provider-side model identifier.
- If you test multiple providers, keeping one provider per service instance makes troubleshooting easier.
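For the local-reachability tip, a quick TCP probe is often enough to confirm the current device can reach the host before suspecting the model config. A minimal sketch; the host and port are whatever your local model service actually listens on:

```python
# Quick check that a local model host/port is reachable from this device.
import socket

def host_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```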
7. Common issues
I added a service, but AI Summary still cannot start
Check these first:
- Is there at least one enabled Chat model?
- Is that model marked as the generation default?
- Are the Base URL, API Key, and model key valid?
AI Summary runs, but the results feel weak or unstable
Common causes:
- No Embedding model is configured.
- The embedding model is failing repeatedly.
- My Profile or the current template prompt is too vague.
What if connection check fails?
Check these items first:
- Is the Base URL reachable?
- Is the API Key correct?
- Are any required Extra Headers missing?
- Is there a certificate, proxy, or gateway issue?