Open-source AI code assistant for VS Code and JetBrains
Continue is the open-source alternative to GitHub Copilot, available as a VS Code and JetBrains extension. It supports any OpenAI-compatible API, which makes JuiceFactory a drop-in provider — your code stays in EU jurisdiction while you get autocomplete, chat, and agent features inside your editor.
Continue defaults to commercial cloud providers, which means your repository contents (functions, comments, business logic) cross the Atlantic on every keystroke. Pointing Continue at JuiceFactory keeps the entire context window in EU infrastructure, with stateless inference and no retention by default.
In VS Code: Cmd/Ctrl+Shift+P → "Continue: Open config.json". On disk the file lives at ~/.continue/config.json; the JetBrains extension uses the same path.
Continue treats every model as an entry under "models". The provider type is "openai" (because JuiceFactory exposes an OpenAI-compatible API), with apiBase pointing at JuiceFactory.
Autocomplete is a separate model selection — point it at a fast model (qwen3-coder-7b or similar) for sub-200ms response.
Continue reloads its config on save in newer versions, but a window reload guarantees a clean state if you previously had a different provider configured.
```json
{
  "models": [
    {
      "title": "JuiceFactory Qwen3 (chat)",
      "provider": "openai",
      "model": "qwen3-vl",
      "apiKey": "jf-...",
      "apiBase": "https://api.juicefactory.ai/v1"
    }
  ],
  "tabAutocompleteModel": {
    "title": "JuiceFactory Coder (autocomplete)",
    "provider": "openai",
    "model": "qwen3-coder-7b",
    "apiKey": "jf-...",
    "apiBase": "https://api.juicefactory.ai/v1"
  },
  "embeddingsProvider": {
    "provider": "openai",
    "model": "qwen3-embed",
    "apiKey": "jf-...",
    "apiBase": "https://api.juicefactory.ai/v1"
  }
}
```

| Use case | Model | Why |
|---|---|---|
| Chat and refactoring | qwen3-vl | Strongest open-weight chat model with vision; handles multi-file context. |
| Autocomplete | qwen3-coder-7b | Smaller, faster — keeps tab-suggestion latency under 200ms. |
| Codebase indexing | qwen3-embed | Embeddings for Continue's @codebase context provider. |
Verify that apiBase ends with /v1 exactly (not /v1/, and not the bare domain); Continue does not append the path itself.
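A quick way to sanity-check the value before pasting it into config.json — a minimal sketch (the normalize_api_base helper is illustrative, not part of Continue):

```python
def normalize_api_base(url: str) -> str:
    """Normalize an OpenAI-compatible base URL so it ends in /v1 exactly."""
    url = url.strip().rstrip("/")   # drops a trailing slash: .../v1/ -> .../v1
    if not url.endswith("/v1"):     # bare domain: append the version path
        url += "/v1"
    return url

print(normalize_api_base("https://api.juicefactory.ai"))      # bare domain
print(normalize_api_base("https://api.juicefactory.ai/v1/"))  # trailing slash
```

Both calls return the canonical form that Continue expects.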
Switch tabAutocompleteModel to a smaller model. Larger models like qwen3-vl are intended for chat, not inline completions.
Continue passes some OpenAI-specific parameters that not every open-weight serving stack accepts. Setting "completionOptions": { "temperature": 0.2 } explicitly in config.json pins the sampling values Continue sends.
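For example, pinned per model entry in config.json (the 0.2 value is an illustrative default, not a requirement):

```json
{
  "models": [
    {
      "title": "JuiceFactory Qwen3 (chat)",
      "provider": "openai",
      "model": "qwen3-vl",
      "apiKey": "jf-...",
      "apiBase": "https://api.juicefactory.ai/v1",
      "completionOptions": { "temperature": 0.2 }
    }
  ]
}
```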
Yes. Tool calls work via the OpenAI-compatible function_call / tool_calls schema. Continue's agent mode (still beta in v0.9.x) issues tool calls in this format, and JuiceFactory returns them unchanged.
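The response shape in question is the standard OpenAI tool-call schema. A sketch of a chat-completion choice carrying a tool call (the function name and arguments are made up for illustration):

```json
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": null,
        "tool_calls": [
          {
            "id": "call_abc123",
            "type": "function",
            "function": {
              "name": "read_file",
              "arguments": "{\"path\": \"src/main.py\"}"
            }
          }
        ]
      },
      "finish_reason": "tool_calls"
    }
  ]
}
```

Continue parses the tool_calls array, runs the tool, and sends the result back as a "tool" role message.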
Yes — the models array can mix providers. Many teams keep a Claude entry for occasional heavy reasoning and use JuiceFactory for everything else, switching via the model picker.
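A mixed models array might look like this (the anthropic entry is an example; check Continue's provider reference for the exact fields and current model names):

```json
{
  "models": [
    {
      "title": "JuiceFactory Qwen3 (chat)",
      "provider": "openai",
      "model": "qwen3-vl",
      "apiKey": "jf-...",
      "apiBase": "https://api.juicefactory.ai/v1"
    },
    {
      "title": "Claude (heavy reasoning)",
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-latest",
      "apiKey": "sk-ant-..."
    }
  ]
}
```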
No. Continue's anonymous usage telemetry goes directly to its own analytics endpoint (you can disable it in config). Your prompts and code only travel to whichever apiBase you configure.
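Disabling the telemetry is a single top-level flag in config.json (allowAnonymousTelemetry is Continue's documented setting; verify the name against your installed version):

```json
{
  "allowAnonymousTelemetry": false
}
```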
The free tier gives you enough credits to verify that Continue works end-to-end before committing.