
Continue + JuiceFactory

Open-source AI code assistant for VS Code and JetBrains

https://continue.dev

Continue is the open-source alternative to GitHub Copilot, available as a VS Code and JetBrains extension. It supports any OpenAI-compatible API, which makes JuiceFactory a drop-in provider — your code stays in EU jurisdiction while you get autocomplete, chat, and agent features inside your editor.

Why route Continue through JuiceFactory

Continue defaults to commercial cloud providers, which means your repository contents (functions, comments, business logic) cross the Atlantic on every keystroke. Routing Continue through JuiceFactory keeps the entire context window in EU infrastructure, with stateless inference and no retention by default.

Before you start

You need the Continue extension installed in VS Code or JetBrains and a JuiceFactory API key (the jf-... key used in the examples below).

Setup

  1. Open the Continue config file

    In VS Code: Cmd/Ctrl+Shift+P → "Continue: Open config.json". On disk it lives at ~/.continue/config.json. JetBrains uses the same path.

  2. Add JuiceFactory under the models array

    Continue treats every model as an entry in the "models" array. Use provider "openai" (JuiceFactory exposes the OpenAI-compatible API) and point apiBase at JuiceFactory; the full entry is shown in the Configuration section below.

  3. Set tabAutocompleteModel for autocomplete

    Autocomplete uses a separate model selection; point it at a fast model (qwen3-coder-7b or similar) for sub-200ms responses.

  4. Reload the editor

    Continue reloads its config on save in newer versions, but a window reload guarantees a clean state if you previously had a different provider.

Configuration

config.json — chat + autocomplete

{
  "models": [
    {
      "title": "JuiceFactory Qwen3 (chat)",
      "provider": "openai",
      "model": "qwen3-vl",
      "apiKey": "jf-...",
      "apiBase": "https://api.juicefactory.ai/v1"
    }
  ],
  "tabAutocompleteModel": {
    "title": "JuiceFactory Coder (autocomplete)",
    "provider": "openai",
    "model": "qwen3-coder-7b",
    "apiKey": "jf-...",
    "apiBase": "https://api.juicefactory.ai/v1"
  },
  "embeddingsProvider": {
    "provider": "openai",
    "model": "qwen3-embed",
    "apiKey": "jf-...",
    "apiBase": "https://api.juicefactory.ai/v1"
  }
}

Recommended models

Use case               Model            Why
Chat and refactoring   qwen3-vl         Strongest open-weight chat model with vision; handles multi-file context.
Autocomplete           qwen3-coder-7b   Smaller and faster; keeps tab-suggestion latency under 200ms.
Codebase indexing      qwen3-embed      Embeddings for Continue's @codebase context provider.

Troubleshooting

Continue shows "Connection refused" on first message

Verify the apiBase ends with /v1 (not /v1/ and not the bare domain). Continue does not append the path.
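For reference, the exact line from the working configuration above:

"apiBase": "https://api.juicefactory.ai/v1"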

Autocomplete is slow (>500ms)

Switch tabAutocompleteModel to a smaller model. Larger models like qwen3-vl are intended for chat, not inline completions.
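If a smaller model alone does not get you there, Continue's tabAutocompleteOptions block offers further tuning. A minimal sketch, assuming a recent Continue version; exact field names can differ between releases:

"tabAutocompleteOptions": {
  "debounceDelay": 300,
  "maxPromptTokens": 1024
}

debounceDelay (in ms) waits until you pause typing before requesting a completion; maxPromptTokens caps how much context is sent per request.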

Model returns errors with stop sequences or temperature settings

Continue passes some OpenAI-specific parameters that not all open-weight serving stacks accept. Set "completionOptions": { "temperature": 0.2 } explicitly in config.json.
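The block sits inside the model entry it applies to; the temperature value is the one from the tip above:

{
  "title": "JuiceFactory Qwen3 (chat)",
  "provider": "openai",
  "model": "qwen3-vl",
  "apiKey": "jf-...",
  "apiBase": "https://api.juicefactory.ai/v1",
  "completionOptions": {
    "temperature": 0.2
  }
}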

FAQ

Does Continue support tool use against JuiceFactory?

Yes. Tool calls work via the OpenAI-compatible function_call / tool_calls schema. Continue's agent mode (still beta in v0.9.x) issues tool calls in this format, and JuiceFactory returns them unchanged.
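For reference, an abbreviated tool_calls fragment in the OpenAI-compatible response shape (the function name and arguments here are illustrative):

{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "tool_calls": [
          {
            "id": "call_abc123",
            "type": "function",
            "function": {
              "name": "read_file",
              "arguments": "{\"path\": \"src/index.ts\"}"
            }
          }
        ]
      }
    }
  ]
}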

Can I use multiple providers in one Continue config?

Yes — the models array can mix providers. Many teams keep a Claude entry for occasional heavy reasoning and use JuiceFactory for everything else, switching via the model picker.
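A sketch of a mixed models array; the Claude entry uses Continue's anthropic provider with its own API key, and the model ID shown is only an example:

"models": [
  {
    "title": "JuiceFactory Qwen3 (chat)",
    "provider": "openai",
    "model": "qwen3-vl",
    "apiKey": "jf-...",
    "apiBase": "https://api.juicefactory.ai/v1"
  },
  {
    "title": "Claude (heavy reasoning)",
    "provider": "anthropic",
    "model": "claude-3-5-sonnet-latest",
    "apiKey": "sk-ant-..."
  }
]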

Is Continue's telemetry sent through JuiceFactory?

No. Continue's anonymous usage telemetry goes directly to its own analytics endpoint (you can disable it in config). Your prompts and code only travel to whichever apiBase you configure.
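To turn the telemetry off, set the top-level flag in the same config.json:

{
  "allowAnonymousTelemetry": false
}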

Set up in 2 minutes

Free tier gives you enough credits to verify Continue works end-to-end before committing.