Enabling Cody on Sourcegraph Enterprise
- Instructions for self-hosted Sourcegraph Enterprise
- Instructions for Sourcegraph Cloud
- Enabling codebase-aware answers
- Turning Cody off
Cody on self-hosted Sourcegraph Enterprise
Prerequisites
- Sourcegraph 5.1.0 or above
- A Sourcegraph Enterprise subscription with Cody Gateway access, or an account with a third-party LLM provider.
There are two steps required to enable Cody on your enterprise instance:
- Enable Cody on your Sourcegraph instance
- Configure the VS Code extension
Step 1: Enable Cody on your Sourcegraph instance
This requires site-admin privileges.
- First, decide on your LLM provider: the Sourcegraph-provided Cody Gateway, or a third-party provider as described in "Using a third-party LLM provider directly" below (a sketch of a typical Cody Gateway configuration follows this list).
- Go to Site admin > Site configuration (`/site-admin/configuration`) on your instance and set:

  ```json
  {
    // [...]
    "cody.enabled": true
  }
  ```

- Set up a policy to automatically create embeddings for repositories: see "Configuring embeddings".
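For reference, if you are using the Sourcegraph-provided Cody Gateway (the default for Enterprise subscriptions), a combined LLM-provider and embeddings configuration might look like the sketch below. The model names and the exact shape of the `completions` and `embeddings` blocks are assumptions that vary by Sourcegraph version and subscription, so verify them against your instance's configuration schema:

```json
{
  // [...]
  "cody.enabled": true,
  // Route chat, autocomplete, and embeddings through Cody Gateway.
  "completions": {
    "provider": "sourcegraph",
    "chatModel": "anthropic/claude-2",             // illustrative model name
    "fastChatModel": "anthropic/claude-instant-1", // illustrative model name
    "completionModel": "anthropic/claude-instant-1"
  },
  "embeddings": {
    "provider": "sourcegraph"
  }
}
```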
Cody is now fully set up on your instance!
Step 2: Configure the VS Code extension
Now that Cody is enabled on your Sourcegraph instance, any user can configure and use the Cody VS Code extension. This does not require admin privileges.
- If you currently have a previous version of Cody installed, uninstall it and reload VS Code before proceeding to the next steps.
- Search for “Sourcegraph Cody” in the VS Code extension marketplace and install it.
- Reload VS Code and open the Cody extension. Review and accept the terms.
- Now you'll need to point the Cody extension to your Sourcegraph instance. On your Sourcegraph instance, click Settings, then Access tokens (`https://<your-instance>.sourcegraph.com/users/<your-username>/settings/tokens`). Generate an access token and copy it.
- In the Cody VS Code extension, set your instance URL and the access token (see the settings sketch after this list).
- See the "Enabling codebase-aware answers" section below for how to enable codebase-aware answers.
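If you prefer configuring the extension through VS Code's `settings.json` rather than the extension UI, the instance URL maps to a setting along the lines of the sketch below. The setting ID `cody.serverEndpoint` is an assumption based on common extension versions and may differ in yours; the access token itself is normally entered through the extension's sign-in prompt rather than stored in `settings.json`:

```json
{
  // URL of your Sourcegraph instance (assumed setting ID: cody.serverEndpoint).
  "cody.serverEndpoint": "https://sourcegraph.example.com"
}
```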
You're all set!
Step 3: Try Cody!
These are a few things you can ask Cody:
- "What are popular go libraries for building CLIs?"
- Open your workspace, and ask "Do we have a React date picker component in this repository?"
- Right click on a function, and ask Cody to explain it
- Try any of the Cody recipes!

Cody on Sourcegraph Cloud
On Sourcegraph Cloud, Cody is a managed service and you do not need to follow step 1 of the self-hosted guide above.
Step 1: Enable Cody for your instance
Cody can be enabled on demand on your Sourcegraph instance by contacting your account manager. The Sourcegraph team will refer to the handbook.
Step 2: Configure the VS Code extension
Follow Step 2 of the self-hosted guide above.
Step 3: Try Cody!
Follow Step 3 of the self-hosted guide above.
Learn more about running Cody on Sourcegraph Cloud.
Enabling codebase-aware answers
The `Cody: Codebase` setting in VS Code enables codebase-aware answers for the Cody extension. By setting this configuration option to the repository name on your Sourcegraph instance, Cody will be able to provide more accurate and relevant answers to your coding questions, based on the context of the codebase you are currently working in.
- Open the VS Code workspace settings by pressing Cmd/Ctrl+, (or File > Preferences > Settings on Windows & Linux).
- Search for the `Cody: Codebase` setting.
- Enter the repository name as listed on your Sourcegraph instance, for example `github.com/sourcegraph/sourcegraph` without the `https` protocol (see the `settings.json` sketch below).
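In `settings.json`, the `Cody: Codebase` setting corresponds to a JSON key along the lines of `cody.codebase` (an assumption; confirm the exact setting ID in your extension version):

```json
{
  // Repository name exactly as it appears on your Sourcegraph instance,
  // without the protocol prefix.
  "cody.codebase": "github.com/sourcegraph/sourcegraph"
}
```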
Turning Cody off
To turn Cody off:
- Go to Site admin > Site configuration (`/site-admin/configuration`) on your instance and set:

  ```json
  {
    // [...]
    "cody.enabled": false
  }
  ```

- Remove the `completions` and `embeddings` configuration if they exist.
Turning Cody on only for some users
To turn Cody on only for some users, for example when rolling out a Cody POC, follow all the steps in Step 1: Enable Cody on your Sourcegraph instance. Then use the `cody` feature flag to turn Cody on selectively for some users. To do so:
- Go to Site admin > Site configuration (`/site-admin/configuration`) on your instance and set:

  ```json
  {
    // [...]
    "cody.enabled": true,
    "cody.restrictUsersFeatureFlag": true
  }
  ```

- Go to Site admin > Feature flags (`/site-admin/feature-flags`).
- Add a feature flag called `cody`. Select the `boolean` type and set it to `false`.
- Once added, click on the feature flag and use "add overrides" to pick the users that will have access to Cody.

Using a third-party LLM provider directly
Instead of Sourcegraph Cody Gateway, you can configure Sourcegraph to use a third-party provider directly. Currently, this can be one of:
- Anthropic
- OpenAI
- Azure OpenAI (experimental)
Anthropic
First, you must create your own key with Anthropic here. Once you have the key, go to Site admin > Site configuration (`/site-admin/configuration`) on your instance and set:

```json
{
  // [...]
  "cody.enabled": true,
  "completions": {
    "provider": "anthropic",
    "chatModel": "claude-2",              // Or any other model you would like to use
    "fastChatModel": "claude-instant-1",  // Or any other model you would like to use
    "completionModel": "claude-instant-1", // Or any other model you would like to use
    "accessToken": "<key>"
  }
}
```
OpenAI
First, you must create your own key with OpenAI here. Once you have the key, go to Site admin > Site configuration (`/site-admin/configuration`) on your instance and set:

```json
{
  // [...]
  "cody.enabled": true,
  "completions": {
    "provider": "openai",
    "chatModel": "gpt-4",              // Or any other model you would like to use
    "fastChatModel": "gpt-3.5-turbo",  // Or any other model you would like to use
    "completionModel": "gpt-3.5-turbo", // Or any other model you would like to use
    "accessToken": "<key>"
  }
}
```
Azure OpenAI (experimental)
First, make sure you have created a project in the Azure OpenAI portal.
From the project overview, go to Keys and Endpoint and grab one of the keys on that page, as well as the endpoint.
Next, under Model deployments, click "Manage deployments" and make sure you deploy the models you want to use, for example `gpt-35-turbo`. Take note of the deployment name.
Once done, go to Site admin > Site configuration (`/site-admin/configuration`) on your instance and set:

```json
{
  // [...]
  "cody.enabled": true,
  "completions": {
    "provider": "azure-openai",
    "chatModel": "<deployment name of the model>",
    "fastChatModel": "<deployment name of the model>",
    "completionModel": "<deployment name of the model>",
    "endpoint": "<endpoint>",
    "accessToken": "<key>"
  }
}
```
Similarly, you can also use a third-party LLM provider directly for embeddings.
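For example, an OpenAI-backed embeddings configuration might look like the sketch below. The key names and values (`model`, `dimensions`) are assumptions that depend on your Sourcegraph version, so check them against your instance's configuration schema:

```json
{
  // [...]
  "cody.enabled": true,
  "embeddings": {
    "provider": "openai",
    "accessToken": "<key>",
    "model": "text-embedding-ada-002", // illustrative embedding model
    "dimensions": 1536
  }
}
```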