Enabling Cody on Sourcegraph Enterprise

Cody on self-hosted Sourcegraph Enterprise

There are two steps required to enable Cody on your enterprise instance:

  1. Enable Cody on your Sourcegraph instance
  2. Configure the VS Code extension

Step 1: Enable Cody on your Sourcegraph instance

This requires site-admin privileges.

  1. First, configure your desired LLM provider. By default, Cody uses the Sourcegraph Cody Gateway; to use a third-party provider directly, see "Using a third-party LLM provider directly" below.

  2. Go to Site admin > Site configuration (/site-admin/configuration) on your instance and set:

    {
      // [...]
      "cody.enabled": true
    }
    
  3. Set up a policy to automatically create embeddings for repositories; see "Configuring embeddings".

Cody is now fully set up on your instance!

Step 2: Configure the VS Code extension

Now that Cody is enabled on your Sourcegraph instance, any user can configure and use the Cody VS Code extension. This does not require admin privileges.

  1. If you currently have a previous version of Cody installed, uninstall it and reload VS Code before proceeding to the next steps.
  2. Search for "Cody AI" in the VS Code extension marketplace, and install it.

[Image: Sourcegraph Cody in the VS Code Marketplace]

  3. Reload VS Code, and open the Cody extension.

  4. Now you'll need to point the Cody extension to your Sourcegraph instance. Click on "Other Sign In Options..." and select the enterprise option that matches your Sourcegraph version. (To check your Sourcegraph version, go to Sourcegraph > Settings; the version is shown in the bottom left.)

  5. If you are on version 5.1 or above, you only need to follow an authorization flow to give Cody access to your enterprise instance.

    • For Sourcegraph 5.0 and above, you'll need to generate an access token. On your Sourcegraph instance, click on Settings, then on Access tokens (https://<your-instance>.sourcegraph.com/users/<username>/settings/tokens), and generate an access token.
    • After creating your access token, copy it and return to VS Code. Click on the "Other Sign In Options..." button and select "Sign in to Sourcegraph Enterprise instance via Access Token".
    • Enter the URL of your Sourcegraph instance, then paste in your access token (a settings.json sketch follows this list).
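
If you prefer to pre-configure the instance URL rather than typing it during sign-in, a minimal settings.json sketch looks like this. This assumes your extension version supports a cody.serverEndpoint setting (verify against your version); the access token itself is still supplied through the sign-in flow:

    {
      // VS Code settings.json (user or workspace).
      // Assumption: this extension version reads "cody.serverEndpoint";
      // replace the URL with your own instance.
      "cody.serverEndpoint": "https://sourcegraph.example.com"
    }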

You're all set!

Step 3: Try Cody!

These are a few things you can ask Cody:

  • "What are popular go libraries for building CLIs?"
  • Open your workspace, and ask "Do we have a React date picker component in this repository?"
  • Right-click a function, and ask Cody to explain it

See more Cody use cases here.

Cody on Sourcegraph Cloud

On Sourcegraph Cloud, Cody is a managed service and you do not need to follow step 1 of the self-hosted guide above.

Step 1: Enable Cody for your instance

Cody can be enabled on demand on your Sourcegraph instance by contacting your account manager. The Sourcegraph team will refer to the handbook.

Step 2: Configure the VS Code extension

See above.

Step 3: Try Cody!

See above.

Learn more about running Cody on Sourcegraph Cloud.

Enabling codebase-aware answers

The Cody: Codebase setting in VS Code enables codebase-aware answers for the Cody extension. When this option is set to a repository name on your Sourcegraph instance, Cody can provide more accurate and relevant answers to your coding questions, based on the context of the codebase you are currently working in.

  1. Open the VS Code workspace settings by pressing Cmd+, (macOS) or Ctrl+, (Windows and Linux), or go to File > Preferences > Settings.
  2. Search for the Cody: Codebase setting.
  3. Enter the repository name as listed on your Sourcegraph instance (see the settings.json sketch after this list).
    1. For example: github.com/sourcegraph/sourcegraph, without the https:// protocol
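
In settings.json form, and assuming the Cody: Codebase UI setting maps to the cody.codebase key (verify against your extension version), the equivalent configuration is:

    {
      // VS Code settings.json (user or workspace).
      // Assumption: the "Cody: Codebase" UI setting corresponds to "cody.codebase".
      "cody.codebase": "github.com/sourcegraph/sourcegraph"
    }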

Turning Cody off

To turn Cody off:

  1. Go to Site admin > Site configuration (/site-admin/configuration) on your instance and set:

    {
      // [...]
      "cody.enabled": false
    }
    
  2. Remove the completions and embeddings configuration if they exist, as illustrated below.
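
After both changes, the relevant part of your site configuration might look like this sketch; the commented lines stand in for the blocks to delete if they are present:

    {
      // [...]
      "cody.enabled": false
      // Remove these blocks if present:
      // "completions": { ... },
      // "embeddings": { ... }
    }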

Turning Cody on, only for some users

To turn Cody on only for some users, for example when rolling out a Cody POC, first follow all the steps in "Step 1: Enable Cody on your Sourcegraph instance" above. Then use the cody feature flag to turn Cody on selectively for some users. To do so:

  1. Go to Site admin > Site configuration (/site-admin/configuration) on your instance and set:

    {
      // [...]
      "cody.enabled": true,
      "cody.restrictUsersFeatureFlag": true
    }
    
  2. Go to Site admin > Feature flags (/site-admin/feature-flags)

  3. Add a feature flag called cody. Select the boolean type and set it to false.

  4. Once added, click on the feature flag and use add overrides to pick users that will have access to Cody.

[Image: Add overrides]

Using a third-party LLM provider directly

Instead of Sourcegraph Cody Gateway, you can configure Sourcegraph to use a third-party provider directly. Currently, this can be one of:

  • Anthropic
  • OpenAI
  • Azure OpenAI Experimental
  • Anthropic Claude through AWS Bedrock Experimental

Anthropic

First, you must create your own key with Anthropic here. Once you have the key, go to Site admin > Site configuration (/site-admin/configuration) on your instance and set:

{
  // [...]
  "cody.enabled": true,
  "completions": {
    "provider": "anthropic",
    "chatModel": "claude-2", // Or any other model you would like to use
    "fastChatModel": "claude-instant-1", // Or any other model you would like to use
    "completionModel": "claude-instant-1", // Or any other model you would like to use
    "accessToken": "<key>"
  }
}

OpenAI

First, you must create your own key with OpenAI here. Once you have the key, go to Site admin > Site configuration (/site-admin/configuration) on your instance and set:

{
  // [...]
  "cody.enabled": true,
  "completions": {
    "provider": "openai",
    "chatModel": "gpt-4", // Or any other model you would like to use
    "fastChatModel": "gpt-35-turbo", // Or any other model you would like to use
    "completionModel": "gpt-35-turbo", // Or any other model you would like to use
    "accessToken": "<key>"
  }
}

For the full list of supported OpenAI models, see the documentation.

Azure OpenAI Experimental

First, make sure you have created a project in the Azure OpenAI portal.

From the project overview, go to Keys and Endpoint and take note of one of the keys and the endpoint on that page.

Next, under Model deployments, click "Manage deployments" and make sure you deploy the models you want to use, for example gpt-35-turbo. Take note of the deployment name.

Once done, go to Site admin > Site configuration (/site-admin/configuration) on your instance and set:

{
  // [...]
  "cody.enabled": true,
  "completions": {
    "provider": "azure-openai",
    "chatModel": "<deployment name of the model>",
    "fastChatModel": "<deployment name of the model>",
    "completionModel": "<deployment name of the model>",
    "endpoint": "<endpoint>",
    "accessToken": "<key>"
  }
}
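
For instance, with a hypothetical Azure OpenAI resource named contoso-openai and a gpt-35-turbo model deployed under the deployment name gpt-35-turbo-prod (both names are placeholders), the block would look like:

    {
      // [...]
      "cody.enabled": true,
      "completions": {
        "provider": "azure-openai",
        "chatModel": "gpt-35-turbo-prod", // Deployment name, not the model name
        "fastChatModel": "gpt-35-turbo-prod",
        "completionModel": "gpt-35-turbo-prod",
        "endpoint": "https://contoso-openai.openai.azure.com/",
        "accessToken": "<key>"
      }
    }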

Anthropic Claude through AWS Bedrock Experimental

First, make sure you have access to AWS Bedrock (currently in beta). Next, request access to the Anthropic Claude models in Bedrock. This may take some time to provision.

Next, create an IAM user with programmatic access in your AWS account. Depending on your AWS setup, there are different ways to provide access. All completions requests are made from the frontend service, so this service needs to be able to access AWS. You can either use instance role bindings, or directly configure the IAM user credentials in the site configuration.

Once ready, go to Site admin > Site configuration (/site-admin/configuration) on your instance and set:

{
  // [...]
  "cody.enabled": true,
  "completions": {
    "provider": "aws-bedrock",
    "chatModel": "anthropic.claude-v2",
    "fastChatModel": "anthropic.claude-instant-v1",
    "completionModel": "anthropic.claude-instant-v1",
    "endpoint": "<AWS-Region>", // For example: us-west-2.
    "accessToken": "<See below>"
  }
}

For the access token, you can either:

  • Leave it empty and rely on instance role bindings or other AWS configurations that are present in the frontend service.
  • Set it to <ACCESS_KEY_ID>:<SECRET_ACCESS_KEY> if directly configuring the credentials (see the example below).
  • Set it to <ACCESS_KEY_ID>:<SECRET_ACCESS_KEY>:<SESSION_TOKEN> if a session token is also required.
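
For illustration, a directly configured credential pair is simply the two values joined by a colon. This sketch uses AWS's documented example keys as placeholders, not real credentials:

    {
      // [...]
      "completions": {
        // [...]
        // Format: <ACCESS_KEY_ID>:<SECRET_ACCESS_KEY> (placeholder values below)
        "accessToken": "AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
      }
    }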

Similarly, you can also use a third-party LLM provider directly for embeddings.
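
For example, an embeddings block that calls OpenAI directly might look like the sketch below. Treat the exact fields as assumptions for your Sourcegraph version, and consult the embeddings documentation for the authoritative schema:

    {
      // [...]
      "embeddings": {
        // Assumption: "openai" is an accepted embeddings provider value.
        "provider": "openai",
        "accessToken": "<key>"
      }
    }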