Install Sourcegraph with Kubernetes

Deploying Sourcegraph into a Kubernetes cluster is for organizations that need highly scalable and available code search and code intelligence.

The Kubernetes manifests for a Sourcegraph on Kubernetes installation are in the repository deploy-sourcegraph.



1) After meeting all the requirements, make sure you can access your cluster with kubectl.

# Google Cloud Platform (GCP) users must grant their user the ability to create roles in Kubernetes.
# See GCP's documentation for details.
kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole cluster-admin --user $(gcloud config get-value account)

2) Clone the deploy-sourcegraph repository and check out the version tag you wish to deploy:

# 🚨 The master branch tracks development.
# Check out the branch or tag of this repository corresponding to the version of Sourcegraph you wish to deploy, e.g. git checkout 3.24
git clone https://github.com/sourcegraph/deploy-sourcegraph
cd deploy-sourcegraph
export SOURCEGRAPH_VERSION="v3.30.3"
git checkout $SOURCEGRAPH_VERSION

3) Configure the sourcegraph storage class for the cluster by following “Configure a storage class”.
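As an illustration only, a storage class for GKE might look like the fragment below; the `sourcegraph` name, `gce-pd` provisioner, and `pd-ssd` type are GKE-specific assumptions, so follow "Configure a storage class" for the exact values for your provider:

```yaml
# Illustrative GKE example: a retained SSD storage class for
# Sourcegraph's stateful services. Provisioner and disk type
# differ per cloud provider.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sourcegraph
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
reclaimPolicy: Retain
```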

4) (OPTIONAL) By default, Sourcegraph is deployed in the default Kubernetes namespace. If you wish to deploy Sourcegraph in a non-default namespace, we highly recommend using the provided overlays to ensure all manifests are updated correctly. See the “Overlays docs” for full instructions on how to use overlays with Sourcegraph, and learn more in “Use non-default namespace”.
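A sketch of what this can look like, assuming the repository's overlay script and an overlay named `namespaced` (verify the overlay name and namespace against the Overlays docs for your version):

```shell
# Hypothetical sketch: generate manifests rewritten for a non-default
# namespace, create that namespace, then apply the generated output.
./overlay-generate-cluster.sh namespaced generated-cluster
kubectl create namespace ns-sourcegraph
kubectl apply -n ns-sourcegraph --recursive -f generated-cluster
```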

5) (OPTIONAL) If you want to add a large number of repositories to your instance, you should configure the number of gitserver replicas and the number of indexed-search replicas before you continue with the next step. (See “Tuning replica counts for horizontal scalability” for guidelines.)
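The replica counts live in the StatefulSet manifests. As a sketch, the fragment to change looks like the following (the file paths reflect the repository layout, but verify them in your checkout):

```yaml
# In base/gitserver/gitserver.StatefulSet.yaml (and similarly in
# base/indexed-search/indexed-search.StatefulSet.yaml):
spec:
  replicas: 2   # raise from the default before the initial deploy
```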

6) Deploy the desired version of Sourcegraph to your cluster:

./kubectl-apply-all.sh

7) Monitor the status of the deployment:

kubectl get pods -o wide --watch

8) After deployment is completed, verify Sourcegraph is running by temporarily making the frontend port accessible:

kubectl port-forward svc/sourcegraph-frontend 3080:30080

9) Open http://localhost:3080 in your browser and you will see a setup page.

10) 🎉 Congrats, you have Sourcegraph up and running! Now configure your deployment.


See also:
  • The Configuration docs
  • The Overlays docs
  • The Troubleshooting docs



Some updates, such as changing the externalURL for an instance, will require restarting the instance using kubectl. To restart, run kubectl rollout restart deployment sourcegraph-frontend. If updating the externalURL for the instance, only the frontend pods will need to be restarted.
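The restart, plus a check that it completed, looks like this (`kubectl rollout status` blocks until the restarted pods are ready):

```shell
# Restart the frontend deployment (e.g. after changing externalURL)...
kubectl rollout restart deployment sourcegraph-frontend
# ...and wait until the new pods report ready.
kubectl rollout status deployment sourcegraph-frontend
```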

Cluster-admin privileges

Note: Not all organizations have this split in admin privileges. If your organization does not then you don’t need to change anything and can ignore this section.

The default installation includes a few manifests that require cluster-admin privileges to apply. Every resource carries a label indicating whether it requires cluster-admin privileges, so a cluster admin can apply just the manifests that other operators cannot.

  • Manifests deployed by cluster-admin:
./kubectl-apply-all.sh -l sourcegraph-resource-requires=cluster-admin
  • Manifests deployed by non-cluster-admin:
./kubectl-apply-all.sh -l sourcegraph-resource-requires=no-cluster-admin

We also provide an overlay that generates a version of the manifests that does not require cluster-admin privileges.
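As a sketch, assuming that overlay is named `non-privileged` (check the Overlays docs for the exact name in your version):

```shell
# Hypothetical sketch: generate manifests that omit cluster-admin-only
# resources, then apply them as a regular namespace admin.
./overlay-generate-cluster.sh non-privileged generated-cluster
kubectl apply --recursive -f generated-cluster
```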

Cloud installation guides

Security note: If you intend to set this up as a production instance, we recommend you create the cluster in a VPC or other secure network that restricts unauthenticated access from the public Internet. You can later expose the necessary ports via an Internet Gateway or equivalent mechanism. Take care to secure your cluster in a manner that meets your organization’s security requirements.

Follow the instructions linked in the table below to provision a Kubernetes cluster for the infrastructure provider of your choice, using the recommended node and disk types in the table.

Note: Sourcegraph can run on any Kubernetes cluster, so if your infrastructure provider is not listed, see the “Other” row. Pull requests to add rows for more infrastructure providers are welcome!

Compute nodes

Provider                             Node type                         Boot/ephemeral disk size
Amazon EKS (better than plain EC2)   m5.4xlarge                        N/A
AWS EC2                              m5.4xlarge                        N/A
Google Kubernetes Engine (GKE)       n1-standard-16                    100 GB (default)
Azure                                D16 v3                            100 GB (SSD preferred)
Other                                16 vCPU, 60 GiB memory per node   100 GB (SSD preferred)