# Monitoring and tracing

## Built-in monitoring

A Sourcegraph instance includes Prometheus for metrics collection and Grafana for monitoring dashboards.

Site admins can view the monitoring dashboards on a Sourcegraph instance:

  1. Go to User menu > Site admin.
  2. Open the Monitoring page (the last menu item in the left sidebar); the URL path is `/-/debug/grafana`.

See descriptions of the Grafana dashboards provisioned by Sourcegraph.

## Accessing Grafana directly

Follow the instructions below to access Grafana directly by visiting http://localhost:3370/-/debug/grafana. This URL shows the home dashboard; from there you can add, modify, and delete your own dashboards and panels, as well as configure alerts.


### Kubernetes cluster deployments

If you’re using the Kubernetes cluster deployment option,
you can access Grafana directly using Kubernetes port forwarding to your local machine:

```bash
kubectl port-forward svc/grafana 3370:30070
```

### Single-container server deployments

For simplicity, Grafana does not require authentication, as the 3370 port binding is restricted to connections from localhost only.

Therefore, if accessing Grafana locally, the URL will be http://localhost:3370/-/debug/grafana. If Sourcegraph is deployed to a remote server, then access via an SSH tunnel using a tool
such as `sshuttle` is required to establish a secure connection to Grafana.
To access the remote server using `sshuttle` from your local machine:

```bash
sshuttle -r [email protected] 0/0
```

Then visit http://host:3370 in your browser, where `host` is the remote server's address.

## Docker images

### Prometheus

We run our own Prometheus image, which contains a standard Prometheus installation packaged together with rule files and target files for our monitoring.

A directory can be mounted at `/sg_prometheus_add_ons`. It can contain additional config files of two types:

- rule files, which must have the suffix `_rules.yml` in their filename (e.g. `gitserver_rules.yml`)
- target files, which must have the suffix `_targets.yml` in their filename (e.g. `local_targets.yml`)

Rule files and target files must use the latest Prometheus 2.x syntax.
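
As an illustration, the following sketch creates such a directory locally with one rule file and one target file in Prometheus 2.x syntax. The job names, port, and recording rule are hypothetical examples, not part of Sourcegraph's shipped configuration:

```bash
# Create a local add-ons directory to mount at /sg_prometheus_add_ons.
mkdir -p sg_prometheus_add_ons

# A rule file: must end in _rules.yml. The recording rule is illustrative.
cat > sg_prometheus_add_ons/gitserver_rules.yml <<'EOF'
groups:
  - name: gitserver.example
    rules:
      - record: job:up:sum
        expr: sum by (job) (up)
EOF

# A target file: must end in _targets.yml. Host, port, and label are illustrative.
cat > sg_prometheus_add_ons/local_targets.yml <<'EOF'
- labels:
    job: local
  targets:
    - 'localhost:6060'
EOF
```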

The environment variable `PROMETHEUS_ADDITIONAL_FLAGS` can be used to pass additional flags to the `prometheus` executable running in the container.
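
For example, a sketch of starting the container with the add-ons directory mounted and an extra flag passed through. The image tag, local path, and chosen flag are illustrative assumptions, not a definitive invocation:

```bash
# Illustrative only: image tag, local path, and flag are assumptions.
docker run -d --name prometheus \
  -v "$PWD/sg_prometheus_add_ons":/sg_prometheus_add_ons \
  -e PROMETHEUS_ADDITIONAL_FLAGS='--web.enable-lifecycle' \
  sourcegraph/prometheus:latest
```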

### Grafana

We run our own Grafana image, which contains a standard Grafana installation packaged together with provisioned dashboards.

A directory containing dashboard JSON specifications can be mounted in the Docker container at `/sg_grafana_additional_dashboards`, and the dashboards in it will be picked up automatically. Changes to files in that directory are detected automatically while Grafana is running.
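
For example, a sketch of mounting a local directory of dashboard JSON files. The image tag and local path are illustrative assumptions:

```bash
# Illustrative only: image tag and local path are assumptions.
docker run -d --name grafana \
  -v "$PWD/my-dashboards":/sg_grafana_additional_dashboards \
  sourcegraph/grafana:latest
```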

More behavior can be controlled with environment variables.

## Additional monitoring and tracing systems

Sourcegraph supports forwarding internal performance and debugging information to many monitoring and tracing systems.

If you’re using the Kubernetes cluster deployment option, see “Kubernetes cluster administrator guide” and “Prometheus README” for more information.

We are in the process of documenting more common monitoring and tracing deployment scenarios. For help configuring monitoring and tracing on your Sourcegraph instance, use our public issue tracker.

## Health check

An application health check status endpoint is available at the URL path `/healthz`. It returns HTTP 200 if and only if the main frontend server and databases (PostgreSQL and Redis) are available.
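
For example, an external monitor could poll the endpoint like this; the hostname is a placeholder for your instance's address:

```bash
# Exits non-zero unless the endpoint returns a successful HTTP status.
# The hostname is a placeholder.
curl --fail --silent --show-error https://sourcegraph.example.com/healthz
```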

The Kubernetes cluster deployment option ships with comprehensive health checks for each Kubernetes deployment.


## Troubleshooting

Sourcegraph provides tracing, metrics, and logs to help you troubleshoot problems. When investigating an issue, we recommend using the following resources:

  1. View verbose logs (most common)
  2. Inspect traces
  3. Inspect the Go net/trace information for individual services (rarely needed)

### Viewing logs

A Sourcegraph service’s log level is configured via the environment variable `SRC_LOG_LEVEL`. The valid values (from most to least verbose) are:

  • dbug: Debug. Output all logs. Default in cluster deployments.
  • info: Informational.
  • warn: Warning. Default in Docker deployments.
  • eror: Error.
  • crit: Critical.
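
For example, in a single-container deployment you might raise verbosity while debugging. This is a sketch: the image tag is an assumption, and other flags your deployment needs are elided:

```bash
# Sketch only: raise log verbosity to debug for a troubleshooting session.
# Image tag and any other required flags are assumptions/elided.
docker run -e SRC_LOG_LEVEL=dbug sourcegraph/server:latest
```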

If you are having issues with repository syncing, view the output of repo-updater’s logs.

### Inspecting traces (Jaeger or LightStep)

If LightStep or Jaeger is configured (using the `useJaeger` or `lightstep*` critical configuration properties), every HTTP response will include an `X-Trace` header with a link to the trace for that request. Inspecting the spans and logs attached to the trace will help identify the problematic service or dependency.
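
For example, you can surface the header with `curl`; the URL and query below are placeholders:

```bash
# Print only the response headers and filter for the trace link.
# The URL is a placeholder for a request against your instance.
curl --silent --output /dev/null --dump-header - \
  'https://sourcegraph.example.com/search?q=test' | grep -i 'x-trace'
```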

### Viewing Go net/trace information

If you are using Sourcegraph’s Docker deployment, site admins can access Go net/trace information from each service’s debug page. If you are using Sourcegraph cluster, you need to run `kubectl port-forward ${POD_NAME} 6060` to access the debug page. From there, when you are viewing the debug page of a service, click Requests to view the traces for that service.