Dashboards reference

This document is a complete reference for Sourcegraph's available dashboards, with details on how to interpret the panels and metrics.

To learn more about Sourcegraph's metrics and how to view these dashboards, see our metrics guide.

Frontend

Serves all end-user browser and API requests.

To see this dashboard, visit /-/debug/grafana/d/frontend/frontend on your Sourcegraph instance.

Frontend: Search at a glance

frontend: 99th_percentile_search_request_duration

99th percentile successful search request duration over 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100000 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.99, sum by (le)(rate(src_search_streaming_latency_seconds_bucket{source="browser"}[5m])))
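
This query derives the percentile from cumulative histogram buckets: `rate(src_search_streaming_latency_seconds_bucket[5m])` yields per-bucket rates, and `histogram_quantile` locates the bucket containing the target rank and linearly interpolates within it. A simplified Python sketch of that interpolation (the bucket data is invented for illustration, and Prometheus's real implementation handles more edge cases):

```python
# Simplified sketch of the interpolation behind histogram_quantile. The bucket
# data is invented for illustration; real buckets come from
# src_search_streaming_latency_seconds_bucket.
def histogram_quantile(q, buckets):
    """buckets: [(upper_bound_seconds, cumulative_count)] sorted by bound,
    ending with a (float("inf"), total) bucket, as Prometheus histograms do."""
    total = buckets[-1][1]
    if total == 0:
        return float("nan")
    rank = q * total
    prev_bound, prev_count = 0.0, 0.0
    for bound, count in buckets:
        if count >= rank:
            if bound == float("inf"):
                return prev_bound  # cannot interpolate into the +Inf bucket
            fraction = (rank - prev_count) / (count - prev_count)
            return prev_bound + (bound - prev_bound) * fraction
        prev_bound, prev_count = bound, count

# p99 of a toy distribution: 50 requests under 0.1s, 90 under 0.5s, 99 under 1s.
print(histogram_quantile(0.99, [(0.1, 50), (0.5, 90), (1.0, 99), (float("inf"), 100)]))
```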


frontend: 90th_percentile_search_request_duration

90th percentile successful search request duration over 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100001 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.90, sum by (le)(rate(src_search_streaming_latency_seconds_bucket{source="browser"}[5m])))


frontend: hard_timeout_search_responses

Hard timeout search responses every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100010 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: (sum(increase(src_graphql_search_response{status="timeout",source="browser",request_name!="CodeIntelSearch"}[5m])) + sum(increase(src_graphql_search_response{status="alert",alert_type="timed_out",source="browser",request_name!="CodeIntelSearch"}[5m]))) / sum(increase(src_graphql_search_response{source="browser",request_name!="CodeIntelSearch"}[5m])) * 100
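
Every query in this reference is plain PromQL, so it can also be evaluated outside Grafana via Prometheus's instant-query HTTP API. A minimal sketch, assuming you have made the instance's bundled Prometheus reachable locally (the localhost:9090 address is an assumption about your setup, e.g. a port-forward):

```python
# Evaluate a dashboard query with Prometheus's instant-query HTTP API.
# PROM_URL is an assumption (e.g. a local port-forward to the bundled
# Prometheus), not a fixed Sourcegraph endpoint; adjust for your deployment.
import json
import urllib.parse
import urllib.request

PROM_URL = "http://localhost:9090"
query = 'sum(increase(src_graphql_search_response{source="browser"}[5m]))'

url = PROM_URL + "/api/v1/query?" + urllib.parse.urlencode({"query": query})
with urllib.request.urlopen(url) as resp:
    body = json.load(resp)

for series in body["data"]["result"]:
    timestamp, value = series["value"]  # instant vector sample: [unix_ts, "value"]
    print(series["metric"], value)
```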


frontend: hard_error_search_responses

Hard error search responses every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100011 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum by (status)(increase(src_graphql_search_response{status=~"error",source="browser",request_name!="CodeIntelSearch"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="browser",request_name!="CodeIntelSearch"}[5m])) * 100
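
The `/ ignoring(status) group_left` here is PromQL many-to-one vector matching: each per-status numerator series divides the single denominator series once the status label is ignored. A rough Python analogy with invented numbers:

```python
# Rough analogy for `sum by (status)(...) / ignoring(status) group_left sum(...)`:
# each per-status numerator series divides the one denominator series that
# matches after the status label is dropped. Numbers are invented.
numerators = {"error": 12.0, "timeout": 3.0}  # per-status response counts
denominator = 300.0                           # all responses, no status label

error_pct = {status: 100.0 * n / denominator for status, n in numerators.items()}
print(error_pct)  # {'error': 4.0, 'timeout': 1.0}
```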


frontend: partial_timeout_search_responses

Partial timeout search responses every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100012 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum by (status)(increase(src_graphql_search_response{status="partial_timeout",source="browser",request_name!="CodeIntelSearch"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="browser",request_name!="CodeIntelSearch"}[5m])) * 100


frontend: search_alert_user_suggestions

Search alert user suggestions shown every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100013 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum by (alert_type)(increase(src_graphql_search_response{status="alert",alert_type!~"timed_out|no_results__suggest_quotes",source="browser",request_name!="CodeIntelSearch"}[5m])) / ignoring(alert_type) group_left sum(increase(src_graphql_search_response{source="browser",request_name!="CodeIntelSearch"}[5m])) * 100


frontend: page_load_latency

90th percentile page load latency across all routes over 10m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100020 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: histogram_quantile(0.9, sum by(le) (rate(src_http_request_duration_seconds_bucket{route!="raw",route!="blob",route!~"graphql.*"}[10m])))


frontend: blob_load_latency

90th percentile blob load latency over 10m. The 90th percentile of API calls to the blob route in the frontend API is at 5 seconds or more, meaning calls to the blob route are slow to return a response. The blob API route provides the files and code snippets that the UI displays. When this alert fires, the UI will likely experience delays loading files and code snippets. It is likely that the gitserver and/or frontend services are experiencing issues, leading to slower responses.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100021 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: histogram_quantile(0.9, sum by(le) (rate(src_http_request_duration_seconds_bucket{route="blob"}[10m])))
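
Because the alert on this panel is a plain threshold on the query above, the check is easy to reproduce by hand. A sketch under the same assumption as earlier (a locally reachable Prometheus); the 5-second threshold comes from the panel description:

```python
# Reproduce the blob-load-latency alert condition: fire when p90 >= 5 seconds.
# PROM_URL is an assumption about your setup; the threshold is from the panel
# description above.
import json
import urllib.parse
import urllib.request

PROM_URL = "http://localhost:9090"
QUERY = ('histogram_quantile(0.9, sum by(le) '
         '(rate(src_http_request_duration_seconds_bucket{route="blob"}[10m])))')

url = PROM_URL + "/api/v1/query?" + urllib.parse.urlencode({"query": QUERY})
with urllib.request.urlopen(url) as resp:
    result = json.load(resp)["data"]["result"]

p90 = float(result[0]["value"][1]) if result else float("nan")
print(f"p90 blob load latency: {p90:.2f}s", "(at or over threshold)" if p90 >= 5 else "")
```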


Frontend: Search-based code intelligence at a glance

frontend: 99th_percentile_search_codeintel_request_duration

99th percentile code-intel successful search request duration over 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100100 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.99, sum by (le)(rate(src_graphql_field_seconds_bucket{type="Search",field="results",error="false",source="browser",request_name="CodeIntelSearch"}[5m])))


frontend: 90th_percentile_search_codeintel_request_duration

90th percentile code-intel successful search request duration over 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100101 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.90, sum by (le)(rate(src_graphql_field_seconds_bucket{type="Search",field="results",error="false",source="browser",request_name="CodeIntelSearch"}[5m])))


frontend: hard_timeout_search_codeintel_responses

Hard timeout search code-intel responses every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100110 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: (sum(increase(src_graphql_search_response{status="timeout",source="browser",request_name="CodeIntelSearch"}[5m])) + sum(increase(src_graphql_search_response{status="alert",alert_type="timed_out",source="browser",request_name="CodeIntelSearch"}[5m]))) / sum(increase(src_graphql_search_response{source="browser",request_name="CodeIntelSearch"}[5m])) * 100


frontend: hard_error_search_codeintel_responses

Hard error search code-intel responses every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100111 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum by (status)(increase(src_graphql_search_response{status=~"error",source="browser",request_name="CodeIntelSearch"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="browser",request_name="CodeIntelSearch"}[5m])) * 100


frontend: partial_timeout_search_codeintel_responses

Partial timeout search code-intel responses every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100112 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum by (status)(increase(src_graphql_search_response{status="partial_timeout",source="browser",request_name="CodeIntelSearch"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="browser",request_name="CodeIntelSearch"}[5m])) * 100


frontend: search_codeintel_alert_user_suggestions

Search code-intel alert user suggestions shown every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100113 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum by (alert_type)(increase(src_graphql_search_response{status="alert",alert_type!~"timed_out",source="browser",request_name="CodeIntelSearch"}[5m])) / ignoring(alert_type) group_left sum(increase(src_graphql_search_response{source="browser",request_name="CodeIntelSearch"}[5m])) * 100


Frontend: Search GraphQL API usage at a glance

frontend: 99th_percentile_search_api_request_duration

99th percentile successful search API request duration over 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100200 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.99, sum by (le)(rate(src_graphql_field_seconds_bucket{type="Search",field="results",error="false",source="other"}[5m])))


frontend: 90th_percentile_search_api_request_duration

90th percentile successful search API request duration over 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100201 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.90, sum by (le)(rate(src_graphql_field_seconds_bucket{type="Search",field="results",error="false",source="other"}[5m])))


frontend: hard_error_search_api_responses

Hard error search API responses every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100210 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum by (status)(increase(src_graphql_search_response{status=~"error",source="other"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="other"}[5m]))


frontend: partial_timeout_search_api_responses

Partial timeout search API responses every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100211 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum(increase(src_graphql_search_response{status="partial_timeout",source="other"}[5m])) / sum(increase(src_graphql_search_response{source="other"}[5m]))


frontend: search_api_alert_user_suggestions

Search API alert user suggestions shown every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100212 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum by (alert_type)(increase(src_graphql_search_response{status="alert",alert_type!~"timed_out|no_results__suggest_quotes",source="other"}[5m])) / ignoring(alert_type) group_left sum(increase(src_graphql_search_response{status="alert",source="other"}[5m]))


Frontend: Codeintel: Precise code intelligence usage at a glance

frontend: codeintel_resolvers_total

Aggregate graphql operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100300 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
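
The `job=~"^(frontend|sourcegraph-frontend).*"` matcher recurs in every query below. Prometheus label regexes are RE2 and fully anchored, so this selects any job name beginning with either spelling of the frontend service; the sample names below are illustrative assumptions:

```python
import re

# Same pattern as the job matcher. Prometheus anchors regex matchers on both
# ends, so the explicit ^ is redundant there and the trailing .* is what
# allows suffixes. The sample job names are illustrative assumptions.
pattern = re.compile(r"^(frontend|sourcegraph-frontend).*$")
for job in ["frontend", "sourcegraph-frontend-internal", "gitserver"]:
    print(job, "matches" if pattern.fullmatch(job) else "does not match")
```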


frontend: codeintel_resolvers_99th_percentile_duration

Aggregate successful graphql operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100301 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_resolvers_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_resolvers_errors_total

Aggregate graphql operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100302 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_resolvers_error_rate

Aggregate graphql operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100303 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
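
All of the error-rate panels share this formula: errors divided by operations plus errors, scaled to a percentage. Reading the denominator as "successes plus errors" is an assumption inferred from the query shape rather than from the instrumentation itself. The arithmetic as a tiny sketch:

```python
# Error-rate arithmetic shared by these panels: errors / (ops + errors) * 100.
# Treating the two counters as disjoint (successes vs. errors) is an assumption
# read off the query shape; the inputs below are invented.
def error_rate_pct(ops_increase: float, errors_increase: float) -> float:
    denominator = ops_increase + errors_increase
    return 100.0 * errors_increase / denominator if denominator else 0.0

print(error_rate_pct(950.0, 50.0))  # 5.0
```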


frontend: codeintel_resolvers_total

Graphql operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100310 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_resolvers_99th_percentile_duration

99th percentile successful graphql operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100311 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_resolvers_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: codeintel_resolvers_errors_total

Graphql operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100312 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_resolvers_error_rate

Graphql operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100313 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Codeintel: Auto-index enqueuer

frontend: codeintel_autoindex_enqueuer_total

Aggregate enqueuer operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100400 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_autoindex_enqueuer_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_autoindex_enqueuer_99th_percentile_duration

Aggregate successful enqueuer operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100401 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_autoindex_enqueuer_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_autoindex_enqueuer_errors_total

Aggregate enqueuer operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100402 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_autoindex_enqueuer_error_rate

Aggregate enqueuer operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100403 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_autoindex_enqueuer_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: codeintel_autoindex_enqueuer_total

Enqueuer operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100410 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_autoindex_enqueuer_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_autoindex_enqueuer_99th_percentile_duration

99th percentile successful enqueuer operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100411 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_autoindex_enqueuer_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: codeintel_autoindex_enqueuer_errors_total

Enqueuer operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100412 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_autoindex_enqueuer_error_rate

Enqueuer operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100413 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindex_enqueuer_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Codeintel: dbstore stats

frontend: codeintel_uploads_store_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100500 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploads_store_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100501 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploads_store_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100502 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploads_store_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100503 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_uploads_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: codeintel_uploads_store_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100510 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploads_store_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100511 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: codeintel_uploads_store_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100512 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploads_store_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100513 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Workerutil: lsif_indexes dbworker/store stats

frontend: workerutil_dbworker_store_codeintel_index_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100600 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_codeintel_index_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: workerutil_dbworker_store_codeintel_index_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100601 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_workerutil_dbworker_store_codeintel_index_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: workerutil_dbworker_store_codeintel_index_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100602 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_codeintel_index_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: workerutil_dbworker_store_codeintel_index_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100603 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_codeintel_index_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_codeintel_index_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_workerutil_dbworker_store_codeintel_index_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Codeintel: lsifstore stats

frontend: codeintel_uploads_lsifstore_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100700 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploads_lsifstore_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100701 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploads_lsifstore_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100702 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploads_lsifstore_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100703 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: codeintel_uploads_lsifstore_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100710 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploads_lsifstore_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100711 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: codeintel_uploads_lsifstore_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100712 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploads_lsifstore_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100713 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Codeintel: gitserver client

frontend: codeintel_gitserver_total

Aggregate client operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100800 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_gitserver_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_gitserver_99th_percentile_duration

Aggregate successful client operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100801 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_gitserver_errors_total

Aggregate client operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100802 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_gitserver_error_rate

Aggregate client operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100803 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_gitserver_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: codeintel_gitserver_total

Client operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100810 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_gitserver_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_gitserver_99th_percentile_duration

99th percentile successful client operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100811 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: codeintel_gitserver_errors_total

Client operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100812 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_gitserver_error_rate

Client operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100813 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_gitserver_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Codeintel: uploadstore stats

frontend: codeintel_uploadstore_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100900 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploadstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploadstore_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100901 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_uploadstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploadstore_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100902 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploadstore_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100903 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_uploadstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: codeintel_uploadstore_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100910 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploadstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploadstore_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100911 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploadstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: codeintel_uploadstore_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100912 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploadstore_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100913 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_uploadstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Codeintel: dependencies service stats

frontend: codeintel_dependencies_total

Aggregate service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101000 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependencies_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_99th_percentile_duration

Aggregate successful service operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101001 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_dependencies_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_errors_total

Aggregate service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101002 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_error_rate

Aggregate service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101003 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_dependencies_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: codeintel_dependencies_total

Service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101010 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_dependencies_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_99th_percentile_duration

99th percentile successful service operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101011 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_dependencies_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: codeintel_dependencies_errors_total

Service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101012 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_error_rate

Service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101013 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_dependencies_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Codeintel: dependencies service store stats

frontend: codeintel_dependencies_background_total

Aggregate service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101100 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_background_99th_percentile_duration

Aggregate successful service operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101101 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_dependencies_background_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_background_errors_total

Aggregate service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101102 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_background_error_rate

Aggregate service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101103 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: codeintel_dependencies_background_total

Service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101110 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_background_99th_percentile_duration

99th percentile successful service operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101111 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_dependencies_background_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: codeintel_dependencies_background_errors_total

Service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101112 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_background_error_rate

Service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101113 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Codeintel: dependencies service background stats

frontend: codeintel_dependencies_background_total

Aggregate service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101200 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_background_99th_percentile_duration

Aggregate successful service operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101201 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_dependencies_background_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_background_errors_total

Aggregate service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101202 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_background_error_rate

Aggregate service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101203 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: codeintel_dependencies_background_total

Service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101210 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_background_99th_percentile_duration

99th percentile successful service operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101211 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_dependencies_background_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: codeintel_dependencies_background_errors_total

Service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101212 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_background_error_rate

Service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101213 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Codeintel: lockfiles service stats

frontend: codeintel_lockfiles_total

Aggregate service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101300 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_lockfiles_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_lockfiles_99th_percentile_duration

Aggregate successful service operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101301 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_lockfiles_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_lockfiles_errors_total

Aggregate service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101302 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_lockfiles_error_rate

Aggregate service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101303 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_lockfiles_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: codeintel_lockfiles_total

Service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101310 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_lockfiles_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_lockfiles_99th_percentile_duration

99th percentile successful service operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101311 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_lockfiles_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: codeintel_lockfiles_errors_total

Service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101312 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_lockfiles_error_rate

Service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101313 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_lockfiles_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Gitserver: Gitserver Client

frontend: gitserver_client_total

Aggregate client operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101400 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: gitserver_client_99th_percentile_duration

Aggregate successful client operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101401 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (le)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: gitserver_client_errors_total

Aggregate client operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101402 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: gitserver_client_error_rate

Aggregate client operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101403 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: gitserver_client_total

Client operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101410 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (op)(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: gitserver_client_99th_percentile_duration

99th percentile successful client operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101411 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: gitserver_client_errors_total

Client operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101412 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (op)(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: gitserver_client_error_rate

Client operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101413 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (op)(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Batches: dbstore stats

frontend: batches_dbstore_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101500 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_dbstore_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101501 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (le)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_dbstore_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101502 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_dbstore_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101503 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_batches_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: batches_dbstore_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101510 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_dbstore_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101511 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: batches_dbstore_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101512 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_dbstore_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101513 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_batches_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Batches: service stats

frontend: batches_service_total

Aggregate service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101600 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_service_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_service_99th_percentile_duration

Aggregate successful service operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101601 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (le)(rate(src_batches_service_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_service_errors_total

Aggregate service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101602 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_service_error_rate

Aggregate service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101603 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_batches_service_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: batches_service_total

Service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101610 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_service_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_service_99th_percentile_duration

99th percentile successful service operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101611 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_service_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: batches_service_errors_total

Service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101612 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_service_error_rate

Service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101613 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_batches_service_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Batches: Workspace execution dbstore

frontend: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101700 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101701 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101702 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101703 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Batches: HTTP API File Handler

frontend: batches_httpapi_total

Aggregate HTTP handler operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101800 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_httpapi_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_httpapi_99th_percentile_duration

Aggregate successful HTTP handler operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101801 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (le)(rate(src_batches_httpapi_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_httpapi_errors_total

Aggregate HTTP handler operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101802 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_httpapi_error_rate

Aggregate HTTP handler operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101803 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_batches_httpapi_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: batches_httpapi_total

HTTP handler operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101810 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_httpapi_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_httpapi_99th_percentile_duration

99th percentile successful HTTP handler operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101811 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_httpapi_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: batches_httpapi_errors_total

HTTP handler operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101812 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_httpapi_error_rate

HTTP handler operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101813 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_batches_httpapi_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Out-of-band migrations: up migration invocation (one batch processed)

frontend: oobmigration_total

Migration handler operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101900 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_oobmigration_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: oobmigration_99th_percentile_duration

Aggregate successful migration handler operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101901 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_oobmigration_duration_seconds_bucket{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: oobmigration_errors_total

Migration handler operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101902 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_oobmigration_errors_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: oobmigration_error_rate

Migration handler operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101903 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_oobmigration_errors_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_oobmigration_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_oobmigration_errors_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Out-of-band migrations: down migration invocation (one batch processed)

frontend: oobmigration_total

Migration handler operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102000 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_oobmigration_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: oobmigration_99th_percentile_duration

Aggregate successful migration handler operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102001 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_oobmigration_duration_seconds_bucket{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: oobmigration_errors_total

Migration handler operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102002 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_oobmigration_errors_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: oobmigration_error_rate

Migration handler operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102003 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_oobmigration_errors_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_oobmigration_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_oobmigration_errors_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: GRPC server metrics

frontend: frontend_grpc_request_rate_all_methods

Request rate across all methods over 1m

The number of gRPC requests received per second across all methods, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102100 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum(rate(frontend_grpc_server_started_total{instance=~${internalInstance:regex}}[1m]))
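
${internalInstance:regex} is a Grafana dashboard variable rendered with the :regex format, which joins the selected values into a regex alternation before the query reaches Prometheus. Assuming two hypothetical pod names are selected, the query would expand to roughly:

Query (illustrative only): sum(rate(frontend_grpc_server_started_total{instance=~"(sourcegraph-frontend-0|sourcegraph-frontend-1)"}[1m]))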


frontend: frontend_grpc_request_rate_per_method

Request rate per-method over 1m

The number of gRPC requests received per second broken out per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102101 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum(rate(frontend_grpc_server_started_total{grpc_method=~${method:regex},instance=~${internalInstance:regex}}[1m])) by (grpc_method)


frontend: frontend_error_percentage_all_methods

Error percentage across all methods over 1m

The percentage of gRPC requests that fail across all methods, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102110 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (100.0 * ( (sum(rate(frontend_grpc_server_handled_total{grpc_code!="OK",instance=~${internalInstance:regex}}[1m]))) / (sum(rate(frontend_grpc_server_handled_total{instance=~${internalInstance:regex}}[1m]))) ))


frontend: frontend_grpc_error_percentage_per_method

Error percentage per-method over 1m

The percentage of gRPC requests that fail per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102111 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (100.0 * ( (sum(rate(frontend_grpc_server_handled_total{grpc_method=~${method:regex},grpc_code!="OK",instance=~${internalInstance:regex}}[1m])) by (grpc_method)) / (sum(rate(frontend_grpc_server_handled_total{grpc_method=~${method:regex},instance=~${internalInstance:regex}}[1m])) by (grpc_method)) ))


frontend: frontend_p99_response_time_per_method

99th percentile response time per method over 1m

The 99th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102120 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.99, sum by (le, name, grpc_method)(rate(frontend_grpc_server_handling_seconds_bucket{grpc_method=~${method:regex},instance=~${internalInstance:regex}}[1m])))


frontend: frontend_p90_response_time_per_method

90th percentile response time per method over 1m

The 90th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102121 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(frontend_grpc_server_handling_seconds_bucket{grpc_method=~${method:regex},instance=~${internalInstance:regex}}[1m])))


frontend: frontend_p75_response_time_per_method

75th percentile response time per method over 1m

The 75th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102122 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(frontend_grpc_server_handling_seconds_bucket{grpc_method=~${method:regex},instance=~${internalInstance:regex}}[1m])))


frontend: frontend_grpc_response_stream_message_count_per_method

Average streaming response message count per-method over 1m

The average number of response messages sent during a streaming RPC method, broken out per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102130 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: ((sum(rate(frontend_grpc_server_msg_sent_total{grpc_type="server_stream",instance=~${internalInstance:regex}}[1m])) by (grpc_method))/(sum(rate(frontend_grpc_server_started_total{grpc_type="server_stream",instance=~${internalInstance:regex}}[1m])) by (grpc_method)))
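
This is a ratio of two rates, so it reads as messages per stream. Illustratively, if a streaming method sends 500 response messages per second while its streams are being started at 10 per second, the panel shows 500 / 10 = 50 messages per stream on average.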


frontend: frontend_grpc_all_codes_per_method

Response codes rate per-method over 1m

The rate of all generated gRPC response codes per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102140 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum(rate(frontend_grpc_server_handled_total{grpc_method=~${method:regex},instance=~${internalInstance:regex}}[1m])) by (grpc_method, grpc_code)


Frontend: Internal service requests

frontend: internal_indexed_search_error_responses

Internal indexed search error responses every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102200 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum by(code) (increase(src_zoekt_request_duration_seconds_count{code!~"2.."}[5m])) / ignoring(code) group_left sum(increase(src_zoekt_request_duration_seconds_count[5m])) * 100


frontend: internal_unindexed_search_error_responses

Internal unindexed search error responses every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102201 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum by(code) (increase(searcher_service_request_total{code!~"2.."}[5m])) / ignoring(code) group_left sum(increase(searcher_service_request_total[5m])) * 100


frontend: internalapi_error_responses

Internal API error responses every 5m by route

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102202 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: sum by(category) (increase(src_frontend_internal_request_duration_seconds_count{code!~"2.."}[5m])) / ignoring(category) group_left sum(increase(src_frontend_internal_request_duration_seconds_count[5m])) * 100


frontend: 99th_percentile_gitserver_duration

99th percentile successful gitserver query duration over 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102210 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: histogram_quantile(0.99, sum by (le,category)(rate(src_gitserver_request_duration_seconds_bucket{job=~"(sourcegraph-)?frontend"}[5m])))


frontend: gitserver_error_responses

Gitserver error responses every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102211 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (category)(increase(src_gitserver_request_duration_seconds_count{job=~"(sourcegraph-)?frontend",code!~"2.."}[5m])) / ignoring(code) group_left sum by (category)(increase(src_gitserver_request_duration_seconds_count{job=~"(sourcegraph-)?frontend"}[5m])) * 100


frontend: observability_test_alert_warning

Warning test alert metric

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102220 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by(owner) (observability_test_metric_warning)


frontend: observability_test_alert_critical

Critical test alert metric

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102221 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by(owner) (observability_test_metric_critical)


Frontend: Authentication API requests

frontend: sign_in_rate

Rate of API requests to sign-in

Rate (QPS) of requests to sign-in

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102300 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: sum(irate(src_http_request_duration_seconds_count{route="sign-in",method="post"}[5m]))


frontend: sign_in_latency_p99

99th percentile of sign-in latency

99th percentile of sign-in latency

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102301 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: histogram_quantile(0.99, sum(rate(src_http_request_duration_seconds_bucket{route="sign-in",method="post"}[5m])) by (le))


frontend: sign_in_error_rate

Percentage of sign-in requests by HTTP code

Percentage of sign-in requests grouped by HTTP code

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102302 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: sum by (code)(irate(src_http_request_duration_seconds_count{route="sign-in",method="post"}[5m]))/ ignoring (code) group_left sum(irate(src_http_request_duration_seconds_count{route="sign-in",method="post"}[5m]))*100
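
The ignoring (code) group_left clause performs one-to-many vector matching: the single all-codes total on the right-hand side is matched against every per-code series on the left. With hypothetical rates of 9 requests/s for code 200 and 1 request/s for code 401, the panel would show two series at 90% and 10%.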


frontend: sign_up_rate

Rate of API requests to sign-up

Rate (QPS) of requests to sign-up

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102310 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: sum(irate(src_http_request_duration_seconds_count{route="sign-up",method="post"}[5m]))


frontend: sign_up_latency_p99

99th percentile of sign-up latency

99th percentile of sign-up latency

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102311 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: histogram_quantile(0.99, sum(rate(src_http_request_duration_seconds_bucket{route="sign-up",method="post"}[5m])) by (le))


frontend: sign_up_code_percentage

Percentage of sign-up requests by HTTP code

Percentage of sign-up requests grouped by HTTP code

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102312 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: sum by (code)(irate(src_http_request_duration_seconds_count{route="sign-up",method="post"}[5m]))/ ignoring (code) group_left sum(irate(src_http_request_duration_seconds_count{route="sign-up",method="post"}[5m]))*100


frontend: sign_out_rate

Rate of API requests to sign-out

Rate (QPS) of requests to sign-out

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102320 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: sum(irate(src_http_request_duration_seconds_count{route="sign-out"}[5m]))


frontend: sign_out_latency_p99

99th percentile of sign-out latency

99th percentile of sign-out latency

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102321 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: histogram_quantile(0.99, sum(rate(src_http_request_duration_seconds_bucket{route="sign-out"}[5m])) by (le))


frontend: sign_out_error_rate

Percentage of sign-out requests that return a non-303 HTTP code

Percentage of sign-out requests grouped by HTTP code

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102322 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: sum by (code)(irate(src_http_request_duration_seconds_count{route="sign-out"}[5m]))/ ignoring (code) group_left sum(irate(src_http_request_duration_seconds_count{route="sign-out"}[5m]))*100


frontend: account_failed_sign_in_attempts

Rate of failed sign-in attempts

Failed sign-in attempts per minute

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102330 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: sum(rate(src_frontend_account_failed_sign_in_attempts_total[1m]))


frontend: account_lockouts

Rate of account lockouts

Account lockouts per minute

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102331 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: sum(rate(src_frontend_account_lockouts_total[1m]))


Frontend: Organisation GraphQL API requests

frontend: org_members_rate

Rate of API requests to list organisation members

Rate (QPS) of API requests to list organisation members

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102400 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum(irate(src_graphql_request_duration_seconds_count{route="OrganizationMembers"}[5m]))


frontend: org_members_latency_p99

99th percentile latency of API requests to list organisation members

99th percentile latency of API requests to list organisation members

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102401 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: histogram_quantile(0.99, sum(rate(src_graphql_request_duration_seconds_bucket{route="OrganizationMembers"}[5m])) by (le))


frontend: org_members_error_rate

Percentage of API requests to list organisation members that return an error

Percentage of API requests to list organisation members that return an error

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102402 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum (irate(src_graphql_request_duration_seconds_count{route="OrganizationMembers",success="false"}[5m]))/sum(irate(src_graphql_request_duration_seconds_count{route="OrganizationMembers"}[5m]))*100


frontend: create_org_rate

Rate of API requests to create an organisation

Rate (QPS) of API requests to create an organisation

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102410 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum(irate(src_graphql_request_duration_seconds_count{route="CreateOrganization"}[5m]))


frontend: create_org_latency_p99

99th percentile latency of API requests to create an organisation

99th percentile latency of API requests to create an organisation

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102411 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: histogram_quantile(0.99, sum(rate(src_graphql_request_duration_seconds_bucket{route="CreateOrganization"}[5m])) by (le))


frontend: create_org_error_rate

Percentage of API requests to create an organisation that return an error

Percentage of API requests to create an organisation that return an error

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102412 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum (irate(src_graphql_request_duration_seconds_count{route="CreateOrganization",success="false"}[5m]))/sum(irate(src_graphql_request_duration_seconds_count{route="CreateOrganization"}[5m]))*100


frontend: remove_org_member_rate

Rate of API requests to remove organisation member

Rate (QPS) of API requests to remove organisation member

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102420 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum(irate(src_graphql_request_duration_seconds_count{route="RemoveUserFromOrganization"}[5m]))


frontend: remove_org_member_latency_p99

99th percentile latency of API requests to remove organisation member

99th percentile latency of API requests to remove organisation member

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102421 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: histogram_quantile(0.99, sum(rate(src_graphql_request_duration_seconds_bucket{route="RemoveUserFromOrganization"}[5m])) by (le))


frontend: remove_org_member_error_rate

Percentage of API requests to remove organisation member that return an error

Percentage of API requests to remove organisation member that return an error

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102422 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum (irate(src_graphql_request_duration_seconds_count{route="RemoveUserFromOrganization",success="false"}[5m]))/sum(irate(src_graphql_request_duration_seconds_count{route="RemoveUserFromOrganization"}[5m]))*100


frontend: invite_org_member_rate

Rate of API requests to invite a new organisation member

Rate (QPS) of API requests to invite a new organisation member

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102430 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum(irate(src_graphql_request_duration_seconds_count{route="InviteUserToOrganization"}[5m]))


frontend: invite_org_member_latency_p99

99th percentile latency of API requests to invite a new organisation member

99th percentile latency of API requests to invite a new organisation member

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102431 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: histogram_quantile(0.99, sum(rate(src_graphql_request_duration_seconds_bucket{route="InviteUserToOrganization"}[5m])) by (le))


frontend: invite_org_member_error_rate

Percentage of API requests to invite a new organisation member that return an error

Percentage of API requests to invite a new organisation member that return an error

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102432 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum (irate(src_graphql_request_duration_seconds_count{route="InviteUserToOrganization",success="false"}[5m]))/sum(irate(src_graphql_request_duration_seconds_count{route="InviteUserToOrganization"}[5m]))*100


frontend: org_invite_respond_rate

Rate of API requests to respond to an org invitation

Rate (QPS) of API requests to respond to an org invitation

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102440 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum(irate(src_graphql_request_duration_seconds_count{route="RespondToOrganizationInvitation"}[5m]))


frontend: org_invite_respond_latency_p99

99th percentile latency of API requests to respond to an org invitation

99th percentile latency of API requests to respond to an org invitation

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102441 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: histogram_quantile(0.99, sum(rate(src_graphql_request_duration_seconds_bucket{route="RespondToOrganizationInvitation"}[5m])) by (le))


frontend: org_invite_respond_error_rate

Percentage of API requests to respond to an org invitation that return an error

Percentage of API requests to respond to an org invitation that return an error

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102442 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum (irate(src_graphql_request_duration_seconds_count{route="RespondToOrganizationInvitation",success="false"}[5m]))/sum(irate(src_graphql_request_duration_seconds_count{route="RespondToOrganizationInvitation"}[5m]))*100


frontend: org_repositories_rate

Rate of API requests to list repositories owned by an org

Rate (QPS) of API requests to list repositories owned by an org

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102450 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum(irate(src_graphql_request_duration_seconds_count{route="OrgRepositories"}[5m]))


frontend: org_repositories_latency_p99

99th percentile latency of API requests to list repositories owned by an org

99th percentile latency of API requests to list repositories owned by an org

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102451 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: histogram_quantile(0.99, sum(rate(src_graphql_request_duration_seconds_bucket{route="OrgRepositories"}[5m])) by (le))


frontend: org_repositories_error_rate

Percentage of API requests to list repositories owned by an org that return an error

Percentage of API requests to list repositories owned by an org that return an error

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102452 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum (irate(src_graphql_request_duration_seconds_count{route="OrgRepositories",success="false"}[5m]))/sum(irate(src_graphql_request_duration_seconds_count{route="OrgRepositories"}[5m]))*100


Frontend: Cloud KMS and cache

frontend: cloudkms_cryptographic_requests

Cryptographic requests to Cloud KMS every 1m

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102500 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum(increase(src_cloudkms_cryptographic_total[1m]))


frontend: encryption_cache_hit_ratio

Average encryption cache hit ratio per workload

  • Encryption cache hit ratio (hits/(hits+misses)) - minimum across all instances of a workload.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102501 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: min by (kubernetes_name) (src_encryption_cache_hit_total/(src_encryption_cache_hit_total+src_encryption_cache_miss_total))
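
As a worked illustration with hypothetical counters: an instance with 900 hits and 100 misses has a ratio of 900 / (900 + 100) = 0.9. Taking min by (kubernetes_name) surfaces the worst-performing instance of each workload, rather than an average that could mask it.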


frontend: encryption_cache_evictions

Rate of encryption cache evictions - sum across all instances of a given workload

  • Rate of encryption cache evictions (caused by cache exceeding its maximum size) - sum across all instances of a workload

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102502 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (kubernetes_name) (irate(src_encryption_cache_eviction_total[5m]))


Frontend: Database connections

frontend: max_open_conns

Maximum open

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102600 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="frontend"})


frontend: open_conns

Established

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102601 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_open{app_name="frontend"})


frontend: in_use

Used

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102610 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="frontend"})


frontend: idle

Idle

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102611 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="frontend"})


frontend: mean_blocked_seconds_per_conn_request

Mean blocked seconds per conn request

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102620 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="frontend"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="frontend"}[5m]))
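
This divides the total time spent blocked waiting for a connection by the number of requests that had to wait. Illustratively, if connections were blocked for a combined 2 seconds across 100 waited-for requests in the 5m window, the panel shows 2 / 100 = 0.02 seconds per request.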


frontend: closed_max_idle

Closed by SetMaxIdleConns

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102630 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="frontend"}[5m]))


frontend: closed_max_lifetime

Closed by SetConnMaxLifetime

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102631 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="frontend"}[5m]))


frontend: closed_max_idle_time

Closed by SetConnMaxIdleTime

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102632 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="frontend"}[5m]))


Frontend: Container monitoring (not available on server)

frontend: container_missing

Container missing

This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.

  • Kubernetes:
    • Determine if the pod was OOM killed using kubectl describe pod (frontend|sourcegraph-frontend) (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p (frontend|sourcegraph-frontend).
  • Docker Compose:
    • Determine if the pod was OOM killed using docker inspect -f '{{json .State}}' (frontend|sourcegraph-frontend) (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the (frontend|sourcegraph-frontend) container in docker-compose.yml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using docker logs (frontend|sourcegraph-frontend) (note this will include logs from the previous and currently running container).

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102700 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: count by(name) ((time() - container_last_seen{name=~"^(frontend|sourcegraph-frontend).*"}) > 60)
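
Reading the query inside out: time() - container_last_seen gives the seconds since each container was last scraped, the > 60 comparison keeps only containers unseen for over a minute, and count by(name) tallies the matching series per container name. For example, a container last seen 95 seconds ago satisfies 95 > 60 and is counted; one seen 10 seconds ago is not.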


frontend: container_cpu_usage

Container cpu usage total (1m average) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102701 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: cadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}


frontend: container_memory_usage

Container memory usage by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102702 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: cadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}


frontend: fs_io_operations

Filesystem reads and writes rate by instance over 1h

This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with (frontend|sourcegraph-frontend) issues.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102703 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by(name) (rate(container_fs_reads_total{name=~"^(frontend|sourcegraph-frontend).*"}[1h]) + rate(container_fs_writes_total{name=~"^(frontend|sourcegraph-frontend).*"}[1h]))


Frontend: Provisioning indicators (not available on server)

frontend: provisioning_container_cpu_usage_long_term

Container cpu usage total (90th percentile over 1d) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102800 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}[1d])


frontend: provisioning_container_memory_usage_long_term

Container memory usage (1d maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102801 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}[1d])


frontend: provisioning_container_cpu_usage_short_term

Container cpu usage total (5m maximum) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102810 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}[5m])


frontend: provisioning_container_memory_usage_short_term

Container memory usage (5m maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102811 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}[5m])


frontend: container_oomkill_events_total

Container OOMKILL events total by instance

This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. When it occurs frequently, it is an indicator of underprovisioning.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102812 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by (name) (container_oom_events_total{name=~"^(frontend|sourcegraph-frontend).*"})


Frontend: Golang runtime monitoring

frontend: go_goroutines

Maximum active goroutines

A high value here indicates a possible goroutine leak.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102900 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by(instance) (go_goroutines{job=~".*(frontend|sourcegraph-frontend)"})


frontend: go_gc_duration_seconds

Maximum go garbage collection duration

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102901 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by(instance) (go_gc_duration_seconds{job=~".*(frontend|sourcegraph-frontend)"})


Frontend: Kubernetes monitoring (only available on Kubernetes)

frontend: pods_available_percentage

Percentage of pods available

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103000 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by(app) (up{app=~".*(frontend|sourcegraph-frontend)"}) / count by (app) (up{app=~".*(frontend|sourcegraph-frontend)"}) * 100
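
Illustratively, with hypothetical values: if 3 of 4 frontend pods report up, the panel shows 3 / 4 * 100 = 75%.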


Frontend: Search: Ranking

frontend: total_search_clicks

Total number of search clicks over 6h

The total number of search clicks across all search types over a 6 hour window.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103100 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum by (ranked) (increase(src_search_ranking_result_clicked_count[6h]))


frontend: percent_clicks_on_top_search_result

Percent of clicks on top search result over 6h

The percent of clicks that were on the top search result, excluding searches with very few results (3 or fewer).

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103101 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum by (ranked) (increase(src_search_ranking_result_clicked_bucket{le="1",resultsLength=">3"}[6h])) / sum by (ranked) (increase(src_search_ranking_result_clicked_count[6h])) * 100


frontend: percent_clicks_on_top_3_search_results

Percent of clicks on top 3 search results over 6h

The percent of clicks that were on the first 3 search results, excluding searches with very few results (3 or fewer).

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103102 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum by (ranked) (increase(src_search_ranking_result_clicked_bucket{le="3",resultsLength=">3"}[6h])) / sum by (ranked) (increase(src_search_ranking_result_clicked_count[6h])) * 100


frontend: distribution_of_clicked_search_result_type_over_6h_in_percent

Distribution of clicked search result type over 6h

The distribution of clicked search results by result type. At every point in time, the values should sum to 100.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103110 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum(increase(src_search_ranking_result_clicked_count{type="repo"}[6h])) / sum(increase(src_search_ranking_result_clicked_count[6h])) * 100


frontend: percent_zoekt_searches_hitting_flush_limit

Percent of Zoekt searches that hit the flush time limit

The percent of Zoekt searches that hit the flush time limit. These searches don't visit all matches, so they could be missing relevant results, or be non-deterministic.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103111 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum(increase(zoekt_final_aggregate_size_count{reason="timer_expired"}[1d])) / sum(increase(zoekt_final_aggregate_size_count[1d])) * 100


Frontend: Email delivery

frontend: email_delivery_failures

Email delivery failures every 30 minutes

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103200 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum(increase(src_email_send{success="false"}[30m]))


frontend: email_deliveries_total

Total emails successfully delivered every 30 minutes

Total emails successfully delivered.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103210 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum (increase(src_email_send{success="true"}[30m]))


frontend: email_deliveries_by_source

Emails successfully delivered every 30 minutes by source

Emails successfully delivered by source, i.e. product feature.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103211 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (email_source) (increase(src_email_send{success="true"}[30m]))


Frontend: Sentinel queries (only on sourcegraph.com)

frontend: mean_successful_sentinel_duration_over_2h

Mean successful sentinel search duration over 2h

Mean search duration for all successful sentinel queries

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103300 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum(rate(src_search_response_latency_seconds_sum{source=~"searchblitz.*", status="success"}[2h])) / sum(rate(src_search_response_latency_seconds_count{source=~"searchblitz.*", status="success"}[2h]))


frontend: mean_sentinel_stream_latency_over_2h

Mean successful sentinel stream latency over 2h

Mean time to first result for all successful streaming sentinel queries

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103301 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum(rate(src_search_streaming_latency_seconds_sum{source=~"searchblitz.*"}[2h])) / sum(rate(src_search_streaming_latency_seconds_count{source=~"searchblitz.*"}[2h]))


frontend: 90th_percentile_successful_sentinel_duration_over_2h

90th percentile successful sentinel search duration over 2h

90th percentile search duration for all successful sentinel queries

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103310 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.90, sum by (le)(label_replace(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status="success"}[2h]), "source", "$1", "source", "searchblitz_(.*)")))


frontend: 90th_percentile_sentinel_stream_latency_over_2h

90th percentile successful sentinel stream latency over 2h

90th percentile time to first result for all successful streaming sentinel queries

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103311 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.90, sum by (le)(label_replace(rate(src_search_streaming_latency_seconds_bucket{source=~"searchblitz.*"}[2h]), "source", "$1", "source", "searchblitz_(.*)")))


frontend: mean_successful_sentinel_duration_by_query

Mean successful sentinel search duration by query

Mean search duration for successful sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103320 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum(rate(src_search_response_latency_seconds_sum{source=~"searchblitz.*", status="success"}[$sentinel_sampling_duration])) by (source) / sum(rate(src_search_response_latency_seconds_count{source=~"searchblitz.*", status="success"}[$sentinel_sampling_duration])) by (source)


frontend: mean_sentinel_stream_latency_by_query

Mean successful sentinel stream latency by query

Mean time to first result for successful streaming sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103321 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum(rate(src_search_streaming_latency_seconds_sum{source=~"searchblitz.*"}[$sentinel_sampling_duration])) by (source) / sum(rate(src_search_streaming_latency_seconds_count{source=~"searchblitz.*"}[$sentinel_sampling_duration])) by (source)


frontend: 90th_percentile_successful_sentinel_duration_by_query

90th percentile successful sentinel search duration by query

90th percentile search duration for successful sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103330 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.90, sum(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status="success"}[$sentinel_sampling_duration])) by (le, source))


frontend: 90th_percentile_successful_stream_latency_by_query

90th percentile successful sentinel stream latency by query

90th percentile time to first result for successful streaming sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103331 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.90, sum(rate(src_search_streaming_latency_seconds_bucket{source=~"searchblitz.*"}[$sentinel_sampling_duration])) by (le, source))


frontend: 90th_percentile_unsuccessful_duration_by_query

90th percentile unsuccessful sentinel search duration by query

90th percentile search duration of unsuccessful sentinel queries (by error or timeout), broken down by query. Useful for debugging how the performance of failed requests affects UX.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103340 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.90, sum(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status!="success"}[$sentinel_sampling_duration])) by (le, source))


frontend: 75th_percentile_successful_sentinel_duration_by_query

75th percentile successful sentinel search duration by query

75th percentile search duration of successful sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103350 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.75, sum(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status="success"}[$sentinel_sampling_duration])) by (le, source))


frontend: 75th_percentile_successful_stream_latency_by_query

75th percentile successful sentinel stream latency by query

75th percentile time to first result for successful streaming sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103351 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.75, sum(rate(src_search_streaming_latency_seconds_bucket{source=~"searchblitz.*"}[$sentinel_sampling_duration])) by (le, source))


frontend: 75th_percentile_unsuccessful_duration_by_query

75th percentile unsuccessful sentinel search duration by query

75th percentile search duration of unsuccessful sentinel queries (by error or timeout), broken down by query. Useful for debugging how the performance of failed requests affects UX.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103360 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.75, sum(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status!="success"}[$sentinel_sampling_duration])) by (le, source))


frontend: unsuccessful_status_rate

Unsuccessful status rate

The rate of unsuccessful sentinel queries, broken down by failure type.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103370 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum(rate(src_graphql_search_response{source=~"searchblitz.*", status!="success"}[$sentinel_sampling_duration])) by (status)


Frontend: Incoming webhooks

frontend: p95_time_to_handle_incoming_webhooks

P95 time to handle incoming webhooks

P95 response time to incoming webhook requests from code hosts.

Increases in response time can point to too much load on the database to keep up with the incoming requests.

See the incoming webhooks documentation for more details: https://docs.sourcegraph.com/admin/config/webhooks/incoming

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103400 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: histogram_quantile(0.95, sum (rate(src_http_request_duration_seconds_bucket{route=~"webhooks|github.webhooks|gitlab.webhooks|bitbucketServer.webhooks|bitbucketCloud.webhooks"}[5m])) by (le, route))
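
To get a single aggregate p95 across all webhook routes rather than the per-route series above, dropping route from the grouping should work:

histogram_quantile(0.95, sum by (le)(rate(src_http_request_duration_seconds_bucket{route=~"webhooks|github.webhooks|gitlab.webhooks|bitbucketServer.webhooks|bitbucketCloud.webhooks"}[5m])))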


Frontend: Search aggregations: proactive and expanded search aggregations

frontend: insights_aggregations_total

Aggregate search aggregations operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103500 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_insights_aggregations_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: insights_aggregations_99th_percentile_duration

Aggregate successful search aggregations operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103501 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum by (le)(rate(src_insights_aggregations_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: insights_aggregations_errors_total

Aggregate search aggregations operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103502 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: insights_aggregations_error_rate

Aggregate search aggregations operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103503 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_insights_aggregations_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: insights_aggregations_total

Search aggregations operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103510 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum by (op,extended_mode)(increase(src_insights_aggregations_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: insights_aggregations_99th_percentile_duration

99th percentile successful search aggregations operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103511 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op,extended_mode)(rate(src_insights_aggregations_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: insights_aggregations_errors_total

Search aggregations operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103512 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum by (op,extended_mode)(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: insights_aggregations_error_rate

Search aggregations operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103513 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum by (op,extended_mode)(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op,extended_mode)(increase(src_insights_aggregations_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op,extended_mode)(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Git Server

Stores, manages, and operates Git repositories.

To see this dashboard, visit /-/debug/grafana/d/gitserver/gitserver on your Sourcegraph instance.

gitserver: memory_working_set

Memory working set

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100000 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (container_label_io_kubernetes_pod_name) (container_memory_working_set_bytes{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~${shard:regex}})


gitserver: go_routines

Go routines

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100001 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: go_goroutines{app="gitserver", instance=~${shard:regex}}


gitserver: cpu_throttling_time

Container CPU throttling time %

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100010 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (container_label_io_kubernetes_pod_name) ((rate(container_cpu_cfs_throttled_periods_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~${shard:regex}}[5m]) / rate(container_cpu_cfs_periods_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~${shard:regex}}[5m])) * 100)


gitserver: cpu_usage_seconds

CPU usage seconds

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100011 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_cpu_usage_seconds_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~${shard:regex}}[5m]))


gitserver: disk_space_remaining

Disk space remaining by instance

Indicates disk space remaining for each gitserver instance, which is used to determine when to start evicting least-used repository clones from disk (default 10%, configured by SRC_REPOS_DESIRED_PERCENT_FREE).

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100020 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: (src_gitserver_disk_space_available / src_gitserver_disk_space_total) * 100
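
As a quick check, filtering this expression against the eviction threshold should list only the instances nearing eviction (10 is the default; substitute your configured SRC_REPOS_DESIRED_PERCENT_FREE value if you have changed it):

(src_gitserver_disk_space_available / src_gitserver_disk_space_total) * 100 < 10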


gitserver: io_reads_total

I/O reads total

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100030 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (container_label_io_kubernetes_container_name) (rate(container_fs_reads_total{container_label_io_kubernetes_container_name="gitserver"}[5m]))


gitserver: io_writes_total

I/O writes total

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100031 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (container_label_io_kubernetes_container_name) (rate(container_fs_writes_total{container_label_io_kubernetes_container_name="gitserver"}[5m]))


gitserver: io_reads

I/O reads

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100040 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_fs_reads_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~${shard:regex}}[5m]))


gitserver: io_writes

I/O writes

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100041 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_fs_writes_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~${shard:regex}}[5m]))


gitserver: io_read_througput

I/O read throughput

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100050 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_fs_reads_bytes_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~${shard:regex}}[5m]))


gitserver: io_write_throughput

I/O write throughput

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100051 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_fs_writes_bytes_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~${shard:regex}}[5m]))


gitserver: running_git_commands

Git commands running on each gitserver instance

A high value signals load.

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100060 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (instance, cmd) (src_gitserver_exec_running{instance=~${shard:regex}})


gitserver: git_commands_received

Rate of git commands received across all instances

Per-second rate per command across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100061 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (cmd) (rate(src_gitserver_exec_duration_seconds_count[5m]))


gitserver: repository_clone_queue_size

Repository clone queue size

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100070 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum(src_gitserver_clone_queue)


gitserver: repository_existence_check_queue_size

Repository existence check queue size

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100071 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum(src_gitserver_lsremote_queue)


gitserver: echo_command_duration_test

Echo test command duration

A high value here likely indicates a problem, especially if consistently high. You can query for individual commands using sum by (cmd)(src_gitserver_exec_running) in Grafana (/-/debug/grafana) to see if a specific gitserver command might be spiking in frequency.

If this value is consistently high, consider the following:

  • Single container deployments: Upgrade to a Docker Compose deployment, which offers better scalability and resource isolation.
  • Kubernetes and Docker Compose: Check that you are running a similar number of gitserver replicas and that their CPU/memory limits are allocated according to what is shown in the Sourcegraph resource estimator.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100080 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max(src_gitserver_echo_duration_seconds)
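
If the aggregate value is high, breaking the same gauge out per instance should show whether a single shard is the outlier (a per-instance variant of the query above):

max by (instance) (src_gitserver_echo_duration_seconds)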


gitserver: frontend_internal_api_error_responses

Frontend-internal API error responses every 5m by route

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100081 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (category)(increase(src_frontend_internal_request_duration_seconds_count{job="gitserver",code!~"2.."}[5m])) / ignoring(category) group_left sum(increase(src_frontend_internal_request_duration_seconds_count{job="gitserver"}[5m]))


gitserver: src_gitserver_repo_count

Number of repositories on gitserver

This metric is only for informational purposes. It indicates the total number of repositories on gitserver.

It does not indicate any problems with the instance.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100090 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: src_gitserver_repo_count


Git Server: Gitserver: Gitserver API (powered by internal/observation)

gitserver: gitserver_api_total

Aggregate graphql operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100100 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum(increase(src_gitserver_api_total{job=~"^gitserver.*"}[5m]))


gitserver: gitserver_api_99th_percentile_duration

Aggregate successful graphql operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100101 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (le)(rate(src_gitserver_api_duration_seconds_bucket{job=~"^gitserver.*"}[5m]))


gitserver: gitserver_api_errors_total

Aggregate graphql operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100102 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum(increase(src_gitserver_api_errors_total{job=~"^gitserver.*"}[5m]))


gitserver: gitserver_api_error_rate

Aggregate graphql operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100103 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum(increase(src_gitserver_api_errors_total{job=~"^gitserver.*"}[5m])) / (sum(increase(src_gitserver_api_total{job=~"^gitserver.*"}[5m])) + sum(increase(src_gitserver_api_errors_total{job=~"^gitserver.*"}[5m]))) * 100


gitserver: gitserver_api_total

Graphql operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100110 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (op)(increase(src_gitserver_api_total{job=~"^gitserver.*"}[5m]))


gitserver: gitserver_api_99th_percentile_duration

99th percentile successful graphql operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100111 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_gitserver_api_duration_seconds_bucket{job=~"^gitserver.*"}[5m])))


gitserver: gitserver_api_errors_total

Graphql operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100112 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (op)(increase(src_gitserver_api_errors_total{job=~"^gitserver.*"}[5m]))


gitserver: gitserver_api_error_rate

Graphql operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100113 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (op)(increase(src_gitserver_api_errors_total{job=~"^gitserver.*"}[5m])) / (sum by (op)(increase(src_gitserver_api_total{job=~"^gitserver.*"}[5m])) + sum by (op)(increase(src_gitserver_api_errors_total{job=~"^gitserver.*"}[5m]))) * 100


Git Server: Global operation semaphores

gitserver: batch_log_semaphore_wait_99th_percentile_duration

Aggregate successful batch log semaphore operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100200 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (le)(rate(src_batch_log_semaphore_wait_duration_seconds_bucket{job=~"^gitserver.*"}[5m]))


Git Server: Gitservice for internal cloning

gitserver: aggregate_gitservice_request_duration

95th percentile gitservice request duration aggregate

A high value means any internal service trying to clone a repo from gitserver is slowed down.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100300 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: histogram_quantile(0.95, sum(rate(src_gitserver_gitservice_duration_seconds_bucket{type="gitserver", error="false"}[5m])) by (le))


gitserver: gitservice_request_duration

95th percentile gitservice request duration per shard

A high value means any internal service trying to clone a repo from gitserver is slowed down.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100301 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: histogram_quantile(0.95, sum(rate(src_gitserver_gitservice_duration_seconds_bucket{type="gitserver", error="false", instance=~${shard:regex}}[5m])) by (le, instance))


gitserver: aggregate_gitservice_error_request_duration

95th percentile gitservice error request duration aggregate

95th percentile gitservice error request duration aggregate

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100310 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: histogram_quantile(0.95, sum(rate(src_gitserver_gitservice_duration_seconds_bucket{type="gitserver", error="true"}[5m])) by (le))


gitserver: gitservice_request_duration

95th percentile gitservice error request duration per shard

95th percentile gitservice error request duration per shard

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100311 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: histogram_quantile(0.95, sum(rate(src_gitserver_gitservice_duration_seconds_bucket{type="gitserver", error="true", instance=~${shard:regex}}[5m])) by (le, instance))


gitserver: aggregate_gitservice_request_rate

Aggregate gitservice request rate

Aggregate gitservice request rate

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100320 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum(rate(src_gitserver_gitservice_duration_seconds_count{type="gitserver", error="false"}[5m]))


gitserver: gitservice_request_rate

Gitservice request rate per shard

Per shard gitservice request rate

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100321 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum(rate(src_gitserver_gitservice_duration_seconds_count{type="gitserver", error="false", instance=~${shard:regex}}[5m]))


gitserver: aggregate_gitservice_request_error_rate

Aggregate gitservice request error rate

Aggregate gitservice request error rate

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100330 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum(rate(src_gitserver_gitservice_duration_seconds_count{type="gitserver", error="true"}[5m]))


gitserver: gitservice_request_error_rate

Gitservice request error rate per shard

Per shard gitservice request error rate

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100331 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum(rate(src_gitserver_gitservice_duration_seconds_count{type="gitserver", error="true", instance=~${shard:regex}}[5m]))


gitserver: aggregate_gitservice_requests_running

Aggregate gitservice requests running

Aggregate gitservice requests running

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100340 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum(src_gitserver_gitservice_running{type="gitserver"})


gitserver: gitservice_requests_running

Gitservice requests running per shard

Per shard gitservice requests running

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100341 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum(src_gitserver_gitservice_running{type="gitserver", instance=~${shard:regex}}) by (instance)


Git Server: Gitserver cleanup jobs

gitserver: janitor_running

If the janitor process is running

1 if the janitor process is currently running

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100400 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max by (instance) (src_gitserver_janitor_running)
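
Since the description suggests this is a 0/1 gauge, a simple comparison should surface instances where the janitor is not running, e.g. for ad-hoc checks or custom alerting (a sketch, not a shipped alert):

max by (instance) (src_gitserver_janitor_running) == 0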


gitserver: janitor_job_duration

95th percentile job run duration

95th percentile job run duration

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100410 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: histogram_quantile(0.95, sum(rate(src_gitserver_janitor_job_duration_seconds_bucket[5m])) by (le, job_name))


gitserver: janitor_job_failures

Failures over 5m (by job)

The rate of failures over 5m (by job).

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100420 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (job_name) (rate(src_gitserver_janitor_job_duration_seconds_count{success="false"}[5m]))


gitserver: repos_removed

Repositories removed due to disk pressure

Repositories removed due to disk pressure

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100430 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (instance) (rate(src_gitserver_repos_removed_disk_pressure[5m]))


gitserver: non_existent_repos_removed

Repositories removed because they are not defined in the DB

Repositories removed because they are not defined in the DB

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100440 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (instance) (increase(src_gitserver_non_existing_repos_removed[5m]))


gitserver: sg_maintenance_reason

Successful sg maintenance jobs over 1h (by reason)

The rate of successful sg maintenance jobs and the reason they were triggered.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100450 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (reason) (rate(src_gitserver_maintenance_status{success="true"}[1h]))


gitserver: git_prune_skipped

Successful git prune jobs over 1h

The rate of successful git prune jobs over 1h and whether they were skipped.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100460 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (skipped) (rate(src_gitserver_prune_status{success="true"}[1h]))


gitserver: search_latency

Mean time until first result is sent

Mean latency (time to first result) of gitserver search requests

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100500 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: rate(src_gitserver_search_latency_seconds_sum[5m]) / rate(src_gitserver_search_latency_seconds_count[5m])


gitserver: search_duration

Mean search duration

Mean duration of gitserver search requests

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100501 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: rate(src_gitserver_search_duration_seconds_sum[5m]) / rate(src_gitserver_search_duration_seconds_count[5m])


gitserver: search_rate

Rate of searches run by pod

The rate of searches executed on gitserver by pod

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100510 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: rate(src_gitserver_search_latency_seconds_count{instance=~${shard:regex}}[5m])


gitserver: running_searches

Number of searches currently running by pod

The number of searches currently executing on gitserver by pod

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100511 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum by (instance) (src_gitserver_search_running{instance=~${shard:regex}})


Git Server: Repos disk I/O metrics

gitserver: repos_disk_reads_sec

Read request rate over 1m (per instance)

The number of read requests that were issued to the device per second.

Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100600 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: (max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~${shard:regex}} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~"node-exporter.*"}[1m])))))


gitserver: repos_disk_writes_sec

Write request rate over 1m (per instance)

The number of write requests that were issued to the device per second.

Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100601 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: (max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~${shard:regex}} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~"node-exporter.*"}[1m])))))


gitserver: repos_disk_read_throughput

Read throughput over 1m (per instance)

The amount of data that was read from the device per second.

Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100610 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: (max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~${shard:regex}} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_bytes_total{instance=~"node-exporter.*"}[1m])))))


gitserver: repos_disk_write_throughput

Write throughput over 1m (per instance)

The amount of data that was written to the device per second.

Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100611 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: (max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~${shard:regex}} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_written_bytes_total{instance=~"node-exporter.*"}[1m])))))


gitserver: repos_disk_read_duration

Average read duration over 1m (per instance)

The average time for read requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.

Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100620 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: (((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~${shard:regex}} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_time_seconds_total{instance=~"node-exporter.*"}[1m])))))) / ((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~${shard:regex}} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~"node-exporter.*"}[1m])))))))


gitserver: repos_disk_write_duration

Average write duration over 1m (per instance)

The average time for write requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.

Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100621 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: (((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~${shard:regex}} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_write_time_seconds_total{instance=~"node-exporter.*"}[1m])))))) / ((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~${shard:regex}} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~"node-exporter.*"}[1m])))))))


gitserver: repos_disk_read_request_size

Average read request size over 1m (per instance)

The average size of read requests that were issued to the device.

Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100630 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: (((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~${shard:regex}} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_bytes_total{instance=~"node-exporter.*"}[1m])))))) / ((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~${shard:regex}} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~"node-exporter.*"}[1m])))))))


gitserver: repos_disk_write_request_size

Average write request size over 1m (per instance)

The average size of write requests that were issued to the device.

Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100631 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: (((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~${shard:regex}} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_written_bytes_total{instance=~"node-exporter.*"}[1m])))))) / ((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~${shard:regex}} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~"node-exporter.*"}[1m])))))))


gitserver: repos_disk_reads_merged_sec

Merged read request rate over 1m (per instance)

The number of read requests merged per second that were queued to the device.

Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100640 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: (max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~${shard:regex}} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_merged_total{instance=~"node-exporter.*"}[1m])))))


gitserver: repos_disk_writes_merged_sec

Merged writes request rate over 1m (per instance)

The number of write requests merged per second that were queued to the device.

Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100641 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: (max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~${shard:regex}} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_merged_total{instance=~"node-exporter.*"}[1m])))))


gitserver: repos_disk_average_queue_size

Average queue size over 1m (per instance)

The number of I/O operations that were being queued or being serviced. See https://blog.actorsfit.com/a?ID=00200-428fa2ac-e338-4540-848c-af9a3eb1ebd2 for background (avgqu-sz).

Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100650 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: (max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~${shard:regex}} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_io_time_weighted_seconds_total{instance=~"node-exporter.*"}[1m])))))
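
As background for this expression: node_disk_io_time_weighted_seconds_total accumulates (in-flight I/Os) × (elapsed seconds), so its per-second rate over the window approximates the time-averaged number of operations queued or in service:

average queue size ≈ rate(node_disk_io_time_weighted_seconds_total[1m])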


Git Server: GRPC server metrics

gitserver: gitserver_grpc_request_rate_all_methods

Request rate across all methods over 1m

The number of gRPC requests received per second across all methods, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100700 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum(rate(gitserver_grpc_server_started_total{instance=~${shard:regex}}[1m]))


gitserver: gitserver_grpc_request_rate_per_method

Request rate per-method over 1m

The number of gRPC requests received per second broken out per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100701 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum(rate(gitserver_grpc_server_started_total{grpc_method=~${method:regex},instance=~${shard:regex}}[1m])) by (grpc_method)


gitserver: gitserver_error_percentage_all_methods

Error percentage across all methods over 1m

The percentage of gRPC requests that fail across all methods, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100710 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (100.0 * ( (sum(rate(gitserver_grpc_server_handled_total{grpc_code!="OK",instance=~${shard:regex}}[1m]))) / (sum(rate(gitserver_grpc_server_handled_total{instance=~${shard:regex}}[1m]))) ))


gitserver: gitserver_grpc_error_percentage_per_method

Error percentage per-method over 1m

The percentage of gRPC requests that fail per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100711 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (100.0 * ( (sum(rate(gitserver_grpc_server_handled_total{grpc_method=~${method:regex},grpc_code!="OK",instance=~${shard:regex}}[1m])) by (grpc_method)) / (sum(rate(gitserver_grpc_server_handled_total{grpc_method=~${method:regex},instance=~${shard:regex}}[1m])) by (grpc_method)) ))


gitserver: gitserver_p99_response_time_per_method

99th percentile response time per method over 1m

The 99th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100720 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.99, sum by (le, name, grpc_method)(rate(gitserver_grpc_server_handling_seconds_bucket{grpc_method=~${method:regex},instance=~${shard:regex}}[1m])))


gitserver: gitserver_p90_response_time_per_method

90th percentile response time per method over 1m

The 90th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100721 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(gitserver_grpc_server_handling_seconds_bucket{grpc_method=~${method:regex},instance=~${shard:regex}}[1m])))


gitserver: gitserver_p75_response_time_per_method

75th percentile response time per method over 1m

The 75th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100722 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(gitserver_grpc_server_handling_seconds_bucket{grpc_method=~${method:regex},instance=~${shard:regex}}[1m])))


gitserver: gitserver_grpc_response_stream_message_count_per_method

Average streaming response message count per-method over 1m

The average number of response messages sent during a streaming RPC method, broken out per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100730 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: ((sum(rate(gitserver_grpc_server_msg_sent_total{grpc_type="server_stream",instance=~${shard:regex}}[1m])) by (grpc_method))/(sum(rate(gitserver_grpc_server_started_total{grpc_type="server_stream",instance=~${shard:regex}}[1m])) by (grpc_method)))


gitserver: gitserver_grpc_all_codes_per_method

Response codes rate per-method over 1m

The rate of all generated gRPC response codes per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100740 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum(rate(gitserver_grpc_server_handled_total{grpc_method=~${method:regex},instance=~${shard:regex}}[1m])) by (grpc_method, grpc_code)


Git Server: Codeintel: Coursier invocation stats

gitserver: codeintel_coursier_total

Aggregate invocations operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100800 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))


gitserver: codeintel_coursier_99th_percentile_duration

Aggregate successful invocations operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100801 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_coursier_duration_seconds_bucket{op!="RunCommand",job=~"^gitserver.*"}[5m]))


gitserver: codeintel_coursier_errors_total

Aggregate invocations operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100802 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))


gitserver: codeintel_coursier_error_rate

Aggregate invocations operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100803 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) / (sum(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) + sum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))) * 100


gitserver: codeintel_coursier_total

Invocations operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100810 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))


gitserver: codeintel_coursier_99th_percentile_duration

99th percentile successful invocations operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100811 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_coursier_duration_seconds_bucket{op!="RunCommand",job=~"^gitserver.*"}[5m])))


gitserver: codeintel_coursier_errors_total

Invocations operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100812 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))


gitserver: codeintel_coursier_error_rate

Invocations operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100813 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) / (sum by (op)(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) + sum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))) * 100


Git Server: Codeintel: npm invocation stats

gitserver: codeintel_npm_total

Aggregate invocations operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100900 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))


gitserver: codeintel_npm_99th_percentile_duration

Aggregate successful invocations operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100901 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_npm_duration_seconds_bucket{op!="RunCommand",job=~"^gitserver.*"}[5m]))


gitserver: codeintel_npm_errors_total

Aggregate invocations operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100902 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))


gitserver: codeintel_npm_error_rate

Aggregate invocations operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100903 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) / (sum(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) + sum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))) * 100


gitserver: codeintel_npm_total

Invocations operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100910 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))


gitserver: codeintel_npm_99th_percentile_duration

99th percentile successful invocations operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100911 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_npm_duration_seconds_bucket{op!="RunCommand",job=~"^gitserver.*"}[5m])))


gitserver: codeintel_npm_errors_total

Invocations operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100912 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))


gitserver: codeintel_npm_error_rate

Invocations operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100913 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) / (sum by (op)(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) + sum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))) * 100


Git Server: HTTP handlers

gitserver: healthy_request_rate

Requests per second, by route, when status code is 200

The number of healthy HTTP requests per second to the internal HTTP API

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101000 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (route) (rate(src_http_request_duration_seconds_count{app="gitserver",code=~"2.."}[5m]))
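
These request-rate panels reuse the duration histogram rather than a dedicated counter: every Prometheus histogram also exports a _count series, so rate() over _count yields requests per second. A minimal sketch; the second expression, which drops the route grouping, is a hypothetical variant for an overall rate, not a panel in this dashboard:

  # 2xx requests per second, per route, averaged over 5m
  sum by (route) (
    rate(src_http_request_duration_seconds_count{app="gitserver",code=~"2.."}[5m])
  )

  # hypothetical variant: overall 2xx rate across all routes
  sum(rate(src_http_request_duration_seconds_count{app="gitserver",code=~"2.."}[5m]))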


gitserver: unhealthy_request_rate

Requests per second, by route, when status code is not 200

The number of unhealthy HTTP requests per second to the internal HTTP API

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101001 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (route) (rate(src_http_request_duration_seconds_count{app="gitserver",code!~"2.."}[5m]))


gitserver: request_rate_by_code

Requests per second, by status code

The number of HTTP requests per second, by status code

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101002 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (code) (rate(src_http_request_duration_seconds_count{app="gitserver"}[5m]))


gitserver: 95th_percentile_healthy_requests

95th percentile duration by route, when status code is 200

The 95th percentile duration by route when the status code is 200

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101010 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: histogram_quantile(0.95, sum(rate(src_http_request_duration_seconds_bucket{app="gitserver",code=~"2.."}[5m])) by (le, route))


gitserver: 95th_percentile_unhealthy_requests

95th percentile duration by route, when status code is not 200

The 95th percentile duration by route when the status code is not 200

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101011 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: histogram_quantile(0.95, sum(rate(src_http_request_duration_seconds_bucket{app="gitserver",code!~"2.."}[5m])) by (le, route))


Git Server: Database connections

gitserver: max_open_conns

Maximum open

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101100 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="gitserver"})


gitserver: open_conns

Established

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101101 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_open{app_name="gitserver"})


gitserver: in_use

Used

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101110 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="gitserver"})


gitserver: idle

Idle

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101111 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="gitserver"})


gitserver: mean_blocked_seconds_per_conn_request

Mean blocked seconds per conn request

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101120 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="gitserver"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="gitserver"}[5m]))
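
This query is a ratio of two counters over the same window: the total seconds requests spent blocked waiting for a connection, divided by the number of requests that had to wait, i.e. the mean wait per blocked request. These counters appear to mirror Go's database/sql pool statistics (WaitDuration and WaitCount); that mapping is an assumption, not something this reference states. Split out:

  # total seconds requests spent blocked waiting for a connection, over 5m
  sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="gitserver"}[5m]))
  /
  # number of connection requests that had to wait, over the same 5m
  sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="gitserver"}[5m]))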


gitserver: closed_max_idle

Closed by SetMaxIdleConns

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101130 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="gitserver"}[5m]))


gitserver: closed_max_lifetime

Closed by SetConnMaxLifetime

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101131 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="gitserver"}[5m]))


gitserver: closed_max_idle_time

Closed by SetConnMaxIdleTime

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101132 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="gitserver"}[5m]))


Git Server: Container monitoring (not available on server)

gitserver: container_missing

Container missing

This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate that pods are being OOM killed or terminated for some other reason.

  • Kubernetes:
    • Determine if the pod was OOM killed using kubectl describe pod gitserver (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p gitserver.
  • Docker Compose:
    • Determine if the container was OOM killed using docker inspect -f '{{json .State}}' gitserver (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the gitserver container in docker-compose.yml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using docker logs gitserver (note this will include logs from the previous and currently running container).

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101200 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: count by(name) ((time() - container_last_seen{name=~"^gitserver.*"}) > 60)
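
container_last_seen is a timestamp gauge, so subtracting it from time() gives the age of each container's last sighting; the > 60 comparison filters to containers unseen for over a minute, and count by (name) tallies them. Annotated:

  count by (name) (
    # seconds since the container was last seen, kept only when over 60
    (time() - container_last_seen{name=~"^gitserver.*"}) > 60
  )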


gitserver: container_cpu_usage

Container cpu usage total (1m average) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101201 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: cadvisor_container_cpu_usage_percentage_total{name=~"^gitserver.*"}


gitserver: container_memory_usage

Container memory usage by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101202 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: cadvisor_container_memory_usage_percentage_total{name=~"^gitserver.*"}


gitserver: fs_io_operations

Filesystem reads and writes rate by instance over 1h

This value indicates the number of filesystem read and write operations performed by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with gitserver issues.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101203 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by(name) (rate(container_fs_reads_total{name=~"^gitserver.*"}[1h]) + rate(container_fs_writes_total{name=~"^gitserver.*"}[1h]))


Git Server: Provisioning indicators (not available on server)

gitserver: provisioning_container_cpu_usage_long_term

Container cpu usage total (90th percentile over 1d) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101300 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^gitserver.*"}[1d])
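
Unlike the histogram-based percentiles above, this panel takes a quantile over time for each individual series: quantile_over_time(0.9, ...[1d]) returns the 90th percentile of all samples a CPU-usage series produced in the last day, one result per container. Annotated:

  # 90th percentile of each container's CPU-usage samples over the last day
  quantile_over_time(
    0.9,
    cadvisor_container_cpu_usage_percentage_total{name=~"^gitserver.*"}[1d]
  )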


gitserver: provisioning_container_memory_usage_long_term

Container memory usage (1d maximum) by instance

Git Server is expected to use up all the memory it is provided.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101301 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^gitserver.*"}[1d])


gitserver: provisioning_container_cpu_usage_short_term

Container cpu usage total (5m maximum) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101310 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^gitserver.*"}[5m])


gitserver: provisioning_container_memory_usage_short_term

Container memory usage (5m maximum) by instance

Git Server is expected to use up all the memory it is provided.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101311 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^gitserver.*"}[5m])


gitserver: container_oomkill_events_total

Container OOMKILL events total by instance

This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. When this occurs frequently, it is an indicator of underprovisioning.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101312 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max by (name) (container_oom_events_total{name=~"^gitserver.*"})


Git Server: Golang runtime monitoring

gitserver: go_goroutines

Maximum active goroutines

A high value here indicates a possible goroutine leak.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101400 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max by(instance) (go_goroutines{job=~".*gitserver"})


gitserver: go_gc_duration_seconds

Maximum go garbage collection duration

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101401 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max by(instance) (go_gc_duration_seconds{job=~".*gitserver"})


Git Server: Kubernetes monitoring (only available on Kubernetes)

gitserver: pods_available_percentage

Percentage pods available

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101500 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by(app) (up{app=~".*gitserver"}) / count by (app) (up{app=~".*gitserver"}) * 100
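
The availability figure is the share of scrape targets that are up: the up metric is 1 for a reachable target and 0 otherwise, so the sum over the count gives the healthy fraction per app. Annotated:

  # healthy gitserver pods as a percentage of all gitserver pods
  sum by (app) (up{app=~".*gitserver"})
  /
  count by (app) (up{app=~".*gitserver"})
  * 100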


GitHub Proxy

Proxies all requests to github.com, keeping track of and managing rate limits.

To see this dashboard, visit /-/debug/grafana/d/github-proxy/github-proxy on your Sourcegraph instance.

GitHub Proxy: GitHub API monitoring

github-proxy: github_proxy_waiting_requests

Number of requests waiting on the global mutex

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100000 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max(github_proxy_waiting_requests)


GitHub Proxy: Container monitoring (not available on server)

github-proxy: container_missing

Container missing

This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate that pods are being OOM killed or terminated for some other reason.

  • Kubernetes:
    • Determine if the pod was OOM killed using kubectl describe pod github-proxy (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p github-proxy.
  • Docker Compose:
    • Determine if the container was OOM killed using docker inspect -f '{{json .State}}' github-proxy (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the github-proxy container in docker-compose.yml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using docker logs github-proxy (note this will include logs from the previous and currently running container).

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100100 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: count by(name) ((time() - container_last_seen{name=~"^github-proxy.*"}) > 60)


github-proxy: container_cpu_usage

Container cpu usage total (1m average) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100101 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: cadvisor_container_cpu_usage_percentage_total{name=~"^github-proxy.*"}


github-proxy: container_memory_usage

Container memory usage by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100102 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: cadvisor_container_memory_usage_percentage_total{name=~"^github-proxy.*"}


github-proxy: fs_io_operations

Filesystem reads and writes rate by instance over 1h

This value indicates the number of filesystem read and write operations performed by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with github-proxy issues.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100103 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by(name) (rate(container_fs_reads_total{name=~"^github-proxy.*"}[1h]) + rate(container_fs_writes_total{name=~"^github-proxy.*"}[1h]))


GitHub Proxy: Provisioning indicators (not available on server)

github-proxy: provisioning_container_cpu_usage_long_term

Container cpu usage total (90th percentile over 1d) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100200 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^github-proxy.*"}[1d])


github-proxy: provisioning_container_memory_usage_long_term

Container memory usage (1d maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100201 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^github-proxy.*"}[1d])


github-proxy: provisioning_container_cpu_usage_short_term

Container cpu usage total (5m maximum) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100210 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^github-proxy.*"}[5m])


github-proxy: provisioning_container_memory_usage_short_term

Container memory usage (5m maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100211 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^github-proxy.*"}[5m])


github-proxy: container_oomkill_events_total

Container OOMKILL events total by instance

This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. When this occurs frequently, it is an indicator of underprovisioning.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100212 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by (name) (container_oom_events_total{name=~"^github-proxy.*"})


GitHub Proxy: Golang runtime monitoring

github-proxy: go_goroutines

Maximum active goroutines

A high value here indicates a possible goroutine leak.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100300 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by(instance) (go_goroutines{job=~".*github-proxy"})


github-proxy: go_gc_duration_seconds

Maximum go garbage collection duration

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100301 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by(instance) (go_gc_duration_seconds{job=~".*github-proxy"})


GitHub Proxy: Kubernetes monitoring (only available on Kubernetes)

github-proxy: pods_available_percentage

Percentage pods available

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100400 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by(app) (up{app=~".*github-proxy"}) / count by (app) (up{app=~".*github-proxy"}) * 100


Postgres

Postgres metrics, exported from postgres_exporter (not available on server).

To see this dashboard, visit /-/debug/grafana/d/postgres/postgres on your Sourcegraph instance.

postgres: connections

Active connections

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100000 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (job) (pg_stat_activity_count{datname!~"template.*|postgres|cloudsqladmin"}) OR sum by (job) (pg_stat_activity_count{job="codeinsights-db", datname!~"template.*|cloudsqladmin"})


postgres: usage_connections_percentage

Connections in use

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100001 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum(pg_stat_activity_count) by (job) / (sum(pg_settings_max_connections) by (job) - sum(pg_settings_superuser_reserved_connections) by (job)) * 100
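
The denominator excludes the superuser-reserved slots, since ordinary clients cannot use them. For a hedged sense of the arithmetic (illustrative numbers, not Sourcegraph defaults): with max_connections = 100 and superuser_reserved_connections = 3, only 97 slots are usable, so 80 active connections report as

  80 / (100 - 3) * 100 ≈ 82.5%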


postgres: transaction_durations

Maximum transaction durations

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100002 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (job) (pg_stat_activity_max_tx_duration{datname!~"template.*|postgres|cloudsqladmin",job!="codeintel-db"}) OR sum by (job) (pg_stat_activity_max_tx_duration{job="codeinsights-db", datname!~"template.*|cloudsqladmin"})


Postgres: Database and collector status

postgres: postgres_up

Database availability

A non-zero value indicates the database is online.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100100 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: pg_up


postgres: invalid_indexes

Invalid indexes (unusable by the query planner)

A non-zero value indicates that Postgres failed to build an index. Expect degraded performance until the index is manually rebuilt.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100101 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by (relname)(pg_invalid_index_count)


postgres: pg_exporter_err

Errors scraping postgres exporter

This value indicates issues retrieving metrics from postgres_exporter.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100110 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: pg_exporter_last_scrape_error


postgres: migration_in_progress

Active schema migration

A value of 0 indicates that no migration is in progress.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100111 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: pg_sg_migration_status


Postgres: Object size and bloat

postgres: pg_table_size

Table size

Total size of this table

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100200 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by (relname)(pg_table_bloat_size)


postgres: pg_table_bloat_ratio

Table bloat ratio

Estimated bloat ratio of this table (high bloat = high overhead)

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100201 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by (relname)(pg_table_bloat_ratio) * 100


postgres: pg_index_size

Index size

Total size of this index

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100210 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by (relname)(pg_index_bloat_size)


postgres: pg_index_bloat_ratio

Index bloat ratio

Estimated bloat ratio of this index (high bloat = high overhead)

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100211 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by (relname)(pg_index_bloat_ratio) * 100


Postgres: Provisioning indicators (not available on server)

postgres: provisioning_container_cpu_usage_long_term

Container cpu usage total (90th percentile over 1d) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100300 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^(pgsql|codeintel-db|codeinsights).*"}[1d])


postgres: provisioning_container_memory_usage_long_term

Container memory usage (1d maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100301 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(pgsql|codeintel-db|codeinsights).*"}[1d])


postgres: provisioning_container_cpu_usage_short_term

Container cpu usage total (5m maximum) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100310 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^(pgsql|codeintel-db|codeinsights).*"}[5m])


postgres: provisioning_container_memory_usage_short_term

Container memory usage (5m maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100311 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(pgsql|codeintel-db|codeinsights).*"}[5m])


postgres: container_oomkill_events_total

Container OOMKILL events total by instance

This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. When this occurs frequently, it is an indicator of underprovisioning.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100312 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by (name) (container_oom_events_total{name=~"^(pgsql|codeintel-db|codeinsights).*"})


Postgres: Kubernetes monitoring (only available on Kubernetes)

postgres: pods_available_percentage

Percentage pods available

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100400 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by(app) (up{app=~".*(pgsql|codeintel-db|codeinsights)"}) / count by (app) (up{app=~".*(pgsql|codeintel-db|codeinsights)"}) * 100


Precise Code Intel Worker

Handles conversion of uploaded precise code intelligence bundles.

To see this dashboard, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker on your Sourcegraph instance.

Precise Code Intel Worker: Codeintel: LSIF uploads

precise-code-intel-worker: codeintel_upload_queue_size

Unprocessed upload record queue size

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100000 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max(src_codeintel_upload_total{job=~"^precise-code-intel-worker.*"})


precise-code-intel-worker: codeintel_upload_queue_growth_rate

Unprocessed upload record queue growth rate over 30m

This value compares the rate of enqueues against the rate of finished jobs.

- A value < 1 indicates that the process rate > enqueue rate
- A value = 1 indicates that the process rate = enqueue rate
- A value > 1 indicates that the process rate < enqueue rate

For example, if 100 uploads were enqueued but 200 finished processing over the 30m window, the ratio is 0.5 and the queue is shrinking.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100001 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_upload_total{job=~"^precise-code-intel-worker.*"}[30m])) / sum(increase(src_codeintel_upload_processor_total{job=~"^precise-code-intel-worker.*"}[30m]))


precise-code-intel-worker: codeintel_upload_queued_max_age

Unprocessed upload record queue longest time in queue

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100002 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max(src_codeintel_upload_queued_duration_seconds_total{job=~"^precise-code-intel-worker.*"})


Precise Code Intel Worker: Codeintel: LSIF uploads

precise-code-intel-worker: codeintel_upload_handlers

Handler active handlers

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100100 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(src_codeintel_upload_processor_handlers{job=~"^precise-code-intel-worker.*"})


precise-code-intel-worker: codeintel_upload_processor_upload_size

Sum of upload sizes in bytes being processed by each precise code-intel worker instance

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100101 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by(instance) (src_codeintel_upload_processor_upload_size{job="precise-code-intel-worker"})


precise-code-intel-worker: codeintel_upload_processor_total

Handler operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100110 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_upload_processor_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_upload_processor_99th_percentile_duration

Aggregate successful handler operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100111 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_upload_processor_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_upload_processor_errors_total

Handler operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100112 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_upload_processor_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_upload_processor_error_rate

Handler operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100113 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_upload_processor_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_upload_processor_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_upload_processor_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100


Precise Code Intel Worker: Codeintel: dbstore stats

precise-code-intel-worker: codeintel_uploads_store_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100200 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_store_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploads_store_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100201 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploads_store_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100202 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploads_store_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100203 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_uploads_store_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100


precise-code-intel-worker: codeintel_uploads_store_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100210 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploads_store_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100211 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))


precise-code-intel-worker: codeintel_uploads_store_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100212 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploads_store_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100213 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100


Precise Code Intel Worker: Codeintel: lsifstore stats

precise-code-intel-worker: codeintel_uploads_lsifstore_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100300 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploads_lsifstore_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100301 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploads_lsifstore_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100302 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploads_lsifstore_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100303 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100


precise-code-intel-worker: codeintel_uploads_lsifstore_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100310 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploads_lsifstore_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100311 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))


precise-code-intel-worker: codeintel_uploads_lsifstore_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100312 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploads_lsifstore_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100313 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100


Precise Code Intel Worker: Workerutil: lsif_uploads dbworker/store stats

precise-code-intel-worker: workerutil_dbworker_store_codeintel_upload_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100400 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_codeintel_upload_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: workerutil_dbworker_store_codeintel_upload_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100401 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_workerutil_dbworker_store_codeintel_upload_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: workerutil_dbworker_store_codeintel_upload_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100402 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_codeintel_upload_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: workerutil_dbworker_store_codeintel_upload_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100403 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_codeintel_upload_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_codeintel_upload_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_workerutil_dbworker_store_codeintel_upload_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100


Precise Code Intel Worker: Codeintel: gitserver client

precise-code-intel-worker: codeintel_gitserver_total

Aggregate client operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100500 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_gitserver_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_gitserver_99th_percentile_duration

Aggregate successful client operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100501 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_gitserver_errors_total

Aggregate client operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100502 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_gitserver_error_rate

Aggregate client operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100503 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_gitserver_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100


precise-code-intel-worker: codeintel_gitserver_total

Client operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100510 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_gitserver_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_gitserver_99th_percentile_duration

99th percentile successful client operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100511 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))


precise-code-intel-worker: codeintel_gitserver_errors_total

Client operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100512 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_gitserver_error_rate

Client operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100513 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_gitserver_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100


Precise Code Intel Worker: Codeintel: uploadstore stats

precise-code-intel-worker: codeintel_uploadstore_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100600 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploadstore_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploadstore_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100601 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_uploadstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploadstore_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100602 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploadstore_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100603 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_uploadstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100


precise-code-intel-worker: codeintel_uploadstore_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100610 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploadstore_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploadstore_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100611 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploadstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))


precise-code-intel-worker: codeintel_uploadstore_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100612 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploadstore_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100613 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploadstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100


Precise Code Intel Worker: Internal service requests

precise-code-intel-worker: frontend_internal_api_error_responses

Frontend-internal API error responses every 5m by route

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100700 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (category)(increase(src_frontend_internal_request_duration_seconds_count{job="precise-code-intel-worker",code!~"2.."}[5m])) / ignoring(category) group_left sum(increase(src_frontend_internal_request_duration_seconds_count{job="precise-code-intel-worker"}[5m]))


Precise Code Intel Worker: Database connections

precise-code-intel-worker: max_open_conns

Maximum open

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100800 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="precise-code-intel-worker"})


precise-code-intel-worker: open_conns

Established

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100801 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_open{app_name="precise-code-intel-worker"})


precise-code-intel-worker: in_use

Used

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100810 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="precise-code-intel-worker"})


precise-code-intel-worker: idle

Idle

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100811 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="precise-code-intel-worker"})
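These four connection gauges share the app_name and db_name labels, so they can be combined. As a hedged sketch (assuming both series are exported whenever the pool is active, as the panel queries above suggest), pool utilization as a fraction of the configured maximum:

Query (example): sum by (app_name, db_name) (src_pgsql_conns_open{app_name="precise-code-intel-worker"}) / sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="precise-code-intel-worker"})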


precise-code-intel-worker: mean_blocked_seconds_per_conn_request

Mean blocked seconds per conn request

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100820 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="precise-code-intel-worker"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="precise-code-intel-worker"}[5m]))


precise-code-intel-worker: closed_max_idle

Closed by SetMaxIdleConns

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100830 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="precise-code-intel-worker"}[5m]))


precise-code-intel-worker: closed_max_lifetime

Closed by SetConnMaxLifetime

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100831 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="precise-code-intel-worker"}[5m]))


precise-code-intel-worker: closed_max_idle_time

Closed by SetConnMaxIdleTime

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100832 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="precise-code-intel-worker"}[5m]))


Precise Code Intel Worker: Container monitoring (not available on server)

precise-code-intel-worker: container_missing

Container missing

This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.

  • Kubernetes:
    • Determine if the pod was OOM killed using kubectl describe pod precise-code-intel-worker (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p precise-code-intel-worker.
  • Docker Compose:
    • Determine if the pod was OOM killed using docker inspect -f '{{json .State}}' precise-code-intel-worker (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the precise-code-intel-worker container in docker-compose.yml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using docker logs precise-code-intel-worker (note this will include logs from the previous and currently running container).

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100900 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: count by(name) ((time() - container_last_seen{name=~"^precise-code-intel-worker.*"}) > 60)
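The > 60 comparison encodes the one-minute absence threshold described above. As a minimal sketch, raising the threshold (here to five minutes) filters out brief restarts if the panel is too noisy during deploys:

Query (example): count by(name) ((time() - container_last_seen{name=~"^precise-code-intel-worker.*"}) > 300)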


precise-code-intel-worker: container_cpu_usage

Container cpu usage total (1m average) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100901 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: cadvisor_container_cpu_usage_percentage_total{name=~"^precise-code-intel-worker.*"}


precise-code-intel-worker: container_memory_usage

Container memory usage by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100902 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: cadvisor_container_memory_usage_percentage_total{name=~"^precise-code-intel-worker.*"}


precise-code-intel-worker: fs_io_operations

Filesystem reads and writes rate by instance over 1h

This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with precise-code-intel-worker issues.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100903 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by(name) (rate(container_fs_reads_total{name=~"^precise-code-intel-worker.*"}[1h]) + rate(container_fs_writes_total{name=~"^precise-code-intel-worker.*"}[1h]))


Precise Code Intel Worker: Provisioning indicators (not available on server)

precise-code-intel-worker: provisioning_container_cpu_usage_long_term

Container cpu usage total (90th percentile over 1d) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101000 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^precise-code-intel-worker.*"}[1d])


precise-code-intel-worker: provisioning_container_memory_usage_long_term

Container memory usage (1d maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101001 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^precise-code-intel-worker.*"}[1d])


precise-code-intel-worker: provisioning_container_cpu_usage_short_term

Container cpu usage total (5m maximum) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101010 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^precise-code-intel-worker.*"}[5m])


precise-code-intel-worker: provisioning_container_memory_usage_short_term

Container memory usage (5m maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101011 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^precise-code-intel-worker.*"}[5m])


precise-code-intel-worker: container_oomkill_events_total

Container OOMKILL events total by instance

This value indicates the total number of times the container main process or child processes were terminated by the OOM killer. When it occurs frequently, it is an indicator of underprovisioning.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101012 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max by (name) (container_oom_events_total{name=~"^precise-code-intel-worker.*"})


Precise Code Intel Worker: Golang runtime monitoring

precise-code-intel-worker: go_goroutines

Maximum active goroutines

A high value here indicates a possible goroutine leak.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101100 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max by(instance) (go_goroutines{job=~".*precise-code-intel-worker"})


precise-code-intel-worker: go_gc_duration_seconds

Maximum go garbage collection duration

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101101 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max by(instance) (go_gc_duration_seconds{job=~".*precise-code-intel-worker"})


Precise Code Intel Worker: Kubernetes monitoring (only available on Kubernetes)

precise-code-intel-worker: pods_available_percentage

Percentage pods available

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101200 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by(app) (up{app=~".*precise-code-intel-worker"}) / count by (app) (up{app=~".*precise-code-intel-worker"}) * 100


Redis

Metrics from both redis databases.

To see this dashboard, visit /-/debug/grafana/d/redis/redis on your Sourcegraph instance.

Redis: Redis Store

redis: redis-store_up

Redis-store availability

A value of 1 indicates the service is currently running.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100000 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: redis_up{app="redis-store"}


Redis: Redis Cache

redis: redis-cache_up

Redis-cache availability

A value of 1 indicates the service is currently running.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100100 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: redis_up{app="redis-cache"}
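Since both availability panels read the same redis_up metric and differ only in the app label, a single hedged convenience query (same metric, regex label matcher) covers both databases at once:

Query (example): redis_up{app=~"redis-(store|cache)"}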


Redis: Provisioning indicators (not available on server)

redis: provisioning_container_cpu_usage_long_term

Container cpu usage total (90th percentile over 1d) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100200 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^redis-cache.*"}[1d])


redis: provisioning_container_memory_usage_long_term

Container memory usage (1d maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100201 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^redis-cache.*"}[1d])


redis: provisioning_container_cpu_usage_short_term

Container cpu usage total (5m maximum) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100210 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^redis-cache.*"}[5m])


redis: provisioning_container_memory_usage_short_term

Container memory usage (5m maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100211 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^redis-cache.*"}[5m])


redis: container_oomkill_events_total

Container OOMKILL events total by instance

This value indicates the total number of times the container main process or child processes were terminated by the OOM killer. When it occurs frequently, it is an indicator of underprovisioning.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100212 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by (name) (container_oom_events_total{name=~"^redis-cache.*"})


Redis: Provisioning indicators (not available on server)

redis: provisioning_container_cpu_usage_long_term

Container cpu usage total (90th percentile over 1d) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100300 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^redis-store.*"}[1d])


redis: provisioning_container_memory_usage_long_term

Container memory usage (1d maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100301 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^redis-store.*"}[1d])


redis: provisioning_container_cpu_usage_short_term

Container cpu usage total (5m maximum) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100310 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^redis-store.*"}[5m])


redis: provisioning_container_memory_usage_short_term

Container memory usage (5m maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100311 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^redis-store.*"}[5m])


redis: container_oomkill_events_total

Container OOMKILL events total by instance

This value indicates the total number of times the container main process or child processes were terminated by the OOM killer. When it occurs frequently, it is an indicator of underprovisioning.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100312 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by (name) (container_oom_events_total{name=~"^redis-store.*"})


Redis: Kubernetes monitoring (only available on Kubernetes)

redis: pods_available_percentage

Percentage pods available

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100400 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by(app) (up{app=~".*redis-cache"}) / count by (app) (up{app=~".*redis-cache"}) * 100


Redis: Kubernetes monitoring (only available on Kubernetes)

redis: pods_available_percentage

Percentage pods available

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100500 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by(app) (up{app=~".*redis-store"}) / count by (app) (up{app=~".*redis-store"}) * 100


Worker

Manages background processes.

To see this dashboard, visit /-/debug/grafana/d/worker/worker on your Sourcegraph instance.

Worker: Active jobs

worker: worker_job_count

Number of worker instances running each job

The number of worker instances running each job type. It is necessary for each job type to be managed by at least one worker instance.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100000 on your Sourcegraph instance.

Technical details

Query: sum by (job_name) (src_worker_jobs{job="worker"})
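Because each job type must be covered by at least one instance, a hedged companion query (same metric and labels as above; not an existing panel) surfaces any job type whose instance count has dropped below one:

Query (example): sum by (job_name) (src_worker_jobs{job="worker"}) < 1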


worker: worker_job_codeintel-upload-janitor_count

Number of worker instances running the codeintel-upload-janitor job

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100010 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum (src_worker_jobs{job="worker", job_name="codeintel-upload-janitor"})


worker: worker_job_codeintel-commitgraph-updater_count

Number of worker instances running the codeintel-commitgraph-updater job

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100011 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum (src_worker_jobs{job="worker", job_name="codeintel-commitgraph-updater"})


worker: worker_job_codeintel-autoindexing-scheduler_count

Number of worker instances running the codeintel-autoindexing-scheduler job

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100012 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum (src_worker_jobs{job="worker", job_name="codeintel-autoindexing-scheduler"})


Worker: Database record encrypter

worker: records_encrypted_at_rest_percentage

Percentage of database records encrypted at rest

Percentage of encrypted database records

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100100 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: (max(src_records_encrypted_at_rest_total) by (tableName)) / ((max(src_records_encrypted_at_rest_total) by (tableName)) + (max(src_records_unencrypted_at_rest_total) by (tableName))) * 100


worker: records_encrypted_total

Database records encrypted every 5m

Number of encrypted database records every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100101 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (tableName)(increase(src_records_encrypted_total{job=~"^worker.*"}[5m]))


worker: records_decrypted_total

Database records decrypted every 5m

Number of decrypted database records every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100102 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (tableName)(increase(src_records_decrypted_total{job=~"^worker.*"}[5m]))


worker: record_encryption_errors_total

Encryption operation errors every 5m

Number of database record encryption/decryption errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100103 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum(increase(src_record_encryption_errors_total{job=~"^worker.*"}[5m]))


Worker: Codeintel: Repository with stale commit graph

worker: codeintel_commit_graph_queue_size

Repository queue size

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100200 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max(src_codeintel_commit_graph_total{job=~"^worker.*"})


worker: codeintel_commit_graph_queue_growth_rate

Repository queue growth rate over 30m

This value compares the rate of enqueues against the rate of finished jobs.

- A value less than 1 indicates that the process rate exceeds the enqueue rate (the queue is shrinking)
- A value equal to 1 indicates that the process rate matches the enqueue rate
- A value greater than 1 indicates that the process rate is below the enqueue rate (the queue is growing)

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100201 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_commit_graph_total{job=~"^worker.*"}[30m])) / sum(increase(src_codeintel_commit_graph_processor_total{job=~"^worker.*"}[30m]))
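As a sketch of how this ratio might be watched (not an existing alert; the panel has none), appending a comparison keeps only windows where enqueues outpaced processing:

Query (example): sum(increase(src_codeintel_commit_graph_total{job=~"^worker.*"}[30m])) / sum(increase(src_codeintel_commit_graph_processor_total{job=~"^worker.*"}[30m])) > 1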


worker: codeintel_commit_graph_queued_max_age

Repository queue longest time in queue

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100202 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max(src_codeintel_commit_graph_queued_duration_seconds_total{job=~"^worker.*"})


Worker: Codeintel: Repository commit graph updates

worker: codeintel_commit_graph_processor_total

Update operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100300 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_commit_graph_processor_total{job=~"^worker.*"}[5m]))


worker: codeintel_commit_graph_processor_99th_percentile_duration

Aggregate successful update operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100301 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_commit_graph_processor_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: codeintel_commit_graph_processor_errors_total

Update operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100302 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_commit_graph_processor_errors_total{job=~"^worker.*"}[5m]))


worker: codeintel_commit_graph_processor_error_rate

Update operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100303 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_commit_graph_processor_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_commit_graph_processor_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_commit_graph_processor_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Codeintel: Dependency index job

worker: codeintel_dependency_index_queue_size

Dependency index job queue size

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100400 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max(src_codeintel_dependency_index_total{job=~"^worker.*"})


worker: codeintel_dependency_index_queue_growth_rate

Dependency index job queue growth rate over 30m

This value compares the rate of enqueues against the rate of finished jobs.

- A value less than 1 indicates that the process rate exceeds the enqueue rate (the queue is shrinking)
- A value equal to 1 indicates that the process rate matches the enqueue rate
- A value greater than 1 indicates that the process rate is below the enqueue rate (the queue is growing)

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100401 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependency_index_total{job=~"^worker.*"}[30m])) / sum(increase(src_codeintel_dependency_index_processor_total{job=~"^worker.*"}[30m]))


worker: codeintel_dependency_index_queued_max_age

Dependency index job queue longest time in queue

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100402 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max(src_codeintel_dependency_index_queued_duration_seconds_total{job=~"^worker.*"})


Worker: Codeintel: Dependency index jobs

worker: codeintel_dependency_index_handlers

Active handlers

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100500 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(src_codeintel_dependency_index_processor_handlers{job=~"^worker.*"})


worker: codeintel_dependency_index_processor_total

Handler operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100510 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependency_index_processor_total{job=~"^worker.*"}[5m]))


worker: codeintel_dependency_index_processor_99th_percentile_duration

Aggregate successful handler operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100511 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_dependency_index_processor_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: codeintel_dependency_index_processor_errors_total

Handler operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100512 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependency_index_processor_errors_total{job=~"^worker.*"}[5m]))


worker: codeintel_dependency_index_processor_error_rate

Handler operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100513 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependency_index_processor_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_dependency_index_processor_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_dependency_index_processor_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Codeintel: Auto-index scheduler

worker: codeintel_autoindexing_total

Auto-indexing job scheduler operations every 10m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100600 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_autoindexing_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m]))


worker: codeintel_autoindexing_99th_percentile_duration

Aggregate successful auto-indexing job scheduler operation duration distribution over 10m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100601 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_autoindexing_duration_seconds_bucket{op='HandleIndexSchedule',job=~"^worker.*"}[10m]))


worker: codeintel_autoindexing_errors_total

Auto-indexing job scheduler operation errors every 10m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100602 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_autoindexing_errors_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m]))


worker: codeintel_autoindexing_error_rate

Auto-indexing job scheduler operation error rate over 10m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100603 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_autoindexing_errors_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m])) / (sum(increase(src_codeintel_autoindexing_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m])) + sum(increase(src_codeintel_autoindexing_errors_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m]))) * 100


Worker: Codeintel: dbstore stats

worker: codeintel_uploads_store_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100700 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_store_total{job=~"^worker.*"}[5m]))


worker: codeintel_uploads_store_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100701 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: codeintel_uploads_store_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100702 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m]))


worker: codeintel_uploads_store_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100703 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_uploads_store_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m]))) * 100


worker: codeintel_uploads_store_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100710 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^worker.*"}[5m]))


worker: codeintel_uploads_store_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100711 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))


worker: codeintel_uploads_store_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100712 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m]))


worker: codeintel_uploads_store_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100713 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Codeintel: lsifstore stats

worker: codeintel_uploads_lsifstore_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100800 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^worker.*"}[5m]))


worker: codeintel_uploads_lsifstore_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100801 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: codeintel_uploads_lsifstore_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100802 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m]))


worker: codeintel_uploads_lsifstore_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100803 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m]))) * 100


worker: codeintel_uploads_lsifstore_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100810 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^worker.*"}[5m]))


worker: codeintel_uploads_lsifstore_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100811 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^worker.*"}[5m])))


worker: codeintel_uploads_lsifstore_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100812 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m]))


worker: codeintel_uploads_lsifstore_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100813 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Workerutil: lsif_dependency_indexes dbworker/store stats

worker: workerutil_dbworker_store_codeintel_dependency_index_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100900 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_codeintel_dependency_index_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100901 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_workerutil_dbworker_store_codeintel_dependency_index_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_codeintel_dependency_index_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100902 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_errors_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_codeintel_dependency_index_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100903 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_total{job=~"^worker.*"}[5m])) + sum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Codeintel: gitserver client

worker: codeintel_gitserver_total

Aggregate client operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101000 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_gitserver_total{job=~"^worker.*"}[5m]))


worker: codeintel_gitserver_99th_percentile_duration

Aggregate successful client operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101001 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: codeintel_gitserver_errors_total

Aggregate client operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101002 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m]))


worker: codeintel_gitserver_error_rate

Aggregate client operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101003 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_gitserver_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m]))) * 100


worker: codeintel_gitserver_total

Client operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101010 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_gitserver_total{job=~"^worker.*"}[5m]))


worker: codeintel_gitserver_99th_percentile_duration

99th percentile successful client operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101011 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^worker.*"}[5m])))


worker: codeintel_gitserver_errors_total

Client operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101012 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m]))


worker: codeintel_gitserver_error_rate

Client operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101013 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_gitserver_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Codeintel: Dependency repository insert

worker: codeintel_dependency_repos_total

Aggregate insert operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101100 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependency_repos_total{job=~"^worker.*"}[5m]))


worker: codeintel_dependency_repos_99th_percentile_duration

Aggregate successful insert operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101101 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_dependency_repos_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: codeintel_dependency_repos_errors_total

Aggregate insert operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101102 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m]))


worker: codeintel_dependency_repos_error_rate

Aggregate insert operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101103 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_dependency_repos_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m]))) * 100


worker: codeintel_dependency_repos_total

Insert operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101110 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (scheme,new)(increase(src_codeintel_dependency_repos_total{job=~"^worker.*"}[5m]))


worker: codeintel_dependency_repos_99th_percentile_duration

99th percentile successful insert operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101111 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,scheme,new)(rate(src_codeintel_dependency_repos_duration_seconds_bucket{job=~"^worker.*"}[5m])))


worker: codeintel_dependency_repos_errors_total

Insert operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101112 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (scheme,new)(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m]))


worker: codeintel_dependency_repos_error_rate

Insert operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101113 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (scheme,new)(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m])) / (sum by (scheme,new)(increase(src_codeintel_dependency_repos_total{job=~"^worker.*"}[5m])) + sum by (scheme,new)(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Batches: dbstore stats

worker: batches_dbstore_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101200 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_dbstore_total{job=~"^worker.*"}[5m]))


worker: batches_dbstore_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101201 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (le)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: batches_dbstore_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101202 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m]))


worker: batches_dbstore_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101203 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_batches_dbstore_total{job=~"^worker.*"}[5m])) + sum(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m]))) * 100


worker: batches_dbstore_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101210 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_dbstore_total{job=~"^worker.*"}[5m]))


worker: batches_dbstore_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101211 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^worker.*"}[5m])))


worker: batches_dbstore_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101212 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m]))


worker: batches_dbstore_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101213 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_batches_dbstore_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Batches: service stats

worker: batches_service_total

Aggregate service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101300 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_service_total{job=~"^worker.*"}[5m]))


worker: batches_service_99th_percentile_duration

Aggregate successful service operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101301 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (le)(rate(src_batches_service_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: batches_service_errors_total

Aggregate service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101302 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m]))


worker: batches_service_error_rate

Aggregate service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101303 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_batches_service_total{job=~"^worker.*"}[5m])) + sum(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m]))) * 100


worker: batches_service_total

Service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101310 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_service_total{job=~"^worker.*"}[5m]))


worker: batches_service_99th_percentile_duration

99th percentile successful service operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101311 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_service_duration_seconds_bucket{job=~"^worker.*"}[5m])))


worker: batches_service_errors_total

Service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101312 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m]))


worker: batches_service_error_rate

Service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101313 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_batches_service_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Batches: Workspace resolver dbstore

worker: workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101400 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101401 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))


worker: workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101402 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_errors_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101403 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Batches: Bulk operation processor dbstore

worker: workerutil_dbworker_store_batches_bulk_worker_store_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101500 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batches_bulk_worker_store_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_batches_bulk_worker_store_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101501 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_batches_bulk_worker_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))


worker: workerutil_dbworker_store_batches_bulk_worker_store_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101502 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batches_bulk_worker_store_errors_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_batches_bulk_worker_store_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101503 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batches_bulk_worker_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_batches_bulk_worker_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_batches_bulk_worker_store_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Batches: Changeset reconciler dbstore

worker: workerutil_dbworker_store_batches_reconciler_worker_store_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101600 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batches_reconciler_worker_store_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_batches_reconciler_worker_store_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101601 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_batches_reconciler_worker_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))


worker: workerutil_dbworker_store_batches_reconciler_worker_store_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101602 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batches_reconciler_worker_store_errors_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_batches_reconciler_worker_store_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101603 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batches_reconciler_worker_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_batches_reconciler_worker_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_batches_reconciler_worker_store_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Batches: Workspace execution dbstore

worker: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101700 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101701 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))


worker: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101702 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101703 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Batches: Executor jobs

worker: executor_queue_size

Unprocessed executor job queue size

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101800 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max by (queue)(src_executor_total{queue=~"batches",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"})


worker: executor_queue_growth_rate

Unprocessed executor job queue growth rate over 30m

This value compares the rate of enqueues against the rate of finished jobs for the selected queue; a worked example follows the list below.

- A value < 1 indicates that process rate > enqueue rate
- A value = 1 indicates that process rate = enqueue rate
- A value > 1 indicates that process rate < enqueue rate
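A worked example with hypothetical values, assuming (as the description implies) that src_executor_total counts enqueued jobs and src_executor_processor_total counts finished jobs:

  sum(increase(src_executor_total{queue=~"batches"}[30m]))            = 120   (selector abbreviated)
  sum(increase(src_executor_processor_total{queue=~"batches"}[30m]))  = 100
  growth rate = 120 / 100 = 1.2   (> 1: jobs arrive faster than they finish)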

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101801 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (queue)(increase(src_executor_total{queue=~"batches",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"}[30m])) / sum by (queue)(increase(src_executor_processor_total{queue=~"batches",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"}[30m]))


worker: executor_queued_max_age

Unprocessed executor job queue longest time in queue

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101802 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max by (queue)(src_executor_queued_duration_seconds_total{queue=~"batches",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"})


Worker: Codeintel: lsif_upload record resetter

worker: codeintel_background_upload_record_resets_total

Lsif upload records reset to queued state every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101900 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_background_upload_record_resets_total{job=~"^worker.*"}[5m]))


worker: codeintel_background_upload_record_reset_failures_total

Lsif upload records reset to errored state every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101901 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_background_upload_record_reset_failures_total{job=~"^worker.*"}[5m]))


worker: codeintel_background_upload_record_reset_errors_total

Lsif upload operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101902 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_background_upload_record_reset_errors_total{job=~"^worker.*"}[5m]))


Worker: Codeintel: lsif_index record resetter

worker: codeintel_background_index_record_resets_total

Lsif index records reset to queued state every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102000 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_background_index_record_resets_total{job=~"^worker.*"}[5m]))


worker: codeintel_background_index_record_reset_failures_total

Lsif index records reset to errored state every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102001 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_background_index_record_reset_failures_total{job=~"^worker.*"}[5m]))


worker: codeintel_background_index_record_reset_errors_total

Lsif index operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102002 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_background_index_record_reset_errors_total{job=~"^worker.*"}[5m]))


Worker: Codeintel: lsif_dependency_index record resetter

worker: codeintel_background_dependency_index_record_resets_total

Lsif dependency index records reset to queued state every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102100 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_background_dependency_index_record_resets_total{job=~"^worker.*"}[5m]))


worker: codeintel_background_dependency_index_record_reset_failures_total

Lsif dependency index records reset to errored state every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102101 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_background_dependency_index_record_reset_failures_total{job=~"^worker.*"}[5m]))


worker: codeintel_background_dependency_index_record_reset_errors_total

Lsif dependency index operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102102 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_background_dependency_index_record_reset_errors_total{job=~"^worker.*"}[5m]))


Worker: Codeinsights: Query Runner Queue

worker: query_runner_worker_queue_size

Code insights query runner queue size

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102200 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: max(src_query_runner_worker_total{job=~"^worker.*"})


worker: query_runner_worker_queue_growth_rate

Code insights query runner queue growth rate over 30m

This value compares the rate of enqueues against the rate of finished jobs; a related expression sketch follows the list below.

- A value < 1 indicates that process rate > enqueue rate
- A value = 1 indicates that process rate = enqueue rate
- A value > 1 indicates that process rate < enqueue rate
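A related expression, sketched under the same assumption as the panel query (that increase over these two series approximates enqueued and finished counts). It is hypothetical, not a shipped panel, and reports the net number of jobs added to the queue instead of a ratio:

  sum(increase(src_query_runner_worker_total{job=~"^worker.*"}[30m]))
    - sum(increase(src_query_runner_worker_processor_total{job=~"^worker.*"}[30m]))

A positive result means enqueues outpaced processing over the last 30m; the ratio in the panel query normalizes the same comparison.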

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102201 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_query_runner_worker_total{job=~"^worker.*"}[30m])) / sum(increase(src_query_runner_worker_processor_total{job=~"^worker.*"}[30m]))


Worker: Codeinsights: insights queue processor

worker: query_runner_worker_handlers

Active handlers

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102300 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(src_query_runner_worker_processor_handlers{job=~"^worker.*"})


worker: query_runner_worker_processor_total

Handler operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102310 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_query_runner_worker_processor_total{job=~"^worker.*"}[5m]))


worker: query_runner_worker_processor_99th_percentile_duration

Aggregate successful handler operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102311 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum by (le)(rate(src_query_runner_worker_processor_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: query_runner_worker_processor_errors_total

Handler operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102312 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_query_runner_worker_processor_errors_total{job=~"^worker.*"}[5m]))


worker: query_runner_worker_processor_error_rate

Handler operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102313 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_query_runner_worker_processor_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_query_runner_worker_processor_total{job=~"^worker.*"}[5m])) + sum(increase(src_query_runner_worker_processor_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Codeinsights: code insights query runner queue record resetter

worker: query_runner_worker_record_resets_total

Insights query runner queue records reset to queued state every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102400 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_query_runner_worker_record_resets_total{job=~"^worker.*"}[5m]))


worker: query_runner_worker_record_reset_failures_total

Insights query runner queue records reset to errored state every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102401 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_query_runner_worker_record_reset_failures_total{job=~"^worker.*"}[5m]))


worker: query_runner_worker_record_reset_errors_total

Insights query runner queue operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102402 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_query_runner_worker_record_reset_errors_total{job=~"^worker.*"}[5m]))


Worker: Codeinsights: dbstore stats

worker: workerutil_dbworker_store_insights_query_runner_jobs_store_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102500 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_insights_query_runner_jobs_store_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102501 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum by (le)(rate(src_workerutil_dbworker_store_insights_query_runner_jobs_store_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102502 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_insights_query_runner_jobs_store_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102503 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*"}[5m])) + sum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m]))) * 100


worker: workerutil_dbworker_store_insights_query_runner_jobs_store_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102510 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_insights_query_runner_jobs_store_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102511 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_insights_query_runner_jobs_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))


worker: workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102512 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_insights_query_runner_jobs_store_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102513 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Code Insights queue utilization

worker: insights_queue_unutilized_size

Insights queue size that is not utilized (not processing)

Any value on this panel indicates code insights is not processing queries from its queue. This observable and alert only fire if there are records in the queue and there have been no dequeue attempts for 30 minutes.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102600 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: max(src_query_runner_worker_total{job=~"^worker.*"}) > 0 and on(job) sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*",op="Dequeue"}[5m])) < 1
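Read this as two clauses joined by and on(job); the annotations below are paraphrases of the expression above, not additional queries:

  max(src_query_runner_worker_total{job=~"^worker.*"}) > 0
      → the queue has records
  and on(job)
  sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*",op="Dequeue"}[5m])) < 1
      → but (almost) no Dequeue store operations happened in the window

The expression only returns a value when both clauses hold, which is why any value on this panel signals an unutilized queue.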


Worker: Internal service requests

worker: frontend_internal_api_error_responses

Frontend-internal API error responses every 5m by route

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102700 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (category)(increase(src_frontend_internal_request_duration_seconds_count{job="worker",code!~"2.."}[5m])) / ignoring(category) group_left sum(increase(src_frontend_internal_request_duration_seconds_count{job="worker"}[5m]))


Worker: Database connections

worker: max_open_conns

Maximum open

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102800 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="worker"})


worker: open_conns

Established

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102801 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_open{app_name="worker"})


worker: in_use

Used

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102810 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="worker"})


worker: idle

Idle

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102811 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="worker"})


worker: mean_blocked_seconds_per_conn_request

Mean blocked seconds per conn request

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102820 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="worker"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="worker"}[5m]))
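A worked reading with hypothetical values: if, over the 5m window, connection requests for one (app_name, db_name) pair spent a combined 2.5 seconds blocked and 50 requests had to wait, the panel shows

  increase(src_pgsql_conns_blocked_seconds[5m]) = 2.5
  increase(src_pgsql_conns_waited_for[5m])      = 50
  mean blocked time = 2.5 / 50 = 0.05s per waiting request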


worker: closed_max_idle

Closed by SetMaxIdleConns

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102830 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="worker"}[5m]))


worker: closed_max_lifetime

Closed by SetConnMaxLifetime

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102831 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="worker"}[5m]))


worker: closed_max_idle_time

Closed by SetConnMaxIdleTime

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102832 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="worker"}[5m]))


Worker: Container monitoring (not available on server)

worker: container_missing

Container missing

This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.

  • Kubernetes:
    • Determine if the pod was OOM killed using kubectl describe pod worker (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p worker.
  • Docker Compose:
    • Determine if the container was OOM killed using docker inspect -f '{{json .State}}' worker (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the worker container in docker-compose.yml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using docker logs worker (note this will include logs from the previous and currently running container).

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102900 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: count by(name) ((time() - container_last_seen{name=~"^worker.*"}) > 60)
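An annotated paraphrase of the expression (the notes are explanatory, not part of the query):

  time() - container_last_seen{name=~"^worker.*"}   → seconds since the container was last seen
  (...) > 60                                        → keep only containers unseen for over a minute
  count by(name) (...)                              → one count per matching container name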


worker: container_cpu_usage

Container cpu usage total (1m average) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102901 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: cadvisor_container_cpu_usage_percentage_total{name=~"^worker.*"}


worker: container_memory_usage

Container memory usage by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102902 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: cadvisor_container_memory_usage_percentage_total{name=~"^worker.*"}


worker: fs_io_operations

Filesystem reads and writes rate by instance over 1h

This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with worker issues.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102903 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by(name) (rate(container_fs_reads_total{name=~"^worker.*"}[1h]) + rate(container_fs_writes_total{name=~"^worker.*"}[1h]))


Worker: Provisioning indicators (not available on server)

worker: provisioning_container_cpu_usage_long_term

Container cpu usage total (90th percentile over 1d) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103000 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^worker.*"}[1d])


worker: provisioning_container_memory_usage_long_term

Container memory usage (1d maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103001 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^worker.*"}[1d])


worker: provisioning_container_cpu_usage_short_term

Container cpu usage total (5m maximum) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103010 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^worker.*"}[5m])


worker: provisioning_container_memory_usage_short_term

Container memory usage (5m maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103011 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^worker.*"}[5m])


worker: container_oomkill_events_total

Container OOMKILL events total by instance

This value indicates the total number of times the container main process or child processes were terminated by the OOM killer. When this occurs frequently, it is an indicator of underprovisioning.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103012 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max by (name) (container_oom_events_total{name=~"^worker.*"})


Worker: Golang runtime monitoring

worker: go_goroutines

Maximum active goroutines

A high value here indicates a possible goroutine leak.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103100 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max by(instance) (go_goroutines{job=~".*worker"})


worker: go_gc_duration_seconds

Maximum go garbage collection duration

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103101 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max by(instance) (go_gc_duration_seconds{job=~".*worker"})


Worker: Kubernetes monitoring (only available on Kubernetes)

worker: pods_available_percentage

Percentage pods available

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103200 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by(app) (up{app=~".*worker"}) / count by (app) (up{app=~".*worker"}) * 100
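A worked example with hypothetical values: with 3 worker pods known to Prometheus, of which 2 report up,

  sum by(app) (up{app=~".*worker"})    = 2
  count by (app) (up{app=~".*worker"}) = 3
  percentage available = 2 / 3 * 100 ≈ 66.7%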


Repo Updater

Manages interaction with code hosts, instructs Gitserver to update repositories.

To see this dashboard, visit /-/debug/grafana/d/repo-updater/repo-updater on your Sourcegraph instance.

Repo Updater: Repositories

repo-updater: syncer_sync_last_time

Time since last sync

A high value here indicates issues synchronizing repo metadata. If the value is persistently high, make sure all external services have valid tokens.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100000 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max(timestamp(vector(time()))) - max(src_repoupdater_syncer_sync_last_time)
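An annotated paraphrase (the note is explanatory, not part of the query): timestamp(vector(time())) evaluates to the current evaluation time, so the expression reduces to

  now - max(src_repoupdater_syncer_sync_last_time)

i.e. the number of seconds since the most recent recorded sync.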


repo-updater: src_repoupdater_max_sync_backoff

Time since oldest sync

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100001 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max(src_repoupdater_max_sync_backoff)


repo-updater: src_repoupdater_syncer_sync_errors_total

Site level external service sync error rate

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100002 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max by (family) (rate(src_repoupdater_syncer_sync_errors_total{owner!="user",reason!="invalid_npm_path",reason!="internal_rate_limit"}[5m]))


repo-updater: syncer_sync_start

Repo metadata sync was started

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100010 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max by (family) (rate(src_repoupdater_syncer_start_sync{family="Syncer.SyncExternalService"}[9h0m0s]))


repo-updater: syncer_sync_duration

95th percentile repositories sync duration

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100011 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: histogram_quantile(0.95, max by (le, family, success) (rate(src_repoupdater_syncer_sync_duration_seconds_bucket[1m])))


repo-updater: source_duration

95th percentile repositories source duration

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100012 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: histogram_quantile(0.95, max by (le) (rate(src_repoupdater_source_duration_seconds_bucket[1m])))


repo-updater: syncer_synced_repos

Repositories synced

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100020 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max(rate(src_repoupdater_syncer_synced_repos_total[1m]))


repo-updater: sourced_repos

Repositories sourced

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100021 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max(rate(src_repoupdater_source_repos_total[1m]))


repo-updater: purge_failed

Repositories purge failed

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100030 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max(rate(src_repoupdater_purge_failed[1m]))


repo-updater: sched_auto_fetch

Repositories scheduled due to hitting a deadline

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100040 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max(rate(src_repoupdater_sched_auto_fetch[1m]))


repo-updater: sched_manual_fetch

Repositories scheduled due to user traffic

Check repo-updater logs if this value is persistently high. This does not indicate anything if there are no user-added code hosts.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100041 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max(rate(src_repoupdater_sched_manual_fetch[1m]))


repo-updater: sched_known_repos

Repositories managed by the scheduler

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100050 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max(src_repoupdater_sched_known_repos)


repo-updater: sched_update_queue_length

Rate of growth of update queue length over 5 minutes

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100051 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max(deriv(src_repoupdater_sched_update_queue_length[5m]))


repo-updater: sched_loops

Scheduler loops

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100052 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max(rate(src_repoupdater_sched_loops[1m]))


repo-updater: src_repoupdater_stale_repos

Repos that haven't been fetched in more than 8 hours

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100060 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max(src_repoupdater_stale_repos)


repo-updater: sched_error

Repositories schedule error rate

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100061 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max(rate(src_repoupdater_sched_error[1m]))


Repo Updater: Permissions

repo-updater: permissions_syncs_scheduled_reason

Number of users/repos scheduled for permissions sync grouped by reason

Indicates the number of users/repos scheduled for permissions sync grouped by reason.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100100 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: sum by (type) (src_repoupdater_perms_syncer_items_sync_scheduled)


repo-updater: permissions_syncs_scheduled_priority

Number of users/repos scheduled for permissions sync grouped by priority

Indicates the number of users/repos scheduled for permissions sync grouped by priority.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100101 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: sum by (priority) (src_repoupdater_perms_syncer_items_sync_scheduled)


repo-updater: user_success_syncs_total

Total number of user permissions syncs

Indicates the total number of user permissions syncs completed.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100110 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: sum(src_repoupdater_perms_syncer_success_syncs{type="user"})


repo-updater: user_success_syncs

Number of user permissions syncs [5m]

Indicates the number of user permissions syncs completed.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100111 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: sum(increase(src_repoupdater_perms_syncer_success_syncs{type="user"}[5m]))


repo-updater: user_initial_syncs

Number of first user permissions syncs [5m]

Indicates the number of permissions syncs done for the first time for the user.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100112 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: sum(increase(src_repoupdater_perms_syncer_initial_syncs{type="user"}[5m]))


repo-updater: user_failed_syncs

Number of user permissions failed syncs [5m]

Indicates the number of user permissions syncs that failed.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100120 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: sum(increase(src_repoupdater_perms_syncer_failed_syncs{type="user"}[5m]))


repo-updater: repo_success_syncs_total

Total number of repo permissions syncs

Indicates the total number of repo permissions syncs completed.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100130 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: sum(src_repoupdater_perms_syncer_success_syncs{type="repo"})


repo-updater: repo_success_syncs

Number of repo permissions syncs over 5m

Indicates the number of repo permissions syncs completed.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100131 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: sum(increase(src_repoupdater_perms_syncer_success_syncs{type="repo"}[5m]))


repo-updater: repo_initial_syncs

Number of first repo permissions syncs over 5m

Indicates the number of permissions syncs done for the first time for the repo.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100132 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: sum(increase(src_repoupdater_perms_syncer_initial_syncs{type="repo"}[5m]))


repo-updater: repo_failed_syncs

Number of repo permissions failed syncs over 5m

Indicates the number of repo permissions syncs that failed in the last 5 minutes.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100140 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: sum(increase(src_repoupdater_perms_syncer_failed_syncs{type="repo"}[5m]))


repo-updater: users_consecutive_sync_delay

Max duration between two consecutive permissions syncs for a user

Indicates the max delay between two consecutive permissions syncs for a user during the period.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100150 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: max(max_over_time (src_repoupdater_perms_syncer_perms_consecutive_sync_delay{type="user"} [1m]))


repo-updater: repos_consecutive_sync_delay

Max duration between two consecutive permissions syncs for a repo

Indicates the max delay between two consecutive permissions syncs for a repo during the period.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100151 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: max(max_over_time (src_repoupdater_perms_syncer_perms_consecutive_sync_delay{type="repo"} [1m]))


repo-updater: users_first_sync_delay

Max duration between user creation and first permissions sync

Indicates the max delay between user creation and the user's first permissions sync.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100160 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: max(max_over_time(src_repoupdater_perms_syncer_perms_first_sync_delay{type="user"}[1m]))


repo-updater: repos_first_sync_delay

Max duration between repo creation and first permissions sync over 1m

Indicates the max delay between repo creation and the repo's first permissions sync.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100161 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: max(max_over_time(src_repoupdater_perms_syncer_perms_first_sync_delay{type="repo"}[1m]))


repo-updater: permissions_found_count

Number of permissions found during user/repo permissions sync

Indicates the number of permissions found during user/repo permissions syncs.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100170 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: sum by (type) (src_repoupdater_perms_syncer_perms_found)


repo-updater: permissions_found_avg

Average number of permissions found during permissions sync per user/repo

Indicates the average number of permissions found during permissions sync per user/repo.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100171 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: avg by (type) (src_repoupdater_perms_syncer_perms_found)


repo-updater: perms_syncer_perms

Time gap between least and most up to date permissions

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100180 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: max by (type) (src_repoupdater_perms_syncer_perms_gap_seconds)


repo-updater: perms_syncer_stale_perms

Number of entities with stale permissions

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100181 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: max by (type) (src_repoupdater_perms_syncer_stale_perms)


repo-updater: perms_syncer_no_perms

Number of entities with no permissions

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100190 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: max by (type) (src_repoupdater_perms_syncer_no_perms)


repo-updater: perms_syncer_outdated_perms

Number of entities with outdated permissions

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100191 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: max by (type) (src_repoupdater_perms_syncer_outdated_perms)


repo-updater: perms_syncer_sync_duration

95th percentile permissions sync duration

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100200 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: histogram_quantile(0.95, max by (le, type) (rate(src_repoupdater_perms_syncer_sync_duration_seconds_bucket[1m])))


repo-updater: perms_syncer_queue_size

Permissions sync queued items

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100201 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: max(src_repoupdater_perms_syncer_queue_size)


repo-updater: perms_syncer_sync_errors

Permissions sync error rate

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100210 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: max by (type) (ceil(rate(src_repoupdater_perms_syncer_sync_errors_total[1m])))


repo-updater: perms_syncer_scheduled_repos_total

Total number of repos scheduled for permissions sync

Indicates how many repositories have been scheduled for a permissions sync. More about repository permissions synchronization can be found in the Sourcegraph documentation.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100211 on your Sourcegraph instance.

Managed by the Sourcegraph Identity and Access Management team.

Technical details

Query: max(rate(src_repoupdater_perms_syncer_schedule_repos_total[1m]))


Repo Updater: External services

repo-updater: src_repoupdater_external_services_total

The total number of external services

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100200 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max(src_repoupdater_external_services_total)


repo-updater: repoupdater_queued_sync_jobs_total

The total number of queued sync jobs

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100210 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max(src_repoupdater_queued_sync_jobs_total)


repo-updater: repoupdater_completed_sync_jobs_total

The total number of completed sync jobs

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100211 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max(src_repoupdater_completed_sync_jobs_total)


repo-updater: repoupdater_errored_sync_jobs_percentage

The percentage of external services that have failed their most recent sync

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100212 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max(src_repoupdater_errored_sync_jobs_percentage)


repo-updater: github_graphql_rate_limit_remaining

Remaining calls to GitHub graphql API before hitting the rate limit

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100220 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max by (name) (src_github_rate_limit_remaining_v2{resource="graphql"})


repo-updater: github_rest_rate_limit_remaining

Remaining calls to GitHub rest API before hitting the rate limit

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100221 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max by (name) (src_github_rate_limit_remaining_v2{resource="rest"})


repo-updater: github_search_rate_limit_remaining

Remaining calls to GitHub search API before hitting the rate limit

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100222 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max by (name) (src_github_rate_limit_remaining_v2{resource="search"})


repo-updater: github_graphql_rate_limit_wait_duration

Time spent waiting for the GitHub graphql API rate limiter

Indicates how long we're waiting on the rate limit once it has been exceeded.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100230 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max by(name) (rate(src_github_rate_limit_wait_duration_seconds{resource="graphql"}[5m]))


repo-updater: github_rest_rate_limit_wait_duration

Time spent waiting for the GitHub rest API rate limiter

Indicates how long we're waiting on the rate limit once it has been exceeded.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100231 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max by(name) (rate(src_github_rate_limit_wait_duration_seconds{resource="rest"}[5m]))


repo-updater: github_search_rate_limit_wait_duration

Time spent waiting for the GitHub search API rate limiter

Indicates how long we're waiting on the rate limit once it has been exceeded.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100232 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max by(name) (rate(src_github_rate_limit_wait_duration_seconds{resource="search"}[5m]))


repo-updater: gitlab_rest_rate_limit_remaining

Remaining calls to GitLab rest API before hitting the rate limit

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100240 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max by (name) (src_gitlab_rate_limit_remaining{resource="rest"})


repo-updater: gitlab_rest_rate_limit_wait_duration

Time spent waiting for the GitLab rest API rate limiter

Indicates how long we're waiting on the rate limit once it has been exceeded.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100241 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max by (name) (rate(src_gitlab_rate_limit_wait_duration_seconds{resource="rest"}[5m]))


repo-updater: src_internal_rate_limit_wait_duration_bucket

95th percentile time spent successfully waiting on our internal rate limiter

Indicates how long we're waiting on our internal rate limiter when communicating with a code host.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100250 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: histogram_quantile(0.95, sum(rate(src_internal_rate_limit_wait_duration_bucket{failed="false"}[5m])) by (le, urn))


repo-updater: src_internal_rate_limit_wait_error_count

Rate of failures waiting on our internal rate limiter

The rate at which waits on our internal rate limiter fail.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100251 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (urn) (rate(src_internal_rate_limit_wait_duration_count{failed="true"}[5m]))


Repo Updater: Batches: dbstore stats

repo-updater: batches_dbstore_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100300 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_dbstore_total{job=~"^repo-updater.*"}[5m]))


repo-updater: batches_dbstore_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100301 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (le)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^repo-updater.*"}[5m]))


repo-updater: batches_dbstore_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100302 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_dbstore_errors_total{job=~"^repo-updater.*"}[5m]))


repo-updater: batches_dbstore_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100303 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_dbstore_errors_total{job=~"^repo-updater.*"}[5m])) / (sum(increase(src_batches_dbstore_total{job=~"^repo-updater.*"}[5m])) + sum(increase(src_batches_dbstore_errors_total{job=~"^repo-updater.*"}[5m]))) * 100
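
Note the error-rate denominator is the sum of both counters, which is consistent with successes and errors being counted separately. A minimal sketch of that dual-counter pattern, under the assumption that src_batches_dbstore_total counts successful operations; the observe helper is hypothetical:

package dbstoremetrics

import "github.com/prometheus/client_golang/prometheus"

var (
    // Assumed semantics: opsTotal counts successes, opsErrors counts
    // failures, so errors / (total + errors) * 100 is the failure percentage
    // computed by the error-rate panels.
    opsTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
        Name: "src_batches_dbstore_total",
        Help: "Successful batch changes store operations.",
    }, []string{"op"})
    opsErrors = prometheus.NewCounterVec(prometheus.CounterOpts{
        Name: "src_batches_dbstore_errors_total",
        Help: "Batch changes store operation errors.",
    }, []string{"op"})
)

func init() { prometheus.MustRegister(opsTotal, opsErrors) }

// observe increments exactly one of the two counters per operation.
func observe(op string, err error) {
    if err != nil {
        opsErrors.WithLabelValues(op).Inc()
        return
    }
    opsTotal.WithLabelValues(op).Inc()
}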


repo-updater: batches_dbstore_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100310 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_dbstore_total{job=~"^repo-updater.*"}[5m]))


repo-updater: batches_dbstore_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100311 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^repo-updater.*"}[5m])))


repo-updater: batches_dbstore_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100312 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^repo-updater.*"}[5m]))


repo-updater: batches_dbstore_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100313 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^repo-updater.*"}[5m])) / (sum by (op)(increase(src_batches_dbstore_total{job=~"^repo-updater.*"}[5m])) + sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^repo-updater.*"}[5m]))) * 100


Repo Updater: Batches: service stats

repo-updater: batches_service_total

Aggregate service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100400 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_service_total{job=~"^repo-updater.*"}[5m]))


repo-updater: batches_service_99th_percentile_duration

Aggregate successful service operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100401 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (le)(rate(src_batches_service_duration_seconds_bucket{job=~"^repo-updater.*"}[5m]))


repo-updater: batches_service_errors_total

Aggregate service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100402 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_service_errors_total{job=~"^repo-updater.*"}[5m]))


repo-updater: batches_service_error_rate

Aggregate service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100403 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_service_errors_total{job=~"^repo-updater.*"}[5m])) / (sum(increase(src_batches_service_total{job=~"^repo-updater.*"}[5m])) + sum(increase(src_batches_service_errors_total{job=~"^repo-updater.*"}[5m]))) * 100


repo-updater: batches_service_total

Service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100410 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_service_total{job=~"^repo-updater.*"}[5m]))


repo-updater: batches_service_99th_percentile_duration

99th percentile successful service operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100411 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_service_duration_seconds_bucket{job=~"^repo-updater.*"}[5m])))


repo-updater: batches_service_errors_total

Service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100412 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_service_errors_total{job=~"^repo-updater.*"}[5m]))


repo-updater: batches_service_error_rate

Service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100413 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_service_errors_total{job=~"^repo-updater.*"}[5m])) / (sum by (op)(increase(src_batches_service_total{job=~"^repo-updater.*"}[5m])) + sum by (op)(increase(src_batches_service_errors_total{job=~"^repo-updater.*"}[5m]))) * 100


Repo Updater: Codeintel: Coursier invocation stats

repo-updater: codeintel_coursier_total

Aggregate invocations operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100500 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))


repo-updater: codeintel_coursier_99th_percentile_duration

Aggregate successful invocations operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100501 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_coursier_duration_seconds_bucket{op!="RunCommand",job=~"^repo-updater.*"}[5m]))


repo-updater: codeintel_coursier_errors_total

Aggregate invocations operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100502 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))


repo-updater: codeintel_coursier_error_rate

Aggregate invocations operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100503 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) / (sum(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) + sum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))) * 100


repo-updater: codeintel_coursier_total

Invocations operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100510 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))


repo-updater: codeintel_coursier_99th_percentile_duration

99th percentile successful invocations operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100511 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_coursier_duration_seconds_bucket{op!="RunCommand",job=~"^repo-updater.*"}[5m])))


repo-updater: codeintel_coursier_errors_total

Invocations operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100512 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))


repo-updater: codeintel_coursier_error_rate

Invocations operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100513 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) / (sum by (op)(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) + sum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))) * 100


Repo Updater: Codeintel: npm invocation stats

repo-updater: codeintel_npm_total

Aggregate invocations operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100600 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))


repo-updater: codeintel_npm_99th_percentile_duration

Aggregate successful invocations operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100601 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_npm_duration_seconds_bucket{op!="RunCommand",job=~"^repo-updater.*"}[5m]))


repo-updater: codeintel_npm_errors_total

Aggregate invocations operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100602 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))


repo-updater: codeintel_npm_error_rate

Aggregate invocations operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100603 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) / (sum(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) + sum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))) * 100


repo-updater: codeintel_npm_total

Invocations operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100610 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))


repo-updater: codeintel_npm_99th_percentile_duration

99th percentile successful invocations operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100611 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_npm_duration_seconds_bucket{op!="RunCommand",job=~"^repo-updater.*"}[5m])))


repo-updater: codeintel_npm_errors_total

Invocations operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100612 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))


repo-updater: codeintel_npm_error_rate

Invocations operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100613 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) / (sum by (op)(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) + sum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))) * 100


Repo Updater: GRPC server metrics

repo-updater: repo_updater_grpc_request_rate_all_methods

Request rate across all methods over 1m

The number of gRPC requests received per second across all methods, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100700 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum(rate(repo_updater_grpc_server_started_total{instance=~${instance:regex}}[1m]))
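
Metrics of this shape (started/handled totals, handling-seconds histograms, per-method and per-code labels) are what the go-grpc-prometheus server interceptors emit. A hedged sketch of wiring them up under a repo_updater namespace; the actual wiring inside repo-updater may differ:

package main

import (
    "net"

    grpc_prometheus "github.com/grpc-ecosystem/go-grpc-prometheus"
    "github.com/prometheus/client_golang/prometheus"
    "google.golang.org/grpc"
)

func main() {
    // Namespace the standard gRPC server metrics so they come out as
    // repo_updater_grpc_server_started_total, ..._handled_total, and so on.
    serverMetrics := grpc_prometheus.NewServerMetrics(
        grpc_prometheus.CounterOption(func(o *prometheus.CounterOpts) {
            o.Namespace = "repo_updater"
        }),
    )
    // Also emit repo_updater_grpc_server_handling_seconds histograms, which
    // back the per-method response time percentile panels below.
    serverMetrics.EnableHandlingTimeHistogram(func(o *prometheus.HistogramOpts) {
        o.Namespace = "repo_updater"
    })
    prometheus.MustRegister(serverMetrics)

    srv := grpc.NewServer(
        grpc.UnaryInterceptor(serverMetrics.UnaryServerInterceptor()),
        grpc.StreamInterceptor(serverMetrics.StreamServerInterceptor()),
    )
    // Register services on srv here, then pre-populate the per-method series.
    serverMetrics.InitializeMetrics(srv)

    lis, err := net.Listen("tcp", ":50051")
    if err != nil {
        panic(err)
    }
    _ = srv.Serve(lis)
}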


repo-updater: repo_updater_grpc_request_rate_per_method

Request rate per-method over 1m

The number of gRPC requests received per second broken out per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100701 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum(rate(repo_updater_grpc_server_started_total{grpc_method=~${method:regex},instance=~${instance:regex}}[1m])) by (grpc_method)


repo-updater: repo_updater_error_percentage_all_methods

Error percentage across all methods over 1m

The percentage of gRPC requests that fail across all methods, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100710 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: (100.0 * ( (sum(rate(repo_updater_grpc_server_handled_total{grpc_code!="OK",instance=~${instance:regex}}[1m]))) / (sum(rate(repo_updater_grpc_server_handled_total{instance=~${instance:regex}}[1m]))) ))


repo-updater: repo_updater_grpc_error_percentage_per_method

Error percentage per-method over 1m

The percentage of gRPC requests that fail per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100711 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: (100.0 * ( (sum(rate(repo_updater_grpc_server_handled_total{grpc_method=~${method:regex},grpc_code!="OK",instance=~${instance:regex}}[1m])) by (grpc_method)) / (sum(rate(repo_updater_grpc_server_handled_total{grpc_method=~${method:regex},instance=~${instance:regex}}[1m])) by (grpc_method)) ))


repo-updater: repo_updater_p99_response_time_per_method

99th percentile response time per method over 1m

The 99th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100720 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: histogram_quantile(0.99, sum by (le, name, grpc_method)(rate(repo_updater_grpc_server_handling_seconds_bucket{grpc_method=~${method:regex},instance=~${instance:regex}}[1m])))


repo-updater: repo_updater_p90_response_time_per_method

90th percentile response time per method over 1m

The 90th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100721 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(repo_updater_grpc_server_handling_seconds_bucket{grpc_method=~${method:regex},instance=~${instance:regex}}[1m])))


repo-updater: repo_updater_p75_response_time_per_method

75th percentile response time per method over 1m

The 75th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100722 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(repo_updater_grpc_server_handling_seconds_bucket{grpc_method=~${method:regex},instance=~${instance:regex}}[1m])))


repo-updater: repo_updater_grpc_response_stream_message_count_per_method

Average streaming response message count per-method over 1m

The average number of response messages sent during a streaming RPC method, broken out per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100730 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: ((sum(rate(repo_updater_grpc_server_msg_sent_total{grpc_type="server_stream",instance=~${instance:regex}}[1m])) by (grpc_method))/(sum(rate(repo_updater_grpc_server_started_total{grpc_type="server_stream",instance=~${instance:regex}}[1m])) by (grpc_method)))


repo-updater: repo_updater_grpc_all_codes_per_method

Response codes rate per-method over 1m

The rate of all generated gRPC response codes per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100740 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum(rate(repo_updater_grpc_server_handled_total{grpc_method=~${method:regex},instance=~${instance:regex}}[1m])) by (grpc_method, grpc_code)


Repo Updater: HTTP handlers

repo-updater: healthy_request_rate

Requests per second, by route, when status code is 200

The number of healthy HTTP requests per second to the internal HTTP API

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100800 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (route) (rate(src_http_request_duration_seconds_count{app="repo-updater",code=~"2.."}[5m]))
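
The request-rate and latency panels in this section all derive from one duration histogram labelled by route and status code: rate() over its _count series gives requests per second, and histogram_quantile over its _bucket series gives the percentile durations below. A minimal Go middleware sketch of that instrumentation; the instrument helper and statusRecorder type are illustrative, not Sourcegraph's actual handler wrapping:

package httpmetrics

import (
    "net/http"
    "strconv"
    "time"

    "github.com/prometheus/client_golang/prometheus"
)

// requestDuration mirrors the shape of src_http_request_duration_seconds; the
// app label seen in the dashboard queries would come from scrape configuration.
var requestDuration = prometheus.NewHistogramVec(prometheus.HistogramOpts{
    Name:    "src_http_request_duration_seconds",
    Help:    "HTTP request duration in seconds.",
    Buckets: prometheus.DefBuckets,
}, []string{"route", "code"})

func init() { prometheus.MustRegister(requestDuration) }

// statusRecorder captures the status code written by the wrapped handler.
type statusRecorder struct {
    http.ResponseWriter
    code int
}

func (r *statusRecorder) WriteHeader(code int) {
    r.code = code
    r.ResponseWriter.WriteHeader(code)
}

// instrument wraps a handler and observes one duration sample per request.
func instrument(route string, next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
        rec := &statusRecorder{ResponseWriter: w, code: http.StatusOK}
        start := time.Now()
        next.ServeHTTP(rec, req)
        requestDuration.WithLabelValues(route, strconv.Itoa(rec.code)).
            Observe(time.Since(start).Seconds())
    })
}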


repo-updater: unhealthy_request_rate

Requests per second, by route, when status code is not 200

The number of unhealthy HTTP requests per second to the internal HTTP API

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100801 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (route) (rate(src_http_request_duration_seconds_count{app="repo-updater",code!~"2.."}[5m]))


repo-updater: request_rate_by_code

Requests per second, by status code

The number of HTTP requests per second by code

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100802 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (code) (rate(src_http_request_duration_seconds_count{app="repo-updater"}[5m]))


repo-updater: 95th_percentile_healthy_requests

95th percentile duration by route, when status code is 200

The 95th percentile duration by route when the status code is 200

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100810 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: histogram_quantile(0.95, sum(rate(src_http_request_duration_seconds_bucket{app="repo-updater",code=~"2.."}[5m])) by (le, route))


repo-updater: 95th_percentile_unhealthy_requests

95th percentile duration by route, when status code is not 200

The 95th percentile duration by route when the status code is not 200

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100811 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: histogram_quantile(0.95, sum(rate(src_http_request_duration_seconds_bucket{app="repo-updater",code!~"2.."}[5m])) by (le, route))


Repo Updater: Internal service requests

repo-updater: frontend_internal_api_error_responses

Frontend-internal API error responses every 5m by route

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100900 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by (category)(increase(src_frontend_internal_request_duration_seconds_count{job="repo-updater",code!~"2.."}[5m])) / ignoring(category) group_left sum(increase(src_frontend_internal_request_duration_seconds_count{job="repo-updater"}[5m]))


Repo Updater: Database connections

repo-updater: max_open_conns

Maximum open

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101000 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="repo-updater"})


repo-updater: open_conns

Established

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101001 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_open{app_name="repo-updater"})


repo-updater: in_use

Used

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101010 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="repo-updater"})


repo-updater: idle

Idle

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101011 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="repo-updater"})


repo-updater: mean_blocked_seconds_per_conn_request

Mean blocked seconds per conn request

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101020 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="repo-updater"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="repo-updater"}[5m]))
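
These connection panels correspond closely to Go's database/sql pool statistics (sql.DBStats); this one is effectively WaitDuration averaged over WaitCount. A minimal sketch of such an exporter, showing only a subset of the metrics; the db_name value and poll helper are placeholders, not Sourcegraph's actual exporter:

package dbmetrics

import (
    "database/sql"
    "time"

    "github.com/prometheus/client_golang/prometheus"
)

var (
    labels = prometheus.Labels{"app_name": "repo-updater", "db_name": "frontend"}

    maxOpen = prometheus.NewGauge(prometheus.GaugeOpts{
        Name: "src_pgsql_conns_max_open", Help: "Maximum number of open connections.", ConstLabels: labels})
    inUse = prometheus.NewGauge(prometheus.GaugeOpts{
        Name: "src_pgsql_conns_in_use", Help: "Connections currently in use.", ConstLabels: labels})
    blockedSeconds = prometheus.NewGauge(prometheus.GaugeOpts{
        Name: "src_pgsql_conns_blocked_seconds", Help: "Cumulative time blocked waiting for a connection.", ConstLabels: labels})
    waitedFor = prometheus.NewGauge(prometheus.GaugeOpts{
        Name: "src_pgsql_conns_waited_for", Help: "Cumulative number of connections waited for.", ConstLabels: labels})
)

func init() { prometheus.MustRegister(maxOpen, inUse, blockedSeconds, waitedFor) }

// poll copies sql.DBStats into the metrics. WaitDuration and WaitCount are
// cumulative, so increase(blocked_seconds) / increase(waited_for), as in this
// panel's query, is the mean time each connection request spent blocked.
func poll(db *sql.DB) {
    for range time.Tick(30 * time.Second) {
        s := db.Stats()
        maxOpen.Set(float64(s.MaxOpenConnections))
        inUse.Set(float64(s.InUse))
        blockedSeconds.Set(s.WaitDuration.Seconds())
        waitedFor.Set(float64(s.WaitCount))
    }
}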


repo-updater: closed_max_idle

Closed by SetMaxIdleConns

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101030 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="repo-updater"}[5m]))


repo-updater: closed_max_lifetime

Closed by SetConnMaxLifetime

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101031 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="repo-updater"}[5m]))


repo-updater: closed_max_idle_time

Closed by SetConnMaxIdleTime

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101032 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="repo-updater"}[5m]))


Repo Updater: Container monitoring (not available on server)

repo-updater: container_missing

Container missing

This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.

  • Kubernetes:
    • Determine if the pod was OOM killed using kubectl describe pod repo-updater (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p repo-updater.
  • Docker Compose:
    • Determine if the pod was OOM killed using docker inspect -f '{{json .State}}' repo-updater (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the repo-updater container in docker-compose.yml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using docker logs repo-updater (note this will include logs from the previous and currently running container).

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101100 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: count by(name) ((time() - container_last_seen{name=~"^repo-updater.*"}) > 60)


repo-updater: container_cpu_usage

Container cpu usage total (1m average) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101101 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: cadvisor_container_cpu_usage_percentage_total{name=~"^repo-updater.*"}


repo-updater: container_memory_usage

Container memory usage by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101102 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: cadvisor_container_memory_usage_percentage_total{name=~"^repo-updater.*"}


repo-updater: fs_io_operations

Filesystem reads and writes rate by instance over 1h

This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with repo-updater issues.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101103 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by(name) (rate(container_fs_reads_total{name=~"^repo-updater.*"}[1h]) + rate(container_fs_writes_total{name=~"^repo-updater.*"}[1h]))


Repo Updater: Provisioning indicators (not available on server)

repo-updater: provisioning_container_cpu_usage_long_term

Container cpu usage total (90th percentile over 1d) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101200 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^repo-updater.*"}[1d])


repo-updater: provisioning_container_memory_usage_long_term

Container memory usage (1d maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101201 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^repo-updater.*"}[1d])


repo-updater: provisioning_container_cpu_usage_short_term

Container cpu usage total (5m maximum) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101210 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^repo-updater.*"}[5m])


repo-updater: provisioning_container_memory_usage_short_term

Container memory usage (5m maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101211 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^repo-updater.*"}[5m])


repo-updater: container_oomkill_events_total

Container OOMKILL events total by instance

This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. When it occurs frequently, it is an indicator of underprovisioning.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101212 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max by (name) (container_oom_events_total{name=~"^repo-updater.*"})


Repo Updater: Golang runtime monitoring

repo-updater: go_goroutines

Maximum active goroutines

A high value here indicates a possible goroutine leak.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101300 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max by(instance) (go_goroutines{job=~".*repo-updater"})


repo-updater: go_gc_duration_seconds

Maximum go garbage collection duration

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101301 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: max by(instance) (go_gc_duration_seconds{job=~".*repo-updater"})
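
Both go_goroutines and go_gc_duration_seconds come from the standard Go runtime collector in prometheus/client_golang rather than service-specific instrumentation. A minimal sketch of exposing them; the :8080 address is arbitrary:

package main

import (
    "net/http"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/collectors"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
    // The Go collector exports go_goroutines, go_gc_duration_seconds, and
    // other runtime series that these panels query.
    reg := prometheus.NewRegistry()
    reg.MustRegister(collectors.NewGoCollector())

    http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
    _ = http.ListenAndServe(":8080", nil)
}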


Repo Updater: Kubernetes monitoring (only available on Kubernetes)

repo-updater: pods_available_percentage

Percentage pods available

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101400 on your Sourcegraph instance.

Managed by the Sourcegraph Repo Management team.

Technical details

Query: sum by(app) (up{app=~".*repo-updater"}) / count by (app) (up{app=~".*repo-updater"}) * 100


Searcher

Performs unindexed searches (diff and commit search, text search for unindexed branches).

To see this dashboard, visit /-/debug/grafana/d/searcher/searcher on your Sourcegraph instance.

searcher: traffic

Requests per second by code over 10m

This graph is the average number of requests per second searcher is experiencing over the last 10 minutes.

The code is the HTTP status code. 200 is success. We have a special code "canceled" which is common when doing a large search request and we find enough results before searching all possible repos.

Note: A search query is translated into an unindexed search query per unique (repo, commit). This means a single user query may result in thousands of requests to searcher.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100000 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum by (code) (rate(searcher_service_request_total{instance=~${instance:regex}}[10m]))
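
A counter of this shape distinguishes ordinary HTTP status codes from the special "canceled" outcome by inspecting the request's error. A minimal Go sketch, illustrative rather than searcher's actual code; the record helper is hypothetical:

package searchermetrics

import (
    "context"
    "errors"
    "strconv"

    "github.com/prometheus/client_golang/prometheus"
)

// requestTotal mirrors the shape of searcher_service_request_total: requests
// labelled by an HTTP-style code, plus the special "canceled" value used when
// the client stops a search early because enough results were already found.
var requestTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
    Name: "searcher_service_request_total",
    Help: "Number of unindexed search requests, by result code.",
}, []string{"code"})

func init() { prometheus.MustRegister(requestTotal) }

// record maps an outcome to the code label, treating context cancellation as
// its own code rather than an error.
func record(status int, err error) {
    code := strconv.Itoa(status)
    if errors.Is(err, context.Canceled) {
        code = "canceled"
    }
    requestTotal.WithLabelValues(code).Inc()
}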


searcher: replica_traffic

Requests per second per replica over 10m

This graph is the average number of requests per second searcher is experiencing over the last 10 minutes broken down per replica.

The code is the HTTP status code. 200 is success. We have a special code "canceled" which is common when doing a large search request and we find enough results before searching all possible repos.

Note: A search query is translated into an unindexed search query per unique (repo, commit). This means a single user query may result in thousands of requests to searcher.

Refe