Dashboards reference

This document is a complete reference for Sourcegraph's available dashboards, along with details on how to interpret their panels and metrics.

To learn more about Sourcegraph's metrics and how to view these dashboards, see our metrics guide.

Frontend

Serves all end-user browser and API requests.

To see this dashboard, visit /-/debug/grafana/d/frontend/frontend on your Sourcegraph instance.

Frontend: Search at a glance

frontend: 99th_percentile_search_request_duration

99th percentile successful search request duration over 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100000 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.99, sum by (le)(rate(src_search_streaming_latency_seconds_bucket{source="browser"}[5m])))
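
The le label holds the histogram's cumulative bucket boundaries, and histogram_quantile interpolates the 99th percentile from them. As a hedged sketch of how this query can be adapted during an investigation, keeping an extra label in the sum breaks the latency down per series; for example, assuming the metric carries the usual Prometheus instance label:

histogram_quantile(0.99, sum by (le, instance)(rate(src_search_streaming_latency_seconds_bucket{source="browser"}[5m])))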


frontend: 90th_percentile_search_request_duration

90th percentile successful search request duration over 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100001 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.90, sum by (le)(rate(src_search_streaming_latency_seconds_bucket{source="browser"}[5m])))


frontend: hard_timeout_search_responses

Hard timeout search responses every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100010 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: (sum(increase(src_graphql_search_response{status="timeout",source="browser",request_name!="CodeIntelSearch"}[5m])) + sum(increase(src_graphql_search_response{status="alert",alert_type="timed_out",source="browser",request_name!="CodeIntelSearch"}[5m]))) / sum(increase(src_graphql_search_response{source="browser",request_name!="CodeIntelSearch"}[5m])) * 100
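
This query reports a percentage: the numerator adds hard timeouts (status="timeout") to searches that returned a timed-out alert (status="alert", alert_type="timed_out"), and the denominator counts all browser search responses, so 12 timed-out responses out of 600 would read as 2%. When digging into a spike, it can help to plot the two numerator terms separately; a sketch of the first term on its own:

sum(increase(src_graphql_search_response{status="timeout",source="browser",request_name!="CodeIntelSearch"}[5m]))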


frontend: hard_error_search_responses

Hard error search responses every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100011 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum by (status)(increase(src_graphql_search_response{status=~"error",source="browser",request_name!="CodeIntelSearch"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="browser",request_name!="CodeIntelSearch"}[5m])) * 100
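
The ignoring(status) group_left clause is standard PromQL many-to-one vector matching: the numerator has one series per status value while the denominator is a single series without that label, so matching ignores status and group_left lets every left-hand series divide by the same total. A minimal sketch of the same shape, using hypothetical metric names rather than anything from this dashboard:

sum by (status)(increase(errors_total[5m])) / ignoring(status) group_left sum(increase(requests_total[5m]))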


frontend: partial_timeout_search_responses

Partial timeout search responses every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100012 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum by (status)(increase(src_graphql_search_response{status="partial_timeout",source="browser",request_name!="CodeIntelSearch"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="browser",request_name!="CodeIntelSearch"}[5m])) * 100


frontend: search_alert_user_suggestions

Search alert user suggestions shown every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100013 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum by (alert_type)(increase(src_graphql_search_response{status="alert",alert_type!~"timed_out|no_results__suggest_quotes",source="browser",request_name!="CodeIntelSearch"}[5m])) / ignoring(alert_type) group_left sum(increase(src_graphql_search_response{source="browser",request_name!="CodeIntelSearch"}[5m])) * 100


frontend: page_load_latency

90th percentile page load latency over all routes over 10m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100020 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: histogram_quantile(0.9, sum by(le) (rate(src_http_request_duration_seconds_bucket{route!="raw",route!="blob",route!~"graphql.*"}[10m])))


frontend: blob_load_latency

90th percentile blob load latency over 10m. The 90th percentile of API calls to the blob route in the frontend API is at 5 seconds or more, meaning calls to the blob route are slow to return a response. The blob API route provides the files and code snippets that the UI displays. When this alert fires, the UI will likely experience delays loading files and code snippets. It is likely that the gitserver and/or frontend services are experiencing issues, leading to slower responses.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100021 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: histogram_quantile(0.9, sum by(le) (rate(src_http_request_duration_seconds_bucket{route="blob"}[10m])))
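
Since slow blob responses often trace back to gitserver, one hedged way to test that hypothesis is to graph the frontend's gitserver client latency (via the src_gitserver_client_duration_seconds_bucket metric documented later on this page) over the same window and see whether it rises in step:

histogram_quantile(0.9, sum by (le)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[10m])))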


Frontend: Search-based code intelligence at a glance

frontend: 99th_percentile_search_codeintel_request_duration

99th percentile code-intel successful search request duration over 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100100 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.99, sum by (le)(rate(src_graphql_field_seconds_bucket{type="Search",field="results",error="false",source="browser",request_name="CodeIntelSearch"}[5m])))


frontend: 90th_percentile_search_codeintel_request_duration

90th percentile code-intel successful search request duration over 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100101 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.90, sum by (le)(rate(src_graphql_field_seconds_bucket{type="Search",field="results",error="false",source="browser",request_name="CodeIntelSearch"}[5m])))


frontend: hard_timeout_search_codeintel_responses

Hard timeout search code-intel responses every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100110 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: (sum(increase(src_graphql_search_response{status="timeout",source="browser",request_name="CodeIntelSearch"}[5m])) + sum(increase(src_graphql_search_response{status="alert",alert_type="timed_out",source="browser",request_name="CodeIntelSearch"}[5m]))) / sum(increase(src_graphql_search_response{source="browser",request_name="CodeIntelSearch"}[5m])) * 100


frontend: hard_error_search_codeintel_responses

Hard error search code-intel responses every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100111 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum by (status)(increase(src_graphql_search_response{status=~"error",source="browser",request_name="CodeIntelSearch"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="browser",request_name="CodeIntelSearch"}[5m])) * 100


frontend: partial_timeout_search_codeintel_responses

Partial timeout search code-intel responses every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100112 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum by (status)(increase(src_graphql_search_response{status="partial_timeout",source="browser",request_name="CodeIntelSearch"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="browser",request_name="CodeIntelSearch"}[5m])) * 100


frontend: search_codeintel_alert_user_suggestions

Search code-intel alert user suggestions shown every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100113 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum by (alert_type)(increase(src_graphql_search_response{status="alert",alert_type!~"timed_out",source="browser",request_name="CodeIntelSearch"}[5m])) / ignoring(alert_type) group_left sum(increase(src_graphql_search_response{source="browser",request_name="CodeIntelSearch"}[5m])) * 100


Frontend: Search GraphQL API usage at a glance

frontend: 99th_percentile_search_api_request_duration

99th percentile successful search API request duration over 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100200 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.99, sum by (le)(rate(src_graphql_field_seconds_bucket{type="Search",field="results",error="false",source="other"}[5m])))


frontend: 90th_percentile_search_api_request_duration

90th percentile successful search API request duration over 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100201 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.90, sum by (le)(rate(src_graphql_field_seconds_bucket{type="Search",field="results",error="false",source="other"}[5m])))


frontend: hard_error_search_api_responses

Hard error search API responses every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100210 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum by (status)(increase(src_graphql_search_response{status=~"error",source="other"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="other"}[5m]))


frontend: partial_timeout_search_api_responses

Partial timeout search API responses every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100211 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum(increase(src_graphql_search_response{status="partial_timeout",source="other"}[5m])) / sum(increase(src_graphql_search_response{source="other"}[5m]))


frontend: search_api_alert_user_suggestions

Search API alert user suggestions shown every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100212 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum by (alert_type)(increase(src_graphql_search_response{status="alert",alert_type!~"timed_out|no_results__suggest_quotes",source="other"}[5m])) / ignoring(alert_type) group_left sum(increase(src_graphql_search_response{status="alert",source="other"}[5m]))


Frontend: Site configuration client update latency

frontend: frontend_site_configuration_duration_since_last_successful_update_by_instance

Duration since last successful site configuration update (by instance)

The duration since the configuration client used by the "frontend" service last successfully updated its site configuration. Long durations could indicate issues updating the site configuration.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100300 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: src_conf_client_time_since_last_successful_update_seconds{instance=~"${internalInstance:regex}"}


frontend: frontend_site_configuration_duration_since_last_successful_update_by_instance

Maximum duration since last successful site configuration update (all "frontend" instances)

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100301 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max(max_over_time(src_conf_client_time_since_last_successful_update_seconds{instance=~"${internalInstance:regex}"}[1m]))
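
max_over_time smooths the gauge over each 1m step and the outer max collapses all matching instances to the single worst value, so one stalled instance cannot be masked by healthy ones. As an illustrative sketch only (not the shipped alert condition), a hypothetical check that any instance has gone more than five minutes without a successful update would be:

max(max_over_time(src_conf_client_time_since_last_successful_update_seconds{instance=~"${internalInstance:regex}"}[1m])) > 300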


Frontend: Codeintel: Precise code intelligence usage at a glance

frontend: codeintel_resolvers_total

Aggregate graphql operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100400 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_resolvers_99th_percentile_duration

Aggregate successful graphql operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100401 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_resolvers_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
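
Unlike the percentile panels, this query keeps one series per le bucket boundary, which Grafana renders as a duration heatmap rather than a single line; despite the panel name, it shows a distribution, not a quantile. If a single aggregate p99 line is wanted instead, a sketch that wraps the same expression:

histogram_quantile(0.99, sum by (le)(rate(src_codeintel_resolvers_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))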


frontend: codeintel_resolvers_errors_total

Aggregate graphql operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100402 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_resolvers_error_rate

Aggregate graphql operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100403 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
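
Note that the denominator adds the two counters, which implies src_codeintel_resolvers_total counts successful operations: for example, 5 errors against 95 successes yields 5 / (95 + 5) * 100 = 5%. Every error-rate panel on this page uses the same shape.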


frontend: codeintel_resolvers_total

Graphql operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100410 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_resolvers_99th_percentile_duration

99th percentile successful graphql operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100411 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_resolvers_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: codeintel_resolvers_errors_total

Graphql operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100412 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_resolvers_error_rate

Graphql operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100413 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Codeintel: Auto-index enqueuer

frontend: codeintel_autoindex_enqueuer_total

Aggregate enqueuer operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100500 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_autoindex_enqueuer_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_autoindex_enqueuer_99th_percentile_duration

Aggregate successful enqueuer operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100501 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_autoindex_enqueuer_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_autoindex_enqueuer_errors_total

Aggregate enqueuer operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100502 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_autoindex_enqueuer_error_rate

Aggregate enqueuer operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100503 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_autoindex_enqueuer_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: codeintel_autoindex_enqueuer_total

Enqueuer operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100510 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_autoindex_enqueuer_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_autoindex_enqueuer_99th_percentile_duration

99th percentile successful enqueuer operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100511 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_autoindex_enqueuer_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: codeintel_autoindex_enqueuer_errors_total

Enqueuer operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100512 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_autoindex_enqueuer_error_rate

Enqueuer operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100513 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindex_enqueuer_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Codeintel: dbstore stats

frontend: codeintel_uploads_store_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100600 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploads_store_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100601 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploads_store_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100602 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploads_store_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100603 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_uploads_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: codeintel_uploads_store_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100610 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploads_store_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100611 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: codeintel_uploads_store_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100612 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploads_store_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100613 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Workerutil: lsif_indexes dbworker/store stats

frontend: workerutil_dbworker_store_codeintel_index_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100700 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_codeintel_index_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: workerutil_dbworker_store_codeintel_index_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100701 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_workerutil_dbworker_store_codeintel_index_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: workerutil_dbworker_store_codeintel_index_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100702 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_codeintel_index_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: workerutil_dbworker_store_codeintel_index_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100703 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_codeintel_index_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_codeintel_index_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_workerutil_dbworker_store_codeintel_index_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Codeintel: lsifstore stats

frontend: codeintel_uploads_lsifstore_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100800 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploads_lsifstore_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100801 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploads_lsifstore_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100802 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploads_lsifstore_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100803 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: codeintel_uploads_lsifstore_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100810 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploads_lsifstore_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100811 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: codeintel_uploads_lsifstore_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100812 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploads_lsifstore_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100813 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Codeintel: gitserver client

frontend: codeintel_gitserver_total

Aggregate client operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100900 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_gitserver_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_gitserver_99th_percentile_duration

Aggregate successful client operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100901 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_gitserver_errors_total

Aggregate client operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100902 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_gitserver_error_rate

Aggregate client operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100903 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_gitserver_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: codeintel_gitserver_total

Client operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100910 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_gitserver_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_gitserver_99th_percentile_duration

99th percentile successful client operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100911 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: codeintel_gitserver_errors_total

Client operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100912 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_gitserver_error_rate

Client operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100913 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_gitserver_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Codeintel: uploadstore stats

frontend: codeintel_uploadstore_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101000 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploadstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploadstore_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101001 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_uploadstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploadstore_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101002 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploadstore_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101003 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_uploadstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: codeintel_uploadstore_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101010 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploadstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploadstore_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101011 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploadstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: codeintel_uploadstore_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101012 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_uploadstore_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101013 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_uploadstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Codeintel: dependencies service stats

frontend: codeintel_dependencies_total

Aggregate service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101100 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependencies_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_99th_percentile_duration

Aggregate successful service operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101101 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_dependencies_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_errors_total

Aggregate service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101102 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_error_rate

Aggregate service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101103 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_dependencies_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: codeintel_dependencies_total

Service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101110 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_dependencies_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_99th_percentile_duration

99th percentile successful service operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101111 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_dependencies_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: codeintel_dependencies_errors_total

Service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101112 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_error_rate

Service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101113 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_dependencies_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Codeintel: dependencies service store stats

frontend: codeintel_dependencies_background_total

Aggregate service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101200 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_background_99th_percentile_duration

Aggregate successful service operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101201 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_dependencies_background_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_background_errors_total

Aggregate service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101202 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_background_error_rate

Aggregate service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101203 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: codeintel_dependencies_background_total

Service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101210 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_background_99th_percentile_duration

99th percentile successful service operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101211 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_dependencies_background_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: codeintel_dependencies_background_errors_total

Service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101212 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_background_error_rate

Service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101213 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Codeintel: dependencies service background stats

frontend: codeintel_dependencies_background_total

Aggregate service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101300 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_background_99th_percentile_duration

Aggregate successful service operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101301 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_dependencies_background_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_background_errors_total

Aggregate service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101302 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_background_error_rate

Aggregate service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101303 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: codeintel_dependencies_background_total

Service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101310 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_background_99th_percentile_duration

99th percentile successful service operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101311 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_dependencies_background_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: codeintel_dependencies_background_errors_total

Service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101312 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_dependencies_background_error_rate

Service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101313 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Codeintel: lockfiles service stats

frontend: codeintel_lockfiles_total

Aggregate service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101400 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_lockfiles_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_lockfiles_99th_percentile_duration

Aggregate successful service operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101401 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_lockfiles_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_lockfiles_errors_total

Aggregate service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101402 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_lockfiles_error_rate

Aggregate service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101403 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_lockfiles_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: codeintel_lockfiles_total

Service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101410 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_lockfiles_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_lockfiles_99th_percentile_duration

99th percentile successful service operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101411 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_lockfiles_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: codeintel_lockfiles_errors_total

Service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101412 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: codeintel_lockfiles_error_rate

Service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101413 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_lockfiles_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Gitserver: Gitserver Client

frontend: gitserver_client_total

Aggregate graphql operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101500 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: gitserver_client_99th_percentile_duration

Aggregate successful graphql operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101501 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (le)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: gitserver_client_errors_total

Aggregate graphql operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101502 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: gitserver_client_error_rate

Aggregate graphql operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101503 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: gitserver_client_total

Graphql operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101510 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (op,scope)(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: gitserver_client_99th_percentile_duration

99th percentile successful gitserver client operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101511 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op,scope)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: gitserver_client_errors_total

Gitserver client operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101512 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: gitserver_client_error_rate

Gitserver client operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101513 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op,scope)(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Batches: dbstore stats

frontend: batches_dbstore_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101600 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_dbstore_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101601 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (le)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_dbstore_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101602 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_dbstore_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101603 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_batches_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: batches_dbstore_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101610 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_dbstore_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101611 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: batches_dbstore_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101612 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_dbstore_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101613 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_batches_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Batches: service stats

frontend: batches_service_total

Aggregate service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101700 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_service_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_service_99th_percentile_duration

Aggregate successful service operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101701 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (le)(rate(src_batches_service_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_service_errors_total

Aggregate service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101702 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_service_error_rate

Aggregate service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101703 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_batches_service_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: batches_service_total

Service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101710 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_service_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_service_99th_percentile_duration

99th percentile successful service operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101711 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_service_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: batches_service_errors_total

Service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101712 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_service_error_rate

Service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101713 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_batches_service_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Batches: Workspace execution dbstore

frontend: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101800 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101801 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101802 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101803 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Batches: HTTP API File Handler

frontend: batches_httpapi_total

Aggregate HTTP handler operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101900 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_httpapi_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_httpapi_99th_percentile_duration

Aggregate successful HTTP handler operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101901 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (le)(rate(src_batches_httpapi_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_httpapi_errors_total

Aggregate HTTP handler operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101902 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_httpapi_error_rate

Aggregate HTTP handler operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101903 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_batches_httpapi_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


frontend: batches_httpapi_total

HTTP handler operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101910 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_httpapi_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_httpapi_99th_percentile_duration

99th percentile successful HTTP handler operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101911 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_httpapi_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: batches_httpapi_errors_total

HTTP handler operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101912 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: batches_httpapi_error_rate

HTTP handler operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101913 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_batches_httpapi_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Out-of-band migrations: up migration invocation (one batch processed)

frontend: oobmigration_total

Migration handler operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102000 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_oobmigration_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: oobmigration_99th_percentile_duration

Aggregate successful migration handler operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102001 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_oobmigration_duration_seconds_bucket{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: oobmigration_errors_total

Migration handler operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102002 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_oobmigration_errors_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: oobmigration_error_rate

Migration handler operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102003 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_oobmigration_errors_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_oobmigration_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_oobmigration_errors_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Out-of-band migrations: down migration invocation (one batch processed)

frontend: oobmigration_total

Migration handler operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102100 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_oobmigration_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: oobmigration_99th_percentile_duration

Aggregate successful migration handler operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102101 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_oobmigration_duration_seconds_bucket{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: oobmigration_errors_total

Migration handler operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102102 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_oobmigration_errors_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: oobmigration_error_rate

Migration handler operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102103 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_oobmigration_errors_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_oobmigration_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_oobmigration_errors_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Frontend: Zoekt Configuration GRPC server metrics

frontend: zoekt_configuration_grpc_request_rate_all_methods

Request rate across all methods over 2m

The number of gRPC requests received per second across all methods, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102200 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum(rate(grpc_server_started_total{instance=~${internalInstance:regex},grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m]))
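
Note that ${internalInstance:regex} is a Grafana dashboard variable, so this query will not parse if pasted directly into Prometheus. As a sketch, assuming you want to match every instance, substitute a concrete regex for the variable:

sum(rate(grpc_server_started_total{instance=~".*",grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m]))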


frontend: zoekt_configuration_grpc_request_rate_per_method

Request rate per-method over 2m

The number of gRPC requests received per second broken out per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102201 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum(rate(grpc_server_started_total{grpc_method=~${zoekt_configuration_method:regex},instance=~${internalInstance:regex},grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])) by (grpc_method)


frontend: zoekt_configuration_error_percentage_all_methods

Error percentage across all methods over 2m

The percentage of gRPC requests that fail across all methods, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102210 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (100.0 * ( (sum(rate(grpc_server_handled_total{grpc_code!="OK",instance=~${internalInstance:regex},grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m]))) / (sum(rate(grpc_server_handled_total{instance=~${internalInstance:regex},grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m]))) ))


frontend: zoekt_configuration_grpc_error_percentage_per_method

Error percentage per-method over 2m

The percentage of gRPC requests that fail per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102211 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (100.0 * ( (sum(rate(grpc_server_handled_total{grpc_method=~${zoekt_configuration_method:regex},grpc_code!="OK",instance=~${internalInstance:regex},grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])) by (grpc_method)) / (sum(rate(grpc_server_handled_total{grpc_method=~${zoekt_configuration_method:regex},instance=~${internalInstance:regex},grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])) by (grpc_method)) ))


frontend: zoekt_configuration_p99_response_time_per_method

99th percentile response time per method over 2m

The 99th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102220 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.99, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~${zoekt_configuration_method:regex},instance=~${internalInstance:regex},grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))


frontend: zoekt_configuration_p90_response_time_per_method

90th percentile response time per method over 2m

The 90th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102221 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~${zoekt_configuration_method:regex},instance=~${internalInstance:regex},grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))


frontend: zoekt_configuration_p75_response_time_per_method

75th percentile response time per method over 2m

The 75th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102222 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~${zoekt_configuration_method:regex},instance=~${internalInstance:regex},grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))


frontend: zoekt_configuration_p99_9_response_size_per_method

99.9th percentile total response size per method over 2m

The 99.9th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102230 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~${zoekt_configuration_method:regex},instance=~${internalInstance:regex},grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))


frontend: zoekt_configuration_p90_response_size_per_method

90th percentile total response size per method over 2m

The 90th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102231 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~${zoekt_configuration_method:regex},instance=~${internalInstance:regex},grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))


frontend: zoekt_configuration_p75_response_size_per_method

75th percentile total response size per method over 2m

The 75th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102232 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~${zoekt_configuration_method:regex},instance=~${internalInstance:regex},grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))


frontend: zoekt_configuration_p99_9_invididual_sent_message_size_per_method

99.9th percentile individual sent message size per method over 2m

The 99.9th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102240 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~${zoekt_configuration_method:regex},instance=~${internalInstance:regex},grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))


frontend: zoekt_configuration_p90_invididual_sent_message_size_per_method

90th percentile individual sent message size per method over 2m

The 90th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102241 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~${zoekt_configuration_method:regex},instance=~${internalInstance:regex},grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))


frontend: zoekt_configuration_p75_invididual_sent_message_size_per_method

75th percentile individual sent message size per method over 2m

The 75th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102242 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~${zoekt_configuration_method:regex},instance=~${internalInstance:regex},grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))


frontend: zoekt_configuration_grpc_response_stream_message_count_per_method

Average streaming response message count per-method over 2m

The average number of response messages sent during a streaming RPC method, broken out per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102250 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: ((sum(rate(grpc_server_msg_sent_total{grpc_type="server_stream",instance=~${internalInstance:regex},grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])) by (grpc_method))/(sum(rate(grpc_server_started_total{grpc_type="server_stream",instance=~${internalInstance:regex},grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])) by (grpc_method)))
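
Reading this ratio with hypothetical values: if a streaming method sends 30 response messages per second while 2 such RPCs start per second, the panel reports an average of 30 / 2 = 15 messages per stream.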


frontend: zoekt_configuration_grpc_all_codes_per_method

Response codes rate per-method over 2m

The rate of all generated gRPC response codes per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102260 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum(rate(grpc_server_handled_total{grpc_method=~${zoekt_configuration_method:regex},instance=~${internalInstance:regex},grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])) by (grpc_method, grpc_code)


Frontend: Zoekt Configuration GRPC "internal error" metrics

frontend: zoekt_configuration_grpc_clients_error_percentage_all_methods

Client baseline error percentage across all methods over 2m

The percentage of gRPC requests that fail across all methods (regardless of whether or not there was an internal error), aggregated across all "zoekt_configuration" clients.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102300 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_code!="OK"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))))))


frontend: zoekt_configuration_grpc_clients_error_percentage_per_method

Client baseline error percentage per-method over 2m

The percentage of gRPC requests that fail per method (regardless of whether or not there was an internal error), aggregated across all "zoekt_configuration" clients.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102301 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_method=~"${zoekt_configuration_method:regex}",grpc_code!="OK"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_method=~"${zoekt_configuration_method:regex}"}[2m])) by (grpc_method))))))


frontend: zoekt_configuration_grpc_clients_all_codes_per_method

Client baseline response codes rate per-method over 2m

The rate of all generated gRPC response codes per method (regardless of whether or not there was an internal error), aggregated across all "zoekt_configuration" clients.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102302 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (sum(rate(src_grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_method=~"${zoekt_configuration_method:regex}"}[2m])) by (grpc_method, grpc_code))


frontend: zoekt_configuration_grpc_clients_internal_error_percentage_all_methods

Client-observed gRPC internal error percentage across all methods over 2m

The percentage of gRPC requests that appear to fail due to gRPC internal errors across all methods, aggregated across all "zoekt_configuration" clients.

Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "zoekt_configuration" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.

Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102310 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_code!="OK",is_internal_error="true"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))))))


frontend: zoekt_configuration_grpc_clients_internal_error_percentage_per_method

Client-observed gRPC internal error percentage per-method over 2m

The percentage of gRPC requests that appear to fail due to gRPC internal errors per method, aggregated across all "zoekt_configuration" clients.

Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "zoekt_configuration" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.

Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102311 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_method=~"${zoekt_configuration_method:regex}",grpc_code!="OK",is_internal_error="true"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_method=~"${zoekt_configuration_method:regex}"}[2m])) by (grpc_method))))))


frontend: zoekt_configuration_grpc_clients_internal_error_all_codes_per_method

Client-observed gRPC internal error response code rate per-method over 2m

The rate of gRPC internal-error response codes per method, aggregated across all "zoekt_configuration" clients.

Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "zoekt_configuration" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.

Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102312 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (sum(rate(src_grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",is_internal_error="true",grpc_method=~"${zoekt_configuration_method:regex}"}[2m])) by (grpc_method, grpc_code))


Frontend: Internal Api GRPC server metrics

frontend: internal_api_grpc_request_rate_all_methods

Request rate across all methods over 2m

The number of gRPC requests received per second across all methods, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102400 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum(rate(grpc_server_started_total{instance=~${internalInstance:regex},grpc_service=~"api.internalapi.v1.ConfigService"}[2m]))


frontend: internal_api_grpc_request_rate_per_method

Request rate per-method over 2m

The number of gRPC requests received per second broken out per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102401 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum(rate(grpc_server_started_total{grpc_method=~${internal_api_method:regex},instance=~${internalInstance:regex},grpc_service=~"api.internalapi.v1.ConfigService"}[2m])) by (grpc_method)


frontend: internal_api_error_percentage_all_methods

Error percentage across all methods over 2m

The percentage of gRPC requests that fail across all methods, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102410 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (100.0 * ( (sum(rate(grpc_server_handled_total{grpc_code!="OK",instance=~${internalInstance:regex},grpc_service=~"api.internalapi.v1.ConfigService"}[2m]))) / (sum(rate(grpc_server_handled_total{instance=~${internalInstance:regex},grpc_service=~"api.internalapi.v1.ConfigService"}[2m]))) ))


frontend: internal_api_grpc_error_percentage_per_method

Error percentage per-method over 2m

The percentage of gRPC requests that fail per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102411 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (100.0 * ( (sum(rate(grpc_server_handled_total{grpc_method=~${internal_api_method:regex},grpc_code!="OK",instance=~${internalInstance:regex},grpc_service=~"api.internalapi.v1.ConfigService"}[2m])) by (grpc_method)) / (sum(rate(grpc_server_handled_total{grpc_method=~${internal_api_method:regex},instance=~${internalInstance:regex},grpc_service=~"api.internalapi.v1.ConfigService"}[2m])) by (grpc_method)) ))


frontend: internal_api_p99_response_time_per_method

99th percentile response time per method over 2m

The 99th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102420 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.99, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~${internal_api_method:regex},instance=~${internalInstance:regex},grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))


frontend: internal_api_p90_response_time_per_method

90th percentile response time per method over 2m

The 90th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102421 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~${internal_api_method:regex},instance=~${internalInstance:regex},grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))


frontend: internal_api_p75_response_time_per_method

75th percentile response time per method over 2m

The 75th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102422 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~${internal_api_method:regex},instance=~${internalInstance:regex},grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))


frontend: internal_api_p99_9_response_size_per_method

99.9th percentile total response size per method over 2m

The 99.9th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102430 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~${internal_api_method:regex},instance=~${internalInstance:regex},grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))


frontend: internal_api_p90_response_size_per_method

90th percentile total response size per method over 2m

The 90th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102431 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~${internal_api_method:regex},instance=~${internalInstance:regex},grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))


frontend: internal_api_p75_response_size_per_method

75th percentile total response size per method over 2m

The 75th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102432 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~${internal_api_method:regex},instance=~${internalInstance:regex},grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))


frontend: internal_api_p99_9_invididual_sent_message_size_per_method

99.9th percentile individual sent message size per method over 2m

The 99.9th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102440 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~${internal_api_method:regex},instance=~${internalInstance:regex},grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))


frontend: internal_api_p90_invididual_sent_message_size_per_method

90th percentile individual sent message size per method over 2m

The 90th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102441 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~${internal_api_method:regex},instance=~${internalInstance:regex},grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))


frontend: internal_api_p75_invididual_sent_message_size_per_method

75th percentile individual sent message size per method over 2m

The 75th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102442 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~${internal_api_method:regex},instance=~${internalInstance:regex},grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))


frontend: internal_api_grpc_response_stream_message_count_per_method

Average streaming response message count per-method over 2m

The average number of response messages sent during a streaming RPC method, broken out per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102450 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: ((sum(rate(grpc_server_msg_sent_total{grpc_type="server_stream",instance=~${internalInstance:regex},grpc_service=~"api.internalapi.v1.ConfigService"}[2m])) by (grpc_method))/(sum(rate(grpc_server_started_total{grpc_type="server_stream",instance=~${internalInstance:regex},grpc_service=~"api.internalapi.v1.ConfigService"}[2m])) by (grpc_method)))


frontend: internal_api_grpc_all_codes_per_method

Response codes rate per-method over 2m

The rate of all generated gRPC response codes per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102460 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum(rate(grpc_server_handled_total{grpc_method=~${internal_api_method:regex},instance=~${internalInstance:regex},grpc_service=~"api.internalapi.v1.ConfigService"}[2m])) by (grpc_method, grpc_code)


Frontend: Internal Api GRPC "internal error" metrics

frontend: internal_api_grpc_clients_error_percentage_all_methods

Client baseline error percentage across all methods over 2m

The percentage of gRPC requests that fail across all methods (regardless of whether or not there was an internal error), aggregated across all "internal_api" clients.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102500 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService",grpc_code!="OK"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))))))


frontend: internal_api_grpc_clients_error_percentage_per_method

Client baseline error percentage per-method over 2m

The percentage of gRPC requests that fail per method (regardless of whether or not there was an internal error), aggregated across all "internal_api" clients.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102501 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService",grpc_method=~"${internal_api_method:regex}",grpc_code!="OK"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService",grpc_method=~"${internal_api_method:regex}"}[2m])) by (grpc_method))))))


frontend: internal_api_grpc_clients_all_codes_per_method

Client baseline response codes rate per-method over 2m

The rate of all generated gRPC response codes per method (regardless of whether or not there was an internal error), aggregated across all "internal_api" clients.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102502 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService",grpc_method=~"${internal_api_method:regex}"}[2m])) by (grpc_method, grpc_code))


frontend: internal_api_grpc_clients_internal_error_percentage_all_methods

Client-observed gRPC internal error percentage across all methods over 2m

The percentage of gRPC requests that appear to fail due to gRPC internal errors across all methods, aggregated across all "internal_api" clients.

Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "internal_api" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.

Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102510 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService",grpc_code!="OK",is_internal_error="true"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))))))


frontend: internal_api_grpc_clients_internal_error_percentage_per_method

Client-observed gRPC internal error percentage per-method over 2m

The percentage of gRPC requests that appear to fail due to gRPC internal errors per method, aggregated across all "internal_api" clients.

Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "internal_api" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.

Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102511 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService",grpc_method=~"${internal_api_method:regex}",grpc_code!="OK",is_internal_error="true"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService",grpc_method=~"${internal_api_method:regex}"}[2m])) by (grpc_method))))))


frontend: internal_api_grpc_clients_internal_error_all_codes_per_method

Client-observed gRPC internal error response code rate per-method over 2m

The rate of gRPC internal-error response codes per method, aggregated across all "internal_api" clients.

Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "internal_api" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.

Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102512 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService",is_internal_error="true",grpc_method=~"${internal_api_method:regex}"}[2m])) by (grpc_method, grpc_code))


Frontend: Internal service requests

frontend: internal_indexed_search_error_responses

Internal indexed search error responses every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102600 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum by(code) (increase(src_zoekt_request_duration_seconds_count{code!~"2.."}[5m])) / ignoring(code) group_left sum(increase(src_zoekt_request_duration_seconds_count[5m])) * 100
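
The ignoring(code) group_left modifier performs one-to-many vector matching: each per-code numerator series is divided by the single aggregate denominator, which carries no code label. A minimal sketch of the same pattern, using a hypothetical metric my_requests_total with a code label:

sum by (code)(increase(my_requests_total{code!~"2.."}[5m])) / ignoring(code) group_left sum(increase(my_requests_total[5m])) * 100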


frontend: internal_unindexed_search_error_responses

Internal unindexed search error responses every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102601 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum by(code) (increase(searcher_service_request_total{code!~"2.."}[5m])) / ignoring(code) group_left sum(increase(searcher_service_request_total[5m])) * 100


frontend: internalapi_error_responses

Internal API error responses every 5m by route

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102602 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by(category) (increase(src_frontend_internal_request_duration_seconds_count{code!~"2.."}[5m])) / ignoring(code) group_left sum(increase(src_frontend_internal_request_duration_seconds_count[5m])) * 100


frontend: 99th_percentile_gitserver_duration

99th percentile successful gitserver query duration over 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102610 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: histogram_quantile(0.99, sum by (le,category)(rate(src_gitserver_request_duration_seconds_bucket{job=~"(sourcegraph-)?frontend"}[5m])))
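
The job=~"(sourcegraph-)?frontend" matcher accepts both the frontend and sourcegraph-frontend job names, so the panel works regardless of which name a deployment uses.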


frontend: gitserver_error_responses

Gitserver error responses every 5m

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102611 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (category)(increase(src_gitserver_request_duration_seconds_count{job=~"(sourcegraph-)?frontend",code!~"2.."}[5m])) / ignoring(code) group_left sum by (category)(increase(src_gitserver_request_duration_seconds_count{job=~"(sourcegraph-)?frontend"}[5m])) * 100


frontend: observability_test_alert_warning

Warning test alert metric

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102620 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by(owner) (observability_test_metric_warning)


frontend: observability_test_alert_critical

Critical test alert metric

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102621 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by(owner) (observability_test_metric_critical)


Frontend: Authentication API requests

frontend: sign_in_rate

Rate of API requests to sign-in

Rate (QPS) of requests to sign-in

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102700 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(irate(src_http_request_duration_seconds_count{route="sign-in",method="post"}[5m]))


frontend: sign_in_latency_p99

99th percentile of sign-in latency

99th percentile of sign-in latency

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102701 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: histogram_quantile(0.99, sum(rate(src_http_request_duration_seconds_bucket{route="sign-in",method="post"}[5m])) by (le))


frontend: sign_in_error_rate

Percentage of sign-in requests by http code

Percentage of sign-in requests grouped by http code

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102702 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (code)(irate(src_http_request_duration_seconds_count{route="sign-in",method="post"}[5m]))/ ignoring (code) group_left sum(irate(src_http_request_duration_seconds_count{route="sign-in",method="post"}[5m]))*100
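
Because this panel shows all status codes, spotting failures can require mentally filtering out the 2xx series. A hypothetical variant (not part of the dashboard) that shows only non-2xx sign-in responses, using the same metric:

sum by (code)(irate(src_http_request_duration_seconds_count{route="sign-in",method="post",code!~"2.."}[5m])) / ignoring (code) group_left sum(irate(src_http_request_duration_seconds_count{route="sign-in",method="post"}[5m])) * 100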


frontend: sign_up_rate

Rate of API requests to sign-up

Rate (QPS) of requests to sign-up

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102710 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(irate(src_http_request_duration_seconds_count{route="sign-up",method="post"}[5m]))


frontend: sign_up_latency_p99

99th percentile of sign-up latency

99th percentile of sign-up latency

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102711 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: histogram_quantile(0.99, sum(rate(src_http_request_duration_seconds_bucket{route="sign-up",method="post"}[5m])) by (le))


frontend: sign_up_code_percentage

Percentage of sign-up requests by http code

Percentage of sign-up requests grouped by http code

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102712 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (code)(irate(src_http_request_duration_seconds_count{route="sign-up",method="post"}[5m]))/ ignoring (code) group_left sum(irate(src_http_request_duration_seconds_count{route="sign-up",method="post"}[5m]))*100


frontend: sign_out_rate

Rate of API requests to sign-out

Rate (QPS) of requests to sign-out

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102720 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(irate(src_http_request_duration_seconds_count{route="sign-out"}[5m]))


frontend: sign_out_latency_p99

99th percentile of sign-out latency

99th percentile of sign-out latency

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102721 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: histogram_quantile(0.99, sum(rate(src_http_request_duration_seconds_bucket{route="sign-out"}[5m])) by (le))


frontend: sign_out_error_rate

Percentage of sign-out requests that return non-303 http code

Percentage of sign-out requests grouped by http code

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102722 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (code)(irate(src_http_request_duration_seconds_count{route="sign-out"}[5m]))/ ignoring (code) group_left sum(irate(src_http_request_duration_seconds_count{route="sign-out"}[5m]))*100


frontend: account_failed_sign_in_attempts

Rate of failed sign-in attempts

Failed sign-in attempts per minute

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102730 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(rate(src_frontend_account_failed_sign_in_attempts_total[1m]))


frontend: account_lockouts

Rate of account lockouts

Account lockouts per minute

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102731 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(rate(src_frontend_account_lockouts_total[1m]))


Frontend: Cody API requests

frontend: cody_api_rate

Rate of API requests to cody endpoints (excluding GraphQL)

Rate (QPS) of requests to Cody-related endpoints. completions.stream is for the conversational endpoints. completions.code is for the code auto-complete endpoints.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102800 on your Sourcegraph instance.

Managed by the Sourcegraph Cody team.

Technical details

Query: sum by (route, code)(irate(src_http_request_duration_seconds_count{route=~"^completions.*"}[5m]))
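
To isolate a single endpoint, the route label can be pinned to one of the values mentioned above. For example, a sketch (not a dashboard panel) showing only the code auto-complete endpoint, broken down by status code:

sum by (code)(irate(src_http_request_duration_seconds_count{route="completions.code"}[5m]))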


Frontend: Cloud KMS and cache

frontend: cloudkms_cryptographic_requests

Cryptographic requests to Cloud KMS every 1m

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102900 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(increase(src_cloudkms_cryptographic_total[1m]))


frontend: encryption_cache_hit_ratio

Average encryption cache hit ratio per workload

  • Encryption cache hit ratio (hits/(hits+misses)) - minimum across all instances of a workload.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102901 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: min by (kubernetes_name) (src_encryption_cache_hit_total/(src_encryption_cache_hit_total+src_encryption_cache_miss_total))
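
Since the panel has no alert attached, threshold checks have to be done manually. A minimal sketch that lists only workloads whose hit ratio has dropped below 80% (the threshold is illustrative, not a Sourcegraph recommendation):

min by (kubernetes_name) (src_encryption_cache_hit_total/(src_encryption_cache_hit_total+src_encryption_cache_miss_total)) < 0.8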


frontend: encryption_cache_evictions

Rate of encryption cache evictions - sum across all instances of a given workload

  • Rate of encryption cache evictions (caused by cache exceeding its maximum size) - sum across all instances of a workload

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102902 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (kubernetes_name) (irate(src_encryption_cache_eviction_total[5m]))


Frontend: Database connections

frontend: max_open_conns

Maximum open

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103000 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="frontend"})


frontend: open_conns

Established

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103001 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_open{app_name="frontend"})


frontend: in_use

Used

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103010 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="frontend"})


frontend: idle

Idle

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103011 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="frontend"})


frontend: mean_blocked_seconds_per_conn_request

Mean blocked seconds per conn request

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103020 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="frontend"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="frontend"}[5m]))


frontend: closed_max_idle

Closed by SetMaxIdleConns

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103030 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="frontend"}[5m]))


frontend: closed_max_lifetime

Closed by SetConnMaxLifetime

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103031 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="frontend"}[5m]))


frontend: closed_max_idle_time

Closed by SetConnMaxIdleTime

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103032 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="frontend"}[5m]))


Frontend: Container monitoring (not available on server)

frontend: container_missing

Container missing

This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.

  • Kubernetes:
    • Determine if the pod was OOM killed using kubectl describe pod (frontend|sourcegraph-frontend) (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p (frontend|sourcegraph-frontend).
  • Docker Compose:
    • Determine if the pod was OOM killed using docker inspect -f '{{json .State}}' (frontend|sourcegraph-frontend) (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the (frontend|sourcegraph-frontend) container in docker-compose.yml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using docker logs (frontend|sourcegraph-frontend) (note this will include logs from the previous and currently running container).

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103100 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: count by(name) ((time() - container_last_seen{name=~"^(frontend|sourcegraph-frontend).*"}) > 60)


frontend: container_cpu_usage

Container cpu usage total (1m average) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103101 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: cadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}


frontend: container_memory_usage

Container memory usage by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103102 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: cadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}


frontend: fs_io_operations

Filesystem reads and writes rate by instance over 1h

This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with issues in the frontend containers.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103103 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by(name) (rate(container_fs_reads_total{name=~"^(frontend|sourcegraph-frontend).*"}[1h]) + rate(container_fs_writes_total{name=~"^(frontend|sourcegraph-frontend).*"}[1h]))


Frontend: Provisioning indicators (not available on server)

frontend: provisioning_container_cpu_usage_long_term

Container cpu usage total (90th percentile over 1d) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103200 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}[1d])


frontend: provisioning_container_memory_usage_long_term

Container memory usage (1d maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103201 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}[1d])


frontend: provisioning_container_cpu_usage_short_term

Container cpu usage total (5m maximum) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103210 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}[5m])


frontend: provisioning_container_memory_usage_short_term

Container memory usage (5m maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103211 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}[5m])


frontend: container_oomkill_events_total

Container OOMKILL events total by instance

This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103212 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by (name) (container_oom_events_total{name=~"^(frontend|sourcegraph-frontend).*"})


Frontend: Golang runtime monitoring

frontend: go_goroutines

Maximum active goroutines

A high value here indicates a possible goroutine leak.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103300 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by(instance) (go_goroutines{job=~".*(frontend|sourcegraph-frontend)"})


frontend: go_gc_duration_seconds

Maximum go garbage collection duration

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103301 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by(instance) (go_gc_duration_seconds{job=~".*(frontend|sourcegraph-frontend)"})


Frontend: Kubernetes monitoring (only available on Kubernetes)

frontend: pods_available_percentage

Percentage pods available

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103400 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by(app) (up{app=~".*(frontend|sourcegraph-frontend)"}) / count by (app) (up{app=~".*(frontend|sourcegraph-frontend)"}) * 100


Frontend: Search: Ranking

frontend: total_search_clicks

Total number of search clicks over 6h

The total number of search clicks across all search types over a 6 hour window.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103500 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum by (ranked) (increase(src_search_ranking_result_clicked_count[6h]))


frontend: percent_clicks_on_top_search_result

Percent of clicks on top search result over 6h

The percent of clicks that were on the top search result, excluding searches with very few results (3 or fewer).

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103501 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum by (ranked) (increase(src_search_ranking_result_clicked_bucket{le="1",resultsLength=">3"}[6h])) / sum by (ranked) (increase(src_search_ranking_result_clicked_count[6h])) * 100


frontend: percent_clicks_on_top_3_search_results

Percent of clicks on top 3 search results over 6h

The percent of clicks that were on the first 3 search results, excluding searches with very few results (3 or fewer).

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103502 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum by (ranked) (increase(src_search_ranking_result_clicked_bucket{le="3",resultsLength=">3"}[6h])) / sum by (ranked) (increase(src_search_ranking_result_clicked_count[6h])) * 100


frontend: distribution_of_clicked_search_result_type_over_6h_in_percent

Distribution of clicked search result type over 6h

The distribution of clicked search results by result type. At every point in time, the values should sum to 100.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103510 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum(increase(src_search_ranking_result_clicked_count{type="repo"}[6h])) / sum(increase(src_search_ranking_result_clicked_count[6h])) * 100


frontend: percent_zoekt_searches_hitting_flush_limit

Percent of zoekt searches that hit the flush time limit

The percent of Zoekt searches that hit the flush time limit. These searches don't visit all matches, so they could be missing relevant results, or be non-deterministic.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103511 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum(increase(zoekt_final_aggregate_size_count{reason="timer_expired"}[1d])) / sum(increase(zoekt_final_aggregate_size_count[1d])) * 100


Frontend: Email delivery

frontend: email_delivery_failures

Email delivery failure rate over 30 minutes

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103600 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum(increase(src_email_send{success="false"}[30m])) / sum(increase(src_email_send[30m])) * 100


frontend: email_deliveries_total

Total emails successfully delivered every 30 minutes

Total emails successfully delivered.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103610 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum (increase(src_email_send{success="true"}[30m]))


frontend: email_deliveries_by_source

Emails successfully delivered every 30 minutes by source

Emails successfully delivered by source, i.e. product feature.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103611 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (email_source) (increase(src_email_send{success="true"}[30m]))


Frontend: Sentinel queries (only on sourcegraph.com)

frontend: mean_successful_sentinel_duration_over_2h

Mean successful sentinel search duration over 2h

Mean search duration for all successful sentinel queries

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103700 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum(rate(src_search_response_latency_seconds_sum{source=~"searchblitz.*", status="success"}[2h])) / sum(rate(src_search_response_latency_seconds_count{source=~"searchblitz.*", status="success"}[2h]))


frontend: mean_sentinel_stream_latency_over_2h

Mean successful sentinel stream latency over 2h

Mean time to first result for all successful streaming sentinel queries

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103701 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum(rate(src_search_streaming_latency_seconds_sum{source=~"searchblitz.*"}[2h])) / sum(rate(src_search_streaming_latency_seconds_count{source=~"searchblitz.*"}[2h]))


frontend: 90th_percentile_successful_sentinel_duration_over_2h

90th percentile successful sentinel search duration over 2h

90th percentile search duration for all successful sentinel queries

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103710 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.90, sum by (le)(label_replace(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status="success"}[2h]), "source", "$1", "source", "searchblitz_(.*)")))


frontend: 90th_percentile_sentinel_stream_latency_over_2h

90th percentile successful sentinel stream latency over 2h

90th percentile time to first result for all successful streaming sentinel queries

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103711 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.90, sum by (le)(label_replace(rate(src_search_streaming_latency_seconds_bucket{source=~"searchblitz.*"}[2h]), "source", "$1", "source", "searchblitz_(.*)")))


frontend: mean_successful_sentinel_duration_by_query

Mean successful sentinel search duration by query

Mean search duration for successful sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103720 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum(rate(src_search_response_latency_seconds_sum{source=~"searchblitz.*", status="success"}[$sentinel_sampling_duration])) by (source) / sum(rate(src_search_response_latency_seconds_count{source=~"searchblitz.*", status="success"}[$sentinel_sampling_duration])) by (source)
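
Note that $sentinel_sampling_duration (used here and in the panels below) is a Grafana dashboard template variable; it is substituted by Grafana and will not resolve when the query is pasted directly into Prometheus. A sketch of the same query with a concrete 2h window:

sum(rate(src_search_response_latency_seconds_sum{source=~"searchblitz.*", status="success"}[2h])) by (source) / sum(rate(src_search_response_latency_seconds_count{source=~"searchblitz.*", status="success"}[2h])) by (source)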


frontend: mean_sentinel_stream_latency_by_query

Mean successful sentinel stream latency by query

Mean time to first result for successful streaming sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103721 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum(rate(src_search_streaming_latency_seconds_sum{source=~"searchblitz.*"}[$sentinel_sampling_duration])) by (source) / sum(rate(src_search_streaming_latency_seconds_count{source=~"searchblitz.*"}[$sentinel_sampling_duration])) by (source)


frontend: 90th_percentile_successful_sentinel_duration_by_query

90th percentile successful sentinel search duration by query

90th percentile search duration for successful sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103730 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.90, sum(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status="success"}[$sentinel_sampling_duration])) by (le, source))


frontend: 90th_percentile_successful_stream_latency_by_query

90th percentile successful sentinel stream latency by query

90th percentile time to first result for successful streaming sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103731 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.90, sum(rate(src_search_streaming_latency_seconds_bucket{source=~"searchblitz.*"}[$sentinel_sampling_duration])) by (le, source))


frontend: 90th_percentile_unsuccessful_duration_by_query

90th percentile unsuccessful sentinel search duration by query

90th percentile search duration of unsuccessful sentinel queries (by error or timeout), broken down by query. Useful for debugging how the performance of failed requests affects UX.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103740 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.90, sum(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status!="success"}[$sentinel_sampling_duration])) by (le, source))


frontend: 75th_percentile_successful_sentinel_duration_by_query

75th percentile successful sentinel search duration by query

75th percentile search duration of successful sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103750 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.75, sum(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status="success"}[$sentinel_sampling_duration])) by (le, source))


frontend: 75th_percentile_successful_stream_latency_by_query

75th percentile successful sentinel stream latency by query

75th percentile time to first result for successful streaming sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103751 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.75, sum(rate(src_search_streaming_latency_seconds_bucket{source=~"searchblitz.*"}[$sentinel_sampling_duration])) by (le, source))


frontend: 75th_percentile_unsuccessful_duration_by_query

75th percentile unsuccessful sentinel search duration by query

75th percentile search duration of unsuccessful sentinel queries (by error or timeout), broken down by query. Useful for debugging how the performance of failed requests affects UX.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103760 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: histogram_quantile(0.75, sum(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status!="success"}[$sentinel_sampling_duration])) by (le, source))


frontend: unsuccessful_status_rate

Unsuccessful status rate

The rate of unsuccessful sentinel queries, broken down by failure type.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103770 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum(rate(src_graphql_search_response{source=~"searchblitz.*", status!="success"}[$sentinel_sampling_duration])) by (status)


Frontend: Incoming webhooks

frontend: p95_time_to_handle_incoming_webhooks

P95 time to handle incoming webhooks

P95 response time to incoming webhook requests from code hosts.

Increases in response time can point to too much load on the database to keep up with the incoming requests.

See the incoming webhooks documentation for more details on webhook requests: https://docs.sourcegraph.com/admin/config/webhooks/incoming

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103800 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: histogram_quantile(0.95, sum (rate(src_http_request_duration_seconds_bucket{route=~"webhooks|github.webhooks|gitlab.webhooks|bitbucketServer.webhooks|bitbucketCloud.webhooks"}[5m])) by (le, route))


Frontend: Search aggregations: proactive and expanded search aggregations

frontend: insights_aggregations_total

Aggregate search aggregations operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103900 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_insights_aggregations_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: insights_aggregations_99th_percentile_duration

Aggregate successful search aggregations operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103901 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum by (le)(rate(src_insights_aggregations_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: insights_aggregations_errors_total

Aggregate search aggregations operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103902 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: insights_aggregations_error_rate

Aggregate search aggregations operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103903 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_insights_aggregations_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
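
This follows the errors / (successes + errors) pattern used by the operation panels throughout these dashboards: successes and errors are recorded in separate counters, so the denominator is their sum. For example, 5 errors alongside 95 successful operations in the window yields 5 / (95 + 5) * 100 = 5%.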


frontend: insights_aggregations_total

Search aggregations operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103910 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum by (op,extended_mode)(increase(src_insights_aggregations_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: insights_aggregations_99th_percentile_duration

99th percentile successful search aggregations operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103911 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op,extended_mode)(rate(src_insights_aggregations_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))


frontend: insights_aggregations_errors_total

Search aggregations operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103912 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum by (op,extended_mode)(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))


frontend: insights_aggregations_error_rate

Search aggregations operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103913 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum by (op,extended_mode)(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op,extended_mode)(increase(src_insights_aggregations_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op,extended_mode)(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100


Git Server

Stores, manages, and operates Git repositories.

To see this dashboard, visit /-/debug/grafana/d/gitserver/gitserver on your Sourcegraph instance.

gitserver: go_routines

Go routines

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100000 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: go_goroutines{app="gitserver", instance=~${shard:regex}}
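
Note that ${shard:regex} here (and in the other gitserver panels below) is a Grafana template variable driven by the dashboard's shard selector. When querying Prometheus directly, substitute a concrete regex, for example .* to match every shard:

go_goroutines{app="gitserver", instance=~".*"}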


gitserver: cpu_throttling_time

Container CPU throttling time %

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100010 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (container_label_io_kubernetes_pod_name) ((rate(container_cpu_cfs_throttled_periods_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~${shard:regex}}[5m]) / rate(container_cpu_cfs_periods_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~${shard:regex}}[5m])) * 100)


gitserver: cpu_usage_seconds

CPU usage seconds

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100011 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_cpu_usage_seconds_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~${shard:regex}}[5m]))


gitserver: disk_space_remaining

Disk space remaining by instance

Indicates disk space remaining for each gitserver instance, which is used to determine when to start evicting least-used repository clones from disk (default 10%, configured by SRC_REPOS_DESIRED_PERCENT_FREE).

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100020 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: (src_gitserver_disk_space_available / src_gitserver_disk_space_total) * 100
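
Given the default 10% eviction threshold mentioned above, it can be useful to spot instances that are approaching it before eviction starts. A hedged sketch (the 15% margin is illustrative, not a Sourcegraph default) that returns only instances with low headroom:

(src_gitserver_disk_space_available / src_gitserver_disk_space_total) * 100 < 15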


gitserver: io_reads_total

I/O reads total

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100030 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (container_label_io_kubernetes_container_name) (rate(container_fs_reads_total{container_label_io_kubernetes_container_name="gitserver"}[5m]))


gitserver: io_writes_total

I/O writes total

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100031 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (container_label_io_kubernetes_container_name) (rate(container_fs_writes_total{container_label_io_kubernetes_container_name="gitserver"}[5m]))


gitserver: io_reads

I/O reads

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100040 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_fs_reads_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~${shard:regex}}[5m]))


gitserver: io_writes

I/O writes

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100041 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_fs_writes_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~${shard:regex}}[5m]))


gitserver: io_read_througput

I/O read throughput

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100050 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_fs_reads_bytes_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~${shard:regex}}[5m]))


gitserver: io_write_throughput

I/O write throughput

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100051 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_fs_writes_bytes_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~${shard:regex}}[5m]))


gitserver: running_git_commands

Git commands running on each gitserver instance

A high value signals load.

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100060 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (instance, cmd) (src_gitserver_exec_running{instance=~${shard:regex}})


gitserver: git_commands_received

Rate of git commands received across all instances

Per-second rate per command across all instances

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100061 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (cmd) (rate(src_gitserver_exec_duration_seconds_count[5m]))


gitserver: repository_clone_queue_size

Repository clone queue size

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100070 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(src_gitserver_clone_queue)


gitserver: repository_existence_check_queue_size

Repository existence check queue size

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100071 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(src_gitserver_lsremote_queue)


gitserver: echo_command_duration_test

Echo test command duration

A high value here likely indicates a problem, especially if consistently high. You can query for individual commands using sum by (cmd)(src_gitserver_exec_running) in Grafana (/-/debug/grafana) to see if a specific Git Server command might be spiking in frequency.
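
For example, both of the following can be run in Grafana's Explore view; the first shows per-command concurrency as mentioned above, and the second (a sketch using the request-count metric that backs the "git commands received" panel below) shows per-command throughput:

sum by (cmd)(src_gitserver_exec_running)

sum by (cmd)(rate(src_gitserver_exec_duration_seconds_count[5m]))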

If this value is consistently high, consider the following:

  • Single container deployments: Upgrade to a Docker Compose deployment which offers better scalability and resource isolation.
  • Kubernetes and Docker Compose: Check that you are running a similar number of git server replicas and that their CPU/memory limits are allocated according to what is shown in the Sourcegraph resource estimator.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100080 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: max(src_gitserver_echo_duration_seconds)


gitserver: frontend_internal_api_error_responses

Frontend-internal API error responses every 5m by route

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100081 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (category)(increase(src_frontend_internal_request_duration_seconds_count{job="gitserver",code!~"2.."}[5m])) / ignoring(category) group_left sum(increase(src_frontend_internal_request_duration_seconds_count{job="gitserver"}[5m]))


gitserver: src_gitserver_repo_count

Number of repositories on gitserver

This metric is only for informational purposes. It indicates the total number of repositories on gitserver.

It does not indicate any problems with the instance.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100090 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: src_gitserver_repo_count


Git Server: Gitserver: Gitserver API (powered by internal/observation)

gitserver: gitserver_api_total

Aggregate graphql operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100100 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(increase(src_gitserver_api_total{job=~"^gitserver.*"}[5m]))


gitserver: gitserver_api_99th_percentile_duration

Aggregate successful graphql operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100101 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (le)(rate(src_gitserver_api_duration_seconds_bucket{job=~"^gitserver.*"}[5m]))


gitserver: gitserver_api_errors_total

Aggregate graphql operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100102 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(increase(src_gitserver_api_errors_total{job=~"^gitserver.*"}[5m]))


gitserver: gitserver_api_error_rate

Aggregate graphql operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100103 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(increase(src_gitserver_api_errors_total{job=~"^gitserver.*"}[5m])) / (sum(increase(src_gitserver_api_total{job=~"^gitserver.*"}[5m])) + sum(increase(src_gitserver_api_errors_total{job=~"^gitserver.*"}[5m]))) * 100


gitserver: gitserver_api_total

Graphql operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100110 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (op)(increase(src_gitserver_api_total{job=~"^gitserver.*"}[5m]))


gitserver: gitserver_api_99th_percentile_duration

99th percentile successful graphql operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100111 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_gitserver_api_duration_seconds_bucket{job=~"^gitserver.*"}[5m])))


gitserver: gitserver_api_errors_total

Graphql operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100112 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (op)(increase(src_gitserver_api_errors_total{job=~"^gitserver.*"}[5m]))


gitserver: gitserver_api_error_rate

Graphql operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100113 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (op)(increase(src_gitserver_api_errors_total{job=~"^gitserver.*"}[5m])) / (sum by (op)(increase(src_gitserver_api_total{job=~"^gitserver.*"}[5m])) + sum by (op)(increase(src_gitserver_api_errors_total{job=~"^gitserver.*"}[5m]))) * 100


Git Server: Global operation semaphores

gitserver: batch_log_semaphore_wait_99th_percentile_duration

Aggregate successful batch log semaphore operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100200 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (le)(rate(src_batch_log_semaphore_wait_duration_seconds_bucket{job=~"^gitserver.*"}[5m]))


Git Server: Gitservice for internal cloning

gitserver: aggregate_gitservice_request_duration

95th percentile gitservice request duration aggregate

A high value means any internal service trying to clone a repo from gitserver is slowed down.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100300 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: histogram_quantile(0.95, sum(rate(src_gitserver_gitservice_duration_seconds_bucket{type="gitserver", error="false"}[5m])) by (le))


gitserver: gitservice_request_duration

95th percentile gitservice request duration per shard

A high value means any internal service trying to clone a repo from gitserver is slowed down.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100301 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: histogram_quantile(0.95, sum(rate(src_gitserver_gitservice_duration_seconds_bucket{type="gitserver", error="false", instance=~${shard:regex}}[5m])) by (le, instance))


gitserver: aggregate_gitservice_error_request_duration

95th percentile gitservice error request duration aggregate

95th percentile gitservice error request duration aggregate

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100310 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: histogram_quantile(0.95, sum(rate(src_gitserver_gitservice_duration_seconds_bucket{type="gitserver", error="true"}[5m])) by (le))


gitserver: gitservice_request_duration

95th percentile gitservice error request duration per shard

95th percentile gitservice error request duration per shard

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100311 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: histogram_quantile(0.95, sum(rate(src_gitserver_gitservice_duration_seconds_bucket{type="gitserver", error="true", instance=~${shard:regex}}[5m])) by (le, instance))


gitserver: aggregate_gitservice_request_rate

Aggregate gitservice request rate

Aggregate gitservice request rate

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100320 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(rate(src_gitserver_gitservice_duration_seconds_count{type="gitserver", error="false"}[5m]))


gitserver: gitservice_request_rate

Gitservice request rate per shard

Per shard gitservice request rate

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100321 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(rate(src_gitserver_gitservice_duration_seconds_count{type="gitserver", error="false", instance=~${shard:regex}}[5m]))


gitserver: aggregate_gitservice_request_error_rate

Aggregate gitservice request error rate

Aggregate gitservice request error rate

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100330 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(rate(src_gitserver_gitservice_duration_seconds_count{type="gitserver", error="true"}[5m]))


gitserver: gitservice_request_error_rate

Gitservice request error rate per shard

Per shard gitservice request error rate

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100331 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(rate(src_gitserver_gitservice_duration_seconds_count{type="gitserver", error="true", instance=~${shard:regex}}[5m]))


gitserver: aggregate_gitservice_requests_running

Aggregate gitservice requests running

Aggregate gitservice requests running

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100340 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(src_gitserver_gitservice_running{type="gitserver"})


gitserver: gitservice_requests_running

Gitservice requests running per shard

Per shard gitservice requests running

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100341 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(src_gitserver_gitservice_running{type="gitserver", instance=~${shard:regex}}) by (instance)


Git Server: Gitserver cleanup jobs

gitserver: janitor_running

If the janitor process is running

1 if the janitor process is currently running

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100400 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: max by (instance) (src_gitserver_janitor_running)


gitserver: janitor_job_duration

95th percentile job run duration

95th percentile job run duration

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100410 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: histogram_quantile(0.95, sum(rate(src_gitserver_janitor_job_duration_seconds_bucket[5m])) by (le, job_name))


gitserver: janitor_job_failures

Failures over 5m (by job)

The rate of failures over 5m (by job)

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100420 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (job_name) (rate(src_gitserver_janitor_job_duration_seconds_count{success="false"}[5m]))


gitserver: repos_removed

Repositories removed due to disk pressure

Repositories removed due to disk pressure

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100430 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (instance) (rate(src_gitserver_repos_removed_disk_pressure[5m]))


gitserver: non_existent_repos_removed

Repositories removed because they are not defined in the DB

Repositories removed because they are not defined in the DB

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100440 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (instance) (increase(src_gitserver_non_existing_repos_removed[5m]))


gitserver: sg_maintenance_reason

Successful sg maintenance jobs over 1h (by reason)

The rate of successful sg maintenance jobs and the reason why they were triggered

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100450 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (reason) (rate(src_gitserver_maintenance_status{success="true"}[1h]))


gitserver: git_prune_skipped

Successful git prune jobs over 1h

The rate of successful git prune jobs over 1h and whether they were skipped

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100460 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (skipped) (rate(src_gitserver_prune_status{success="true"}[1h]))


gitserver: search_latency

Mean time until first result is sent

Mean latency (time to first result) of gitserver search requests

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100500 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: rate(src_gitserver_search_latency_seconds_sum[5m]) / rate(src_gitserver_search_latency_seconds_count[5m])
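
Means can hide tail latency. Assuming src_gitserver_search_latency_seconds is a Prometheus histogram that also exposes _bucket series (an assumption worth verifying on your instance), a p95 sketch would look like:

histogram_quantile(0.95, sum by (le)(rate(src_gitserver_search_latency_seconds_bucket[5m])))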


gitserver: search_duration

Mean search duration

Mean duration of gitserver search requests

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100501 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: rate(src_gitserver_search_duration_seconds_sum[5m]) / rate(src_gitserver_search_duration_seconds_count[5m])
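
Note that this query computes a mean per exported series. To derive a single fleet-wide mean search duration (an illustrative variant, not a built-in panel), sum the rate numerator and denominator before dividing:

Query: sum(rate(src_gitserver_search_duration_seconds_sum[5m])) / sum(rate(src_gitserver_search_duration_seconds_count[5m]))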


gitserver: search_rate

Rate of searches run by pod

The rate of searches executed on gitserver by pod

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100510 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: rate(src_gitserver_search_latency_seconds_count{instance=~"${shard:regex}"}[5m])


gitserver: running_searches

Number of searches currently running by pod

The number of searches currently executing on gitserver by pod

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100511 on your Sourcegraph instance.

Managed by the Sourcegraph Search team.

Technical details

Query: sum by (instance) (src_gitserver_search_running{instance=~"${shard:regex}"})


Git Server: Gitserver: Gitserver Client

gitserver: gitserver_client_total

Aggregate graphql operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100600 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(increase(src_gitserver_client_total{job=~"^*.*"}[5m]))


gitserver: gitserver_client_99th_percentile_duration

Aggregate successful graphql operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100601 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (le)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^*.*"}[5m]))


gitserver: gitserver_client_errors_total

Aggregate graphql operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100602 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(increase(src_gitserver_client_errors_total{job=~"^*.*"}[5m]))


gitserver: gitserver_client_error_rate

Aggregate graphql operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100603 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(increase(src_gitserver_client_errors_total{job=~"^*.*"}[5m])) / (sum(increase(src_gitserver_client_total{job=~"^*.*"}[5m])) + sum(increase(src_gitserver_client_errors_total{job=~"^*.*"}[5m]))) * 100


gitserver: gitserver_client_total

Graphql operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100610 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (op,scope)(increase(src_gitserver_client_total{job=~"^*.*"}[5m]))


gitserver: gitserver_client_99th_percentile_duration

99th percentile successful graphql operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100611 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op,scope)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^*.*"}[5m])))


gitserver: gitserver_client_errors_total

Graphql operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100612 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^*.*"}[5m]))


gitserver: gitserver_client_error_rate

Graphql operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100613 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^*.*"}[5m])) / (sum by (op,scope)(increase(src_gitserver_client_total{job=~"^*.*"}[5m])) + sum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^*.*"}[5m]))) * 100


Git Server: Repos disk I/O metrics

gitserver: repos_disk_reads_sec

Read request rate over 1m (per instance)

The number of read requests that were issued to the device per second.

Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100700 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: (max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~"node-exporter.*"}[1m])))))
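
If you need to see the underlying per-device load without the join onto the gitserver repos mount, the inner expression of the query above can be run on its own (illustrative):

Query: max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~"node-exporter.*"}[1m]))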


gitserver: repos_disk_writes_sec

Write request rate over 1m (per instance)

The number of write requests that were issued to the device per second.

Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100701 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: (max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~"node-exporter.*"}[1m])))))


gitserver: repos_disk_read_throughput

Read throughput over 1m (per instance)

The amount of data that was read from the device per second.

Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100710 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: (max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_bytes_total{instance=~"node-exporter.*"}[1m])))))


gitserver: repos_disk_write_throughput

Write throughput over 1m (per instance)

The amount of data that was written to the device per second.

Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100711 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: (max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_written_bytes_total{instance=~"node-exporter.*"}[1m])))))


gitserver: repos_disk_read_duration

Average read duration over 1m (per instance)

The average time for read requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.

Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100720 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: (((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_time_seconds_total{instance=~"node-exporter.*"}[1m])))))) / ((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~"node-exporter.*"}[1m])))))))
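
The query above divides the weighted read time by the number of completed reads, each joined onto the gitserver repos mount. As an illustrative simplification, the same average can be computed directly per device:

Query: rate(node_disk_read_time_seconds_total{instance=~"node-exporter.*"}[1m]) / rate(node_disk_reads_completed_total{instance=~"node-exporter.*"}[1m])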


gitserver: repos_disk_write_duration

Average write duration over 1m (per instance)

The average time for write requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.

Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100721 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: (((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_write_time_seconds_total{instance=~"node-exporter.*"}[1m])))))) / ((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~"node-exporter.*"}[1m])))))))


gitserver: repos_disk_read_request_size

Average read request size over 1m (per instance)

The average size of read requests that were issued to the device.

Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100730 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: (((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_bytes_total{instance=~"node-exporter.*"}[1m])))))) / ((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~"node-exporter.*"}[1m])))))))


gitserver: repos_disk_write_request_size

Average write request size over 1m (per instance)

The average size of write requests that were issued to the device.

Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100731 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: (((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_written_bytes_total{instance=~"node-exporter.*"}[1m])))))) / ((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~"node-exporter.*"}[1m])))))))


gitserver: repos_disk_reads_merged_sec

Merged read request rate over 1m (per instance)

The number of read requests merged per second that were queued to the device.

Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100740 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: (max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_merged_total{instance=~"node-exporter.*"}[1m])))))


gitserver: repos_disk_writes_merged_sec

Merged write request rate over 1m (per instance)

The number of write requests merged per second that were queued to the device.

Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100741 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: (max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_merged_total{instance=~"node-exporter.*"}[1m])))))


gitserver: repos_disk_average_queue_size

Average queue size over 1m (per instance)

The number of I/O operations that were being queued or being serviced. See https://blog.actorsfit.com/a?ID=00200-428fa2ac-e338-4540-848c-af9a3eb1ebd2 for background (avgqu-sz).

Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100750 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: (max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_io_time_weighted_seconds_total{instance=~"node-exporter.*"}[1m])))))


Git Server: Gitserver GRPC server metrics

gitserver: gitserver_grpc_request_rate_all_methods

Request rate across all methods over 2m

The number of gRPC requests received per second across all methods, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100800 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum(rate(grpc_server_started_total{instance=~"${shard:regex}",grpc_service=~"gitserver.v1.GitserverService"}[2m]))
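
To break the same request rate out per instance instead of aggregating across the fleet (an illustrative variant, not a built-in panel):

Query: sum by (instance) (rate(grpc_server_started_total{instance=~"${shard:regex}",grpc_service=~"gitserver.v1.GitserverService"}[2m]))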


gitserver: gitserver_grpc_request_rate_per_method

Request rate per-method over 2m

The number of gRPC requests received per second broken out per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100801 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum(rate(grpc_server_started_total{grpc_method=~"${gitserver_method:regex}",instance=~"${shard:regex}",grpc_service=~"gitserver.v1.GitserverService"}[2m])) by (grpc_method)


gitserver: gitserver_error_percentage_all_methods

Error percentage across all methods over 2m

The percentage of gRPC requests that fail across all methods, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100810 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (100.0 * ( (sum(rate(grpc_server_handled_total{grpc_code!="OK",instance=~"${shard:regex}",grpc_service=~"gitserver.v1.GitserverService"}[2m]))) / (sum(rate(grpc_server_handled_total{instance=~"${shard:regex}",grpc_service=~"gitserver.v1.GitserverService"}[2m]))) ))
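
When this percentage is elevated, an illustrative follow-up query is to break the failing requests out by response code to see which errors dominate:

Query: sum by (grpc_code) (rate(grpc_server_handled_total{grpc_code!="OK",instance=~"${shard:regex}",grpc_service=~"gitserver.v1.GitserverService"}[2m]))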


gitserver: gitserver_grpc_error_percentage_per_method

Error percentage per-method over 2m

The percentage of gRPC requests that fail per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100811 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (100.0 * ( (sum(rate(grpc_server_handled_total{grpc_method=~"${gitserver_method:regex}",grpc_code!="OK",instance=~"${shard:regex}",grpc_service=~"gitserver.v1.GitserverService"}[2m])) by (grpc_method)) / (sum(rate(grpc_server_handled_total{grpc_method=~"${gitserver_method:regex}",instance=~"${shard:regex}",grpc_service=~"gitserver.v1.GitserverService"}[2m])) by (grpc_method)) ))


gitserver: gitserver_p99_response_time_per_method

99th percentile response time per method over 2m

The 99th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100820 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.99, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~"${gitserver_method:regex}",instance=~"${shard:regex}",grpc_service=~"gitserver.v1.GitserverService"}[2m])))


gitserver: gitserver_p90_response_time_per_method

90th percentile response time per method over 2m

The 90th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100821 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~"${gitserver_method:regex}",instance=~"${shard:regex}",grpc_service=~"gitserver.v1.GitserverService"}[2m])))


gitserver: gitserver_p75_response_time_per_method

75th percentile response time per method over 2m

The 75th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100822 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~"${gitserver_method:regex}",instance=~"${shard:regex}",grpc_service=~"gitserver.v1.GitserverService"}[2m])))


gitserver: gitserver_p99_9_response_size_per_method

99.9th percentile total response size per method over 2m

The 99.9th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100830 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~"${gitserver_method:regex}",instance=~"${shard:regex}",grpc_service=~"gitserver.v1.GitserverService"}[2m])))


gitserver: gitserver_p90_response_size_per_method

90th percentile total response size per method over 2m

The 90th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100831 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~"${gitserver_method:regex}",instance=~"${shard:regex}",grpc_service=~"gitserver.v1.GitserverService"}[2m])))


gitserver: gitserver_p75_response_size_per_method

75th percentile total response size per method over 2m

The 75th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100832 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~"${gitserver_method:regex}",instance=~"${shard:regex}",grpc_service=~"gitserver.v1.GitserverService"}[2m])))


gitserver: gitserver_p99_9_invididual_sent_message_size_per_method

99.9th percentile individual sent message size per method over 2m

The 99.9th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100840 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~"${gitserver_method:regex}",instance=~"${shard:regex}",grpc_service=~"gitserver.v1.GitserverService"}[2m])))


gitserver: gitserver_p90_invididual_sent_message_size_per_method

90th percentile individual sent message size per method over 2m

The 90th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100841 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~"${gitserver_method:regex}",instance=~"${shard:regex}",grpc_service=~"gitserver.v1.GitserverService"}[2m])))


gitserver: gitserver_p75_invididual_sent_message_size_per_method

75th percentile individual sent message size per method over 2m

The 75th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100842 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~"${gitserver_method:regex}",instance=~"${shard:regex}",grpc_service=~"gitserver.v1.GitserverService"}[2m])))


gitserver: gitserver_grpc_response_stream_message_count_per_method

Average streaming response message count per-method over 2m

The average number of response messages sent during a streaming RPC method, broken out per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100850 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: ((sum(rate(grpc_server_msg_sent_total{grpc_type="server_stream",instance=~"${shard:regex}",grpc_service=~"gitserver.v1.GitserverService"}[2m])) by (grpc_method))/(sum(rate(grpc_server_started_total{grpc_type="server_stream",instance=~"${shard:regex}",grpc_service=~"gitserver.v1.GitserverService"}[2m])) by (grpc_method)))


gitserver: gitserver_grpc_all_codes_per_method

Response codes rate per-method over 2m

The rate of all generated gRPC response codes per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100860 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: sum(rate(grpc_server_handled_total{grpc_method=~"${gitserver_method:regex}",instance=~"${shard:regex}",grpc_service=~"gitserver.v1.GitserverService"}[2m])) by (grpc_method, grpc_code)


Git Server: Gitserver GRPC "internal error" metrics

gitserver: gitserver_grpc_clients_error_percentage_all_methods

Client baseline error percentage across all methods over 2m

The percentage of gRPC requests that fail across all methods (regardless of whether or not there was an internal error), aggregated across all "gitserver" clients.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100900 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",grpc_code!="OK"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService"}[2m])))))))
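
As an illustrative follow-up, the same client-side series can be broken out by response code to see which non-OK codes contribute to this percentage:

Query: sum by (grpc_code) (rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",grpc_code!="OK"}[2m]))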


gitserver: gitserver_grpc_clients_error_percentage_per_method

Client baseline error percentage per-method over 2m

The percentage of gRPC requests that fail per method (regardless of whether or not there was an internal error), aggregated across all "gitserver" clients.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100901 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",grpc_method=~"${gitserver_method:regex}",grpc_code!="OK"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",grpc_method=~"${gitserver_method:regex}"}[2m])) by (grpc_method))))))


gitserver: gitserver_grpc_clients_all_codes_per_method

Client baseline response codes rate per-method over 2m

The rate of all generated gRPC response codes per method (regardless of whether or not there was an internal error), aggregated across all "gitserver" clients.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100902 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",grpc_method=~"${gitserver_method:regex}"}[2m])) by (grpc_method, grpc_code))


gitserver: gitserver_grpc_clients_internal_error_percentage_all_methods

Client-observed gRPC internal error percentage across all methods over 2m

The percentage of gRPC requests that appear to fail due to gRPC internal errors across all methods, aggregated across all "gitserver" clients.

Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "gitserver" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.

Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100910 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",grpc_code!="OK",is_internal_error="true"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService"}[2m])))))))


gitserver: gitserver_grpc_clients_internal_error_percentage_per_method

Client-observed gRPC internal error percentage per-method over 2m

The percentage of gRPC requests that appear to fail due to gRPC internal errors per method, aggregated across all "gitserver" clients.

Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "gitserver" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.

Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100911 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",grpc_method=~"${gitserver_method:regex}",grpc_code!="OK",is_internal_error="true"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",grpc_method=~"${gitserver_method:regex}"}[2m])) by (grpc_method))))))


gitserver: gitserver_grpc_clients_internal_error_all_codes_per_method

Client-observed gRPC internal error response code rate per-method over 2m

The rate of gRPC internal-error response codes per method, aggregated across all "gitserver" clients.

Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "gitserver" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.

Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100912 on your Sourcegraph instance.

Managed by the Sourcegraph Search Core team.

Technical details

Query: (sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",is_internal_error="true",grpc_method=~"${gitserver_method:regex}"}[2m])) by (grpc_method, grpc_code))


Git Server: Site configuration client update latency

gitserver: gitserver_site_configuration_duration_since_last_successful_update_by_instance

Duration since last successful site configuration update (by instance)

The duration since the configuration client used by the "gitserver" service last successfully updated its site configuration. Long durations could indicate issues updating the site configuration.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101000 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: src_conf_client_time_since_last_successful_update_seconds{instance=~"${shard:regex}"}


gitserver: gitserver_site_configuration_duration_since_last_successful_update_by_instance

Maximum duration since last successful site configuration update (all "gitserver" instances)

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101001 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max(max_over_time(src_conf_client_time_since_last_successful_update_seconds{instance=~"${shard:regex}"}[1m]))
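
For ad-hoc checking, an expression of the following shape can flag stale configuration; the 300-second threshold here is purely illustrative, and the actual alert condition is defined in the alerts reference:

Query: max(max_over_time(src_conf_client_time_since_last_successful_update_seconds{instance=~"${shard:regex}"}[1m])) > 300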


Git Server: Codeintel: Coursier invocation stats

gitserver: codeintel_coursier_total

Aggregate invocations operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101100 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))


gitserver: codeintel_coursier_99th_percentile_duration

Aggregate successful invocations operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101101 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_coursier_duration_seconds_bucket{op!="RunCommand",job=~"^gitserver.*"}[5m]))


gitserver: codeintel_coursier_errors_total

Aggregate invocations operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101102 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))


gitserver: codeintel_coursier_error_rate

Aggregate invocations operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101103 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) / (sum(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) + sum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))) * 100


gitserver: codeintel_coursier_total

Invocations operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101110 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))


gitserver: codeintel_coursier_99th_percentile_duration

99th percentile successful invocations operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101111 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_coursier_duration_seconds_bucket{op!="RunCommand",job=~"^gitserver.*"}[5m])))


gitserver: codeintel_coursier_errors_total

Invocations operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101112 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))


gitserver: codeintel_coursier_error_rate

Invocations operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101113 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) / (sum by (op)(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) + sum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))) * 100


Git Server: Codeintel: npm invocation stats

gitserver: codeintel_npm_total

Aggregate invocations operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101200 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))


gitserver: codeintel_npm_99th_percentile_duration

Aggregate successful invocations operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101201 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_npm_duration_seconds_bucket{op!="RunCommand",job=~"^gitserver.*"}[5m]))


gitserver: codeintel_npm_errors_total

Aggregate invocations operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101202 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))


gitserver: codeintel_npm_error_rate

Aggregate invocations operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101203 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) / (sum(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) + sum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))) * 100


gitserver: codeintel_npm_total

Invocations operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101210 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))


gitserver: codeintel_npm_99th_percentile_duration

99th percentile successful invocations operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101211 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_npm_duration_seconds_bucket{op!="RunCommand",job=~"^gitserver.*"}[5m])))


gitserver: codeintel_npm_errors_total

Invocations operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101212 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))


gitserver: codeintel_npm_error_rate

Invocations operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101213 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) / (sum by (op)(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) + sum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))) * 100


Git Server: HTTP handlers

gitserver: healthy_request_rate

Requests per second, by route, when status code is 200

The number of healthy HTTP requests per second to the internal HTTP API

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101300 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (route) (rate(src_http_request_duration_seconds_count{app="gitserver",code=~"2.."}[5m]))


gitserver: unhealthy_request_rate

Requests per second, by route, when status code is not 200

The number of unhealthy HTTP requests per second to the internal HTTP API

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101301 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (route) (rate(src_http_request_duration_seconds_count{app="gitserver",code!~"2.."}[5m]))


gitserver: request_rate_by_code

Requests per second, by status code

The number of HTTP requests per second by code

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101302 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (code) (rate(src_http_request_duration_seconds_count{app="gitserver"}[5m]))
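
The healthy and unhealthy series above can also be combined into an overall error percentage (an illustrative query, not a built-in panel):

Query: sum(rate(src_http_request_duration_seconds_count{app="gitserver",code!~"2.."}[5m])) / sum(rate(src_http_request_duration_seconds_count{app="gitserver"}[5m])) * 100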


gitserver: 95th_percentile_healthy_requests

95th percentile duration by route, when status code is 200

The 95th percentile duration by route when the status code is 200

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101310 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: histogram_quantile(0.95, sum(rate(src_http_request_duration_seconds_bucket{app="gitserver",code=~"2.."}[5m])) by (le, route))


gitserver: 95th_percentile_unhealthy_requests

95th percentile duration by route, when status code is not 200

The 95th percentile duration by route when the status code is not 200

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101311 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: histogram_quantile(0.95, sum(rate(src_http_request_duration_seconds_bucket{app="gitserver",code!~"2.."}[5m])) by (le, route))


Git Server: Database connections

gitserver: max_open_conns

Maximum open

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101400 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="gitserver"})


gitserver: open_conns

Established

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101401 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_open{app_name="gitserver"})


gitserver: in_use

Used

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101410 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="gitserver"})


gitserver: idle

Idle

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101411 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="gitserver"})


gitserver: mean_blocked_seconds_per_conn_request

Mean blocked seconds per conn request

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101420 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="gitserver"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="gitserver"}[5m]))
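
As a worked example of reading this panel: if connections spent a combined 2 seconds blocked over the window (the increase of src_pgsql_conns_blocked_seconds) across 100 connection requests (the increase of src_pgsql_conns_waited_for), the mean blocked time is 2s / 100 = 20ms per request.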


gitserver: closed_max_idle

Closed by SetMaxIdleConns

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101430 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="gitserver"}[5m]))


gitserver: closed_max_lifetime

Closed by SetConnMaxLifetime

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101431 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="gitserver"}[5m]))


gitserver: closed_max_idle_time

Closed by SetConnMaxIdleTime

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101432 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="gitserver"}[5m]))


Git Server: Container monitoring (not available on server)

gitserver: container_missing

Container missing

This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.

  • Kubernetes:
    • Determine if the pod was OOM killed using kubectl describe pod gitserver (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p gitserver.
  • Docker Compose:
    • Determine if the pod was OOM killed using docker inspect -f '{{json .State}}' gitserver (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the gitserver container in docker-compose.yml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using docker logs gitserver (note this will include logs from the previous and currently running container).

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101500 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: count by(name) ((time() - container_last_seen{name=~"^gitserver.*"}) > 60)


gitserver: container_cpu_usage

Container cpu usage total (1m average) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101501 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: cadvisor_container_cpu_usage_percentage_total{name=~"^gitserver.*"}


gitserver: container_memory_usage

Container memory usage by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101502 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: cadvisor_container_memory_usage_percentage_total{name=~"^gitserver.*"}


gitserver: fs_io_operations

Filesystem reads and writes rate by instance over 1h

This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with gitserver issues.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101503 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by(name) (rate(container_fs_reads_total{name=~"^gitserver.*"}[1h]) + rate(container_fs_writes_total{name=~"^gitserver.*"}[1h]))


Git Server: Provisioning indicators (not available on server)

gitserver: provisioning_container_cpu_usage_long_term

Container cpu usage total (90th percentile over 1d) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101600 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^gitserver.*"}[1d])


gitserver: provisioning_container_memory_usage_long_term

Container memory usage (1d maximum) by instance

Git Server is expected to use up all the memory it is provided.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101601 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^gitserver.*"}[1d])


gitserver: provisioning_container_cpu_usage_short_term

Container cpu usage total (5m maximum) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101610 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^gitserver.*"}[5m])


gitserver: provisioning_container_memory_usage_short_term

Container memory usage (5m maximum) by instance

Git Server is expected to use up all the memory it is provided.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101611 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^gitserver.*"}[5m])


gitserver: container_oomkill_events_total

Container OOMKILL events total by instance

This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101612 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: max by (name) (container_oom_events_total{name=~"^gitserver.*"})


Git Server: Golang runtime monitoring

gitserver: go_goroutines

Maximum active goroutines

A high value here indicates a possible goroutine leak.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101700 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: max by(instance) (go_goroutines{job=~".*gitserver"})


gitserver: go_gc_duration_seconds

Maximum go garbage collection duration

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101701 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: max by(instance) (go_gc_duration_seconds{job=~".*gitserver"})


Git Server: Kubernetes monitoring (only available on Kubernetes)

gitserver: pods_available_percentage

Percentage of pods available

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101800 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by(app) (up{app=~".*gitserver"}) / count by (app) (up{app=~".*gitserver"}) * 100


Postgres

Postgres metrics, exported from postgres_exporter (not available on server).

To see this dashboard, visit /-/debug/grafana/d/postgres/postgres on your Sourcegraph instance.

postgres: connections

Active connections

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100000 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (job) (pg_stat_activity_count{datname!~"template.*|postgres|cloudsqladmin"}) OR sum by (job) (pg_stat_activity_count{job="codeinsights-db", datname!~"template.*|cloudsqladmin"})


postgres: usage_connections_percentage

Connections in use

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100001 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum(pg_stat_activity_count) by (job) / (sum(pg_settings_max_connections) by (job) - sum(pg_settings_superuser_reserved_connections) by (job)) * 100
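
As a worked example: assuming pg_settings_max_connections is 100 and pg_settings_superuser_reserved_connections is 3, then 80 active connections would report 80 / (100 - 3) * 100 ≈ 82.5%.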


postgres: transaction_durations

Maximum transaction durations

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100002 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (job) (pg_stat_activity_max_tx_duration{datname!~"template.*|postgres|cloudsqladmin",job!="codeintel-db"}) OR sum by (job) (pg_stat_activity_max_tx_duration{job="codeinsights-db", datname!~"template.*|cloudsqladmin"})


Postgres: Database and collector status

postgres: postgres_up

Database availability

A non-zero value indicates the database is online.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100100 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: pg_up


postgres: invalid_indexes

Invalid indexes (unusable by the query planner)

A non-zero value indicates that Postgres failed to build an index. Expect degraded performance until the index is manually rebuilt.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100101 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by (relname)(pg_invalid_index_count)


postgres: pg_exporter_err

Errors scraping postgres exporter

This value indicates issues retrieving metrics from postgres_exporter.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100110 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: pg_exporter_last_scrape_error


postgres: migration_in_progress

Active schema migration

A value of 0 indicates that no migration is in progress.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100111 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: pg_sg_migration_status


Postgres: Object size and bloat

postgres: pg_table_size

Table size

Total size of this table

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100200 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by (relname)(pg_table_bloat_size)


postgres: pg_table_bloat_ratio

Table bloat ratio

Estimated bloat ratio of this table (high bloat = high overhead)

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100201 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by (relname)(pg_table_bloat_ratio) * 100
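
To surface only problematic tables, the same expression can be filtered with a comparison; the 50% below is an arbitrary example threshold, not a recommended value:

  # Tables whose estimated bloat exceeds 50% (example threshold)
  max by (relname) (pg_table_bloat_ratio) * 100 > 50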


postgres: pg_index_size

Index size

Total size of this index

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100210 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by (relname)(pg_index_bloat_size)


postgres: pg_index_bloat_ratio

Index bloat ratio

Estimated bloat ratio of this index (high bloat = high overhead)

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100211 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by (relname)(pg_index_bloat_ratio) * 100


Postgres: Provisioning indicators (not available on server)

postgres: provisioning_container_cpu_usage_long_term

Container cpu usage total (90th percentile over 1d) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100300 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^(pgsql|codeintel-db|codeinsights).*"}[1d])


postgres: provisioning_container_memory_usage_long_term

Container memory usage (1d maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100301 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(pgsql|codeintel-db|codeinsights).*"}[1d])


postgres: provisioning_container_cpu_usage_short_term

Container cpu usage total (5m maximum) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100310 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^(pgsql|codeintel-db|codeinsights).*"}[5m])


postgres: provisioning_container_memory_usage_short_term

Container memory usage (5m maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100311 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(pgsql|codeintel-db|codeinsights).*"}[5m])


postgres: container_oomkill_events_total

Container OOMKILL events total by instance

This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. Frequent occurrences are an indicator of underprovisioning.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100312 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by (name) (container_oom_events_total{name=~"^(pgsql|codeintel-db|codeinsights).*"})


Postgres: Kubernetes monitoring (only available on Kubernetes)

postgres: pods_available_percentage

Percentage pods available

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100400 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by(app) (up{app=~".*(pgsql|codeintel-db|codeinsights)"}) / count by (app) (up{app=~".*(pgsql|codeintel-db|codeinsights)"}) * 100


Precise Code Intel Worker

Handles conversion of uploaded precise code intelligence bundles.

To see this dashboard, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker on your Sourcegraph instance.

Precise Code Intel Worker: Codeintel: LSIF uploads

precise-code-intel-worker: codeintel_upload_queue_size

Unprocessed upload record queue size

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100000 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max(src_codeintel_upload_total{job=~"^precise-code-intel-worker.*"})


precise-code-intel-worker: codeintel_upload_queue_growth_rate

Unprocessed upload record queue growth rate over 30m

This value compares the rate of enqueues against the rate of finished jobs.

- A value < 1 indicates that the process rate > enqueue rate
- A value = 1 indicates that the process rate = enqueue rate
- A value > 1 indicates that the process rate < enqueue rate

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100001 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_upload_total{job=~"^precise-code-intel-worker.*"}[30m])) / sum(increase(src_codeintel_upload_processor_total{job=~"^precise-code-intel-worker.*"}[30m]))
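
Read as enqueue rate divided by process rate, a value persistently above 1 means the queue is growing. A hypothetical alert expression built from the same query (threshold and window are illustrative):

  # Queue growing: enqueues outpaced completed jobs over the last 30m
  sum(increase(src_codeintel_upload_total{job=~"^precise-code-intel-worker.*"}[30m]))
    / sum(increase(src_codeintel_upload_processor_total{job=~"^precise-code-intel-worker.*"}[30m]))
    > 1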


precise-code-intel-worker: codeintel_upload_queued_max_age

Unprocessed upload record queue longest time in queue

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100002 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max(src_codeintel_upload_queued_duration_seconds_total{job=~"^precise-code-intel-worker.*"})


Precise Code Intel Worker: Codeintel: LSIF uploads

precise-code-intel-worker: codeintel_upload_handlers

Handler active handlers

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100100 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(src_codeintel_upload_processor_handlers{job=~"^precise-code-intel-worker.*"})


precise-code-intel-worker: codeintel_upload_processor_upload_size

Sum of upload sizes in bytes being processed by each precise-code-intel-worker instance

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100101 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by(instance) (src_codeintel_upload_processor_upload_size{job="precise-code-intel-worker"})


precise-code-intel-worker: codeintel_upload_processor_total

Handler operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100110 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_upload_processor_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_upload_processor_99th_percentile_duration

Aggregate successful handler operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100111 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_upload_processor_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_upload_processor_errors_total

Handler operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100112 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_upload_processor_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_upload_processor_error_rate

Handler operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100113 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_upload_processor_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_upload_processor_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_upload_processor_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
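
The denominator adds the error count to the operation count, so the result reads as errors as a share of all attempts (assuming the operation counter tracks successes): for example, 5 errors against 95 successful operations gives 5 / (95 + 5) * 100 = 5%. A hypothetical thresholded variant (5% is an example value, not the configured alert):

  # More than 5% of handler operations failing over 5m (example threshold)
  sum(increase(src_codeintel_upload_processor_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
    / (sum(increase(src_codeintel_upload_processor_total{job=~"^precise-code-intel-worker.*"}[5m]))
      + sum(increase(src_codeintel_upload_processor_errors_total{job=~"^precise-code-intel-worker.*"}[5m])))
    * 100 > 5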


Precise Code Intel Worker: Codeintel: dbstore stats

precise-code-intel-worker: codeintel_uploads_store_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100200 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_store_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploads_store_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100201 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploads_store_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100202 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploads_store_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100203 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_uploads_store_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100


precise-code-intel-worker: codeintel_uploads_store_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100210 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploads_store_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100211 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))
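
Other quantiles can be read from the same histogram by changing the first argument of histogram_quantile; for example, a median (p50) variant:

  # 50th percentile (median) over the same histogram buckets
  histogram_quantile(0.50, sum by (le,op)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))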


precise-code-intel-worker: codeintel_uploads_store_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100212 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploads_store_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100213 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100


Precise Code Intel Worker: Codeintel: lsifstore stats

precise-code-intel-worker: codeintel_uploads_lsifstore_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100300 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploads_lsifstore_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100301 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploads_lsifstore_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100302 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploads_lsifstore_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100303 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100


precise-code-intel-worker: codeintel_uploads_lsifstore_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100310 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploads_lsifstore_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100311 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))


precise-code-intel-worker: codeintel_uploads_lsifstore_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100312 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploads_lsifstore_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100313 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100


Precise Code Intel Worker: Workerutil: lsif_uploads dbworker/store stats

precise-code-intel-worker: workerutil_dbworker_store_codeintel_upload_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100400 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_codeintel_upload_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: workerutil_dbworker_store_codeintel_upload_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100401 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_workerutil_dbworker_store_codeintel_upload_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: workerutil_dbworker_store_codeintel_upload_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100402 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_codeintel_upload_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: workerutil_dbworker_store_codeintel_upload_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100403 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_codeintel_upload_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_codeintel_upload_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_workerutil_dbworker_store_codeintel_upload_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100


Precise Code Intel Worker: Codeintel: gitserver client

precise-code-intel-worker: codeintel_gitserver_total

Aggregate client operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100500 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_gitserver_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_gitserver_99th_percentile_duration

Aggregate successful client operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100501 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_gitserver_errors_total

Aggregate client operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100502 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_gitserver_error_rate

Aggregate client operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100503 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_gitserver_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100


precise-code-intel-worker: codeintel_gitserver_total

Client operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100510 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_gitserver_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_gitserver_99th_percentile_duration

99th percentile successful client operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100511 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))


precise-code-intel-worker: codeintel_gitserver_errors_total

Client operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100512 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_gitserver_error_rate

Client operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100513 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_gitserver_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100


Precise Code Intel Worker: Codeintel: uploadstore stats

precise-code-intel-worker: codeintel_uploadstore_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100600 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploadstore_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploadstore_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100601 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_uploadstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploadstore_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100602 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploadstore_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100603 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_uploadstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100


precise-code-intel-worker: codeintel_uploadstore_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100610 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploadstore_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploadstore_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100611 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploadstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))


precise-code-intel-worker: codeintel_uploadstore_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100612 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))


precise-code-intel-worker: codeintel_uploadstore_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100613 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploadstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100


Precise Code Intel Worker: Internal service requests

precise-code-intel-worker: frontend_internal_api_error_responses

Frontend-internal API error responses every 5m by route

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100700 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (category)(increase(src_frontend_internal_request_duration_seconds_count{job="precise-code-intel-worker",code!~"2.."}[5m])) / ignoring(category) group_left sum(increase(src_frontend_internal_request_duration_seconds_count{job="precise-code-intel-worker"}[5m]))
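
The division uses one-to-many vector matching: the numerator carries a category label, the denominator does not, so ignoring(category) drops that label for matching and group_left keeps one result per category. The same query, annotated:

  # Per-category counts of non-2xx internal API responses...
  sum by (category) (increase(src_frontend_internal_request_duration_seconds_count{job="precise-code-intel-worker",code!~"2.."}[5m]))
    # ...divided by the single category-less total (group_left keeps the many-sided series)
    / ignoring (category) group_left
    sum(increase(src_frontend_internal_request_duration_seconds_count{job="precise-code-intel-worker"}[5m]))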


Precise Code Intel Worker: Database connections

precise-code-intel-worker: max_open_conns

Maximum open

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100800 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="precise-code-intel-worker"})


precise-code-intel-worker: open_conns

Established

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100801 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_open{app_name="precise-code-intel-worker"})


precise-code-intel-worker: in_use

Used

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100810 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="precise-code-intel-worker"})


precise-code-intel-worker: idle

Idle

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100811 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="precise-code-intel-worker"})


precise-code-intel-worker: mean_blocked_seconds_per_conn_request

Mean blocked seconds per conn request

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100820 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="precise-code-intel-worker"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="precise-code-intel-worker"}[5m]))
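
Because both counters are increases over the same window, the quotient is the mean time each connection request spent blocked waiting for a connection. A hypothetical alert sketch (the 100ms threshold is an example, not the configured alert value):

  # Mean blocked time per connection request above 100ms (example threshold)
  sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="precise-code-intel-worker"}[5m]))
    / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="precise-code-intel-worker"}[5m]))
    > 0.1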


precise-code-intel-worker: closed_max_idle

Closed by SetMaxIdleConns

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100830 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="precise-code-intel-worker"}[5m]))


precise-code-intel-worker: closed_max_lifetime

Closed by SetConnMaxLifetime

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100831 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="precise-code-intel-worker"}[5m]))


precise-code-intel-worker: closed_max_idle_time

Closed by SetConnMaxIdleTime

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100832 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="precise-code-intel-worker"}[5m]))


Precise Code Intel Worker: Container monitoring (not available on server)

precise-code-intel-worker: container_missing

Container missing

This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.

  • Kubernetes:
    • Determine if the pod was OOM killed using kubectl describe pod precise-code-intel-worker (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p precise-code-intel-worker.
  • Docker Compose:
    • Determine if the pod was OOM killed using docker inspect -f '{{json .State}}' precise-code-intel-worker (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the precise-code-intel-worker container in docker-compose.yml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using docker logs precise-code-intel-worker (note this will include logs from the previous and currently running container).

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100900 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: count by(name) ((time() - container_last_seen{name=~"^precise-code-intel-worker.*"}) > 60)
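
The expression counts containers whose last-seen timestamp is more than 60 seconds old. Lengthening that window is one way to ignore brief restarts; a 5-minute variant (illustrative only):

  # Containers unseen for more than 5 minutes
  count by(name) ((time() - container_last_seen{name=~"^precise-code-intel-worker.*"}) > 300)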


precise-code-intel-worker: container_cpu_usage

Container cpu usage total (1m average) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100901 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: cadvisor_container_cpu_usage_percentage_total{name=~"^precise-code-intel-worker.*"}


precise-code-intel-worker: container_memory_usage

Container memory usage by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100902 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: cadvisor_container_memory_usage_percentage_total{name=~"^precise-code-intel-worker.*"}


precise-code-intel-worker: fs_io_operations

Filesystem reads and writes rate by instance over 1h

This value indicates the number of filesystem read and write operations performed by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with precise-code-intel-worker issues.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100903 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by(name) (rate(container_fs_reads_total{name=~"^precise-code-intel-worker.*"}[1h]) + rate(container_fs_writes_total{name=~"^precise-code-intel-worker.*"}[1h]))


Precise Code Intel Worker: Provisioning indicators (not available on server)

precise-code-intel-worker: provisioning_container_cpu_usage_long_term

Container cpu usage total (90th percentile over 1d) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101000 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^precise-code-intel-worker.*"}[1d])


precise-code-intel-worker: provisioning_container_memory_usage_long_term

Container memory usage (1d maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101001 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^precise-code-intel-worker.*"}[1d])


precise-code-intel-worker: provisioning_container_cpu_usage_short_term

Container cpu usage total (5m maximum) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101010 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^precise-code-intel-worker.*"}[5m])


precise-code-intel-worker: provisioning_container_memory_usage_short_term

Container memory usage (5m maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101011 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^precise-code-intel-worker.*"}[5m])


precise-code-intel-worker: container_oomkill_events_total

Container OOMKILL events total by instance

This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. Frequent occurrences are an indicator of underprovisioning.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101012 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max by (name) (container_oom_events_total{name=~"^precise-code-intel-worker.*"})


Precise Code Intel Worker: Golang runtime monitoring

precise-code-intel-worker: go_goroutines

Maximum active goroutines

A high value here indicates a possible goroutine leak.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101100 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max by(instance) (go_goroutines{job=~".*precise-code-intel-worker"})


precise-code-intel-worker: go_gc_duration_seconds

Maximum go garbage collection duration

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101101 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max by(instance) (go_gc_duration_seconds{job=~".*precise-code-intel-worker"})


Precise Code Intel Worker: Kubernetes monitoring (only available on Kubernetes)

precise-code-intel-worker: pods_available_percentage

Percentage pods available

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101200 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by(app) (up{app=~".*precise-code-intel-worker"}) / count by (app) (up{app=~".*precise-code-intel-worker"}) * 100
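
Since up is 1 for each healthy target and 0 otherwise, the quotient is the healthy fraction: with 2 of 3 pods up, the panel reads 66.7%. A hypothetical low-availability check (90% is an example threshold, not the configured alert):

  # Fewer than 90% of pods available (example threshold)
  sum by(app) (up{app=~".*precise-code-intel-worker"}) / count by (app) (up{app=~".*precise-code-intel-worker"}) * 100 < 90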


Redis

Metrics from both redis databases.

To see this dashboard, visit /-/debug/grafana/d/redis/redis on your Sourcegraph instance.

Redis: Redis Store

redis: redis-store_up

Redis-store availability

A value of 1 indicates the service is currently running.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100000 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: redis_up{app="redis-store"}


Redis: Redis Cache

redis: redis-cache_up

Redis-cache availability

A value of 1 indicates the service is currently running.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100100 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: redis_up{app="redis-cache"}


Redis: Provisioning indicators (not available on server)

redis: provisioning_container_cpu_usage_long_term

Container cpu usage total (90th percentile over 1d) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100200 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^redis-cache.*"}[1d])


redis: provisioning_container_memory_usage_long_term

Container memory usage (1d maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100201 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^redis-cache.*"}[1d])


redis: provisioning_container_cpu_usage_short_term

Container cpu usage total (5m maximum) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100210 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^redis-cache.*"}[5m])


redis: provisioning_container_memory_usage_short_term

Container memory usage (5m maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100211 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^redis-cache.*"}[5m])


redis: container_oomkill_events_total

Container OOMKILL events total by instance

This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. Frequent occurrences are an indicator of underprovisioning.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100212 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by (name) (container_oom_events_total{name=~"^redis-cache.*"})


Redis: Provisioning indicators (not available on server)

redis: provisioning_container_cpu_usage_long_term

Container cpu usage total (90th percentile over 1d) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100300 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^redis-store.*"}[1d])


redis: provisioning_container_memory_usage_long_term

Container memory usage (1d maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100301 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^redis-store.*"}[1d])


redis: provisioning_container_cpu_usage_short_term

Container cpu usage total (5m maximum) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100310 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^redis-store.*"}[5m])


redis: provisioning_container_memory_usage_short_term

Container memory usage (5m maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100311 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^redis-store.*"}[5m])


redis: container_oomkill_events_total

Container OOMKILL events total by instance

This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. Frequent occurrences are an indicator of underprovisioning.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100312 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max by (name) (container_oom_events_total{name=~"^redis-store.*"})


Redis: Kubernetes monitoring (only available on Kubernetes)

redis: pods_available_percentage

Percentage pods available

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100400 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by(app) (up{app=~".*redis-cache"}) / count by (app) (up{app=~".*redis-cache"}) * 100


Redis: Kubernetes monitoring (only available on Kubernetes)

redis: pods_available_percentage

Percentage pods available

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100500 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by(app) (up{app=~".*redis-store"}) / count by (app) (up{app=~".*redis-store"}) * 100


Worker

Manages background processes.

To see this dashboard, visit /-/debug/grafana/d/worker/worker on your Sourcegraph instance.

Worker: Active jobs

worker: worker_job_count

Number of worker instances running each job

The number of worker instances running each job type. Each job type must be managed by at least one worker instance.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100000 on your Sourcegraph instance.

Technical details

Query: sum by (job_name) (src_worker_jobs{job="worker"})
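
A hypothetical check for unmanaged job types follows; note that a job type with no running instances may be missing from the metric entirely, in which case a per-job absent()-style check would be needed instead:

  # Job types reporting zero running worker instances (a healthy deployment returns no results)
  sum by (job_name) (src_worker_jobs{job="worker"}) < 1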


worker: worker_job_codeintel-upload-janitor_count

Number of worker instances running the codeintel-upload-janitor job

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100010 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum (src_worker_jobs{job="worker", job_name="codeintel-upload-janitor"})


worker: worker_job_codeintel-commitgraph-updater_count

Number of worker instances running the codeintel-commitgraph-updater job

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100011 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum (src_worker_jobs{job="worker", job_name="codeintel-commitgraph-updater"})


worker: worker_job_codeintel-autoindexing-scheduler_count

Number of worker instances running the codeintel-autoindexing-scheduler job

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100012 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum (src_worker_jobs{job="worker", job_name="codeintel-autoindexing-scheduler"})


Worker: Database record encrypter

worker: records_encrypted_at_rest_percentage

Percentage of database records encrypted at rest

Percentage of encrypted database records

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100100 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: (max(src_records_encrypted_at_rest_total) by (tableName)) / ((max(src_records_encrypted_at_rest_total) by (tableName)) + (max(src_records_unencrypted_at_rest_total) by (tableName))) * 100


worker: records_encrypted_total

Database records encrypted every 5m

Number of encrypted database records every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100101 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (tableName)(increase(src_records_encrypted_total{job=~"^worker.*"}[5m]))


worker: records_decrypted_total

Database records decrypted every 5m

Number of decrypted database records every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100102 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (tableName)(increase(src_records_decrypted_total{job=~"^worker.*"}[5m]))


worker: record_encryption_errors_total

Encryption operation errors every 5m

Number of database record encryption/decryption errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100103 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(increase(src_record_encryption_errors_total{job=~"^worker.*"}[5m]))


Worker: Codeintel: Repository with stale commit graph

worker: codeintel_commit_graph_queue_size

Repository queue size

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100200 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max(src_codeintel_commit_graph_total{job=~"^worker.*"})


worker: codeintel_commit_graph_queue_growth_rate

Repository queue growth rate over 30m

This value compares the rate of enqueues against the rate of finished jobs.

- A value < 1 indicates that the process rate > enqueue rate
- A value = 1 indicates that the process rate = enqueue rate
- A value > 1 indicates that the process rate < enqueue rate

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100201 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_commit_graph_total{job=~"^worker.*"}[30m])) / sum(increase(src_codeintel_commit_graph_processor_total{job=~"^worker.*"}[30m]))


worker: codeintel_commit_graph_queued_max_age

Repository queue longest time in queue

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100202 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max(src_codeintel_commit_graph_queued_duration_seconds_total{job=~"^worker.*"})


Worker: Codeintel: Repository commit graph updates

worker: codeintel_commit_graph_processor_total

Update operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100300 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_commit_graph_processor_total{job=~"^worker.*"}[5m]))


worker: codeintel_commit_graph_processor_99th_percentile_duration

Aggregate successful update operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100301 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_commit_graph_processor_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: codeintel_commit_graph_processor_errors_total

Update operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100302 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_commit_graph_processor_errors_total{job=~"^worker.*"}[5m]))


worker: codeintel_commit_graph_processor_error_rate

Update operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100303 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_commit_graph_processor_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_commit_graph_processor_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_commit_graph_processor_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Codeintel: Dependency index job

worker: codeintel_dependency_index_queue_size

Dependency index job queue size

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100400 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max(src_codeintel_dependency_index_total{job=~"^worker.*"})


worker: codeintel_dependency_index_queue_growth_rate

Dependency index job queue growth rate over 30m

This value compares the rate of enqueues against the rate of finished jobs.

- A value < 1 indicates that the process rate > enqueue rate
- A value = 1 indicates that the process rate = enqueue rate
- A value > 1 indicates that the process rate < enqueue rate

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100401 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependency_index_total{job=~"^worker.*"}[30m])) / sum(increase(src_codeintel_dependency_index_processor_total{job=~"^worker.*"}[30m]))


worker: codeintel_dependency_index_queued_max_age

Dependency index job queue longest time in queue

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100402 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max(src_codeintel_dependency_index_queued_duration_seconds_total{job=~"^worker.*"})


Worker: Codeintel: Dependency index jobs

worker: codeintel_dependency_index_handlers

Active handlers

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100500 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(src_codeintel_dependency_index_processor_handlers{job=~"^worker.*"})


worker: codeintel_dependency_index_processor_total

Handler operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100510 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependency_index_processor_total{job=~"^worker.*"}[5m]))


worker: codeintel_dependency_index_processor_99th_percentile_duration

Aggregate successful handler operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100511 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_dependency_index_processor_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: codeintel_dependency_index_processor_errors_total

Handler operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100512 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependency_index_processor_errors_total{job=~"^worker.*"}[5m]))


worker: codeintel_dependency_index_processor_error_rate

Handler operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100513 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependency_index_processor_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_dependency_index_processor_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_dependency_index_processor_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Codeintel: Auto-index scheduler

worker: codeintel_autoindexing_total

Auto-indexing job scheduler operations every 10m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100600 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_autoindexing_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m]))


worker: codeintel_autoindexing_99th_percentile_duration

Aggregate successful auto-indexing job scheduler operation duration distribution over 10m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100601 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_autoindexing_duration_seconds_bucket{op='HandleIndexSchedule',job=~"^worker.*"}[10m]))


worker: codeintel_autoindexing_errors_total

Auto-indexing job scheduler operation errors every 10m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100602 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_autoindexing_errors_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m]))


worker: codeintel_autoindexing_error_rate

Auto-indexing job scheduler operation error rate over 10m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100603 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_autoindexing_errors_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m])) / (sum(increase(src_codeintel_autoindexing_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m])) + sum(increase(src_codeintel_autoindexing_errors_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m]))) * 100


Worker: Codeintel: dbstore stats

worker: codeintel_uploads_store_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100700 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_store_total{job=~"^worker.*"}[5m]))


worker: codeintel_uploads_store_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100701 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: codeintel_uploads_store_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100702 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m]))


worker: codeintel_uploads_store_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100703 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_uploads_store_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m]))) * 100


worker: codeintel_uploads_store_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100710 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^worker.*"}[5m]))


worker: codeintel_uploads_store_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100711 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))


worker: codeintel_uploads_store_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100712 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m]))


worker: codeintel_uploads_store_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100713 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Codeintel: lsifstore stats

worker: codeintel_uploads_lsifstore_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100800 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^worker.*"}[5m]))


worker: codeintel_uploads_lsifstore_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100801 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: codeintel_uploads_lsifstore_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100802 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m]))


worker: codeintel_uploads_lsifstore_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100803 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m]))) * 100


worker: codeintel_uploads_lsifstore_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100810 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^worker.*"}[5m]))


worker: codeintel_uploads_lsifstore_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100811 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^worker.*"}[5m])))


worker: codeintel_uploads_lsifstore_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100812 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m]))


worker: codeintel_uploads_lsifstore_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100813 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Workerutil: lsif_dependency_indexes dbworker/store stats

worker: workerutil_dbworker_store_codeintel_dependency_index_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100900 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_codeintel_dependency_index_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100901 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_workerutil_dbworker_store_codeintel_dependency_index_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_codeintel_dependency_index_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100902 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_errors_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_codeintel_dependency_index_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100903 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_total{job=~"^worker.*"}[5m])) + sum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Codeintel: gitserver client

worker: codeintel_gitserver_total

Aggregate client operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101000 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_gitserver_total{job=~"^worker.*"}[5m]))


worker: codeintel_gitserver_99th_percentile_duration

Aggregate successful client operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101001 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: codeintel_gitserver_errors_total

Aggregate client operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101002 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m]))


worker: codeintel_gitserver_error_rate

Aggregate client operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101003 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_gitserver_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m]))) * 100


worker: codeintel_gitserver_total

Client operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101010 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_gitserver_total{job=~"^worker.*"}[5m]))


worker: codeintel_gitserver_99th_percentile_duration

99th percentile successful client operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101011 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^worker.*"}[5m])))


worker: codeintel_gitserver_errors_total

Client operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101012 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m]))


worker: codeintel_gitserver_error_rate

Client operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101013 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_gitserver_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Codeintel: Dependency repository insert

worker: codeintel_dependency_repos_total

Aggregate insert operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101100 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependency_repos_total{job=~"^worker.*"}[5m]))


worker: codeintel_dependency_repos_99th_percentile_duration

Aggregate successful insert operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101101 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (le)(rate(src_codeintel_dependency_repos_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: codeintel_dependency_repos_errors_total

Aggregate insert operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101102 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m]))


worker: codeintel_dependency_repos_error_rate

Aggregate insert operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101103 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_dependency_repos_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m]))) * 100


worker: codeintel_dependency_repos_total

Insert operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101110 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (scheme,new)(increase(src_codeintel_dependency_repos_total{job=~"^worker.*"}[5m]))


worker: codeintel_dependency_repos_99th_percentile_duration

99th percentile successful insert operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101111 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,scheme,new)(rate(src_codeintel_dependency_repos_duration_seconds_bucket{job=~"^worker.*"}[5m])))


worker: codeintel_dependency_repos_errors_total

Insert operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101112 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (scheme,new)(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m]))


worker: codeintel_dependency_repos_error_rate

Insert operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101113 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (scheme,new)(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m])) / (sum by (scheme,new)(increase(src_codeintel_dependency_repos_total{job=~"^worker.*"}[5m])) + sum by (scheme,new)(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Gitserver: Gitserver Client

worker: gitserver_client_total

Aggregate client operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101200 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(increase(src_gitserver_client_total{job=~"^worker.*"}[5m]))


worker: gitserver_client_99th_percentile_duration

Aggregate successful client operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101201 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (le)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: gitserver_client_errors_total

Aggregate client operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101202 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(increase(src_gitserver_client_errors_total{job=~"^worker.*"}[5m]))


worker: gitserver_client_error_rate

Aggregate client operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101203 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum(increase(src_gitserver_client_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_gitserver_client_total{job=~"^worker.*"}[5m])) + sum(increase(src_gitserver_client_errors_total{job=~"^worker.*"}[5m]))) * 100


worker: gitserver_client_total

Client operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101210 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (op,scope)(increase(src_gitserver_client_total{job=~"^worker.*"}[5m]))


worker: gitserver_client_99th_percentile_duration

99th percentile successful client operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101211 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op,scope)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^worker.*"}[5m])))


worker: gitserver_client_errors_total

Client operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101212 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^worker.*"}[5m]))


worker: gitserver_client_error_rate

Client operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101213 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: sum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^worker.*"}[5m])) / (sum by (op,scope)(increase(src_gitserver_client_total{job=~"^worker.*"}[5m])) + sum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Batches: dbstore stats

worker: batches_dbstore_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101300 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_dbstore_total{job=~"^worker.*"}[5m]))


worker: batches_dbstore_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101301 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (le)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: batches_dbstore_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101302 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m]))


worker: batches_dbstore_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101303 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_batches_dbstore_total{job=~"^worker.*"}[5m])) + sum(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m]))) * 100


worker: batches_dbstore_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101310 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_dbstore_total{job=~"^worker.*"}[5m]))


worker: batches_dbstore_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101311 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^worker.*"}[5m])))


worker: batches_dbstore_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101312 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m]))


worker: batches_dbstore_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101313 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_batches_dbstore_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Batches: service stats

worker: batches_service_total

Aggregate service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101400 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_service_total{job=~"^worker.*"}[5m]))


worker: batches_service_99th_percentile_duration

Aggregate successful service operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101401 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (le)(rate(src_batches_service_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: batches_service_errors_total

Aggregate service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101402 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m]))


worker: batches_service_error_rate

Aggregate service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101403 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_batches_service_total{job=~"^worker.*"}[5m])) + sum(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m]))) * 100


worker: batches_service_total

Service operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101410 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_service_total{job=~"^worker.*"}[5m]))


worker: batches_service_99th_percentile_duration

99th percentile successful service operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101411 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_service_duration_seconds_bucket{job=~"^worker.*"}[5m])))


worker: batches_service_errors_total

Service operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101412 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m]))


worker: batches_service_error_rate

Service operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101413 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_batches_service_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Batches: Workspace resolver dbstore

worker: workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101500 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101501 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))


worker: workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101502 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_errors_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101503 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Batches: Bulk operation processor dbstore

worker: workerutil_dbworker_store_batches_bulk_worker_store_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101600 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batches_bulk_worker_store_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_batches_bulk_worker_store_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101601 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_batches_bulk_worker_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))


worker: workerutil_dbworker_store_batches_bulk_worker_store_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101602 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batches_bulk_worker_store_errors_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_batches_bulk_worker_store_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101603 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batches_bulk_worker_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_batches_bulk_worker_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_batches_bulk_worker_store_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Batches: Changeset reconciler dbstore

worker: workerutil_dbworker_store_batches_reconciler_worker_store_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101700 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batches_reconciler_worker_store_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_batches_reconciler_worker_store_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101701 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_batches_reconciler_worker_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))


worker: workerutil_dbworker_store_batches_reconciler_worker_store_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101702 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batches_reconciler_worker_store_errors_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_batches_reconciler_worker_store_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101703 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batches_reconciler_worker_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_batches_reconciler_worker_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_batches_reconciler_worker_store_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Batches: Workspace execution dbstore

worker: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101800 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101801 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))


worker: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101802 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101803 on your Sourcegraph instance.

Managed by the Sourcegraph Batch Changes team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Batches: Executor jobs

worker: executor_queue_size

Unprocessed executor job queue size

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101900 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max by (queue)(src_executor_total{queue=~"batches",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"})
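
The job regex in this query is deliberately broad: executor metrics can be reported under several job names depending on how executors are deployed (standalone executors, code-intel indexers, frontend, worker), and the panel matches any of them.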


worker: executor_queue_growth_rate

Unprocessed executor job queue growth rate over 30m

This value compares the rate of enqueues against the rate of finished jobs for the selected queue.

- A value < 1 indicates that the process rate > enqueue rate
- A value = 1 indicates that the process rate = enqueue rate
- A value > 1 indicates that the process rate < enqueue rate
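
To eyeball this ratio for a single queue directly in Prometheus, a minimal sketch (using the same metrics as the panel query below, restricted to the batches queue) is:

  sum(increase(src_executor_total{queue="batches"}[30m])) / sum(increase(src_executor_processor_total{queue="batches"}[30m]))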

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101901 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (queue)(increase(src_executor_total{queue=~"batches",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"}[30m])) / sum by (queue)(increase(src_executor_processor_total{queue=~"batches",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"}[30m]))


worker: executor_queued_max_age

Unprocessed executor job queue longest time in queue

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101902 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max by (queue)(src_executor_queued_duration_seconds_total{queue=~"batches",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"})


Worker: Codeintel: lsif_upload record resetter

worker: codeintel_background_upload_record_resets_total

LSIF upload records reset to queued state every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102000 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_background_upload_record_resets_total{job=~"^worker.*"}[5m]))


worker: codeintel_background_upload_record_reset_failures_total

LSIF upload records reset to errored state every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102001 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_background_upload_record_reset_failures_total{job=~"^worker.*"}[5m]))


worker: codeintel_background_upload_record_reset_errors_total

LSIF upload operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102002 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_background_upload_record_reset_errors_total{job=~"^worker.*"}[5m]))


Worker: Codeintel: lsif_index record resetter

worker: codeintel_background_index_record_resets_total

LSIF index records reset to queued state every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102100 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_background_index_record_resets_total{job=~"^worker.*"}[5m]))


worker: codeintel_background_index_record_reset_failures_total

LSIF index records reset to errored state every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102101 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_background_index_record_reset_failures_total{job=~"^worker.*"}[5m]))


worker: codeintel_background_index_record_reset_errors_total

LSIF index operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102102 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_background_index_record_reset_errors_total{job=~"^worker.*"}[5m]))


Worker: Codeintel: lsif_dependency_index record resetter

worker: codeintel_background_dependency_index_record_resets_total

LSIF dependency index records reset to queued state every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102200 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_background_dependency_index_record_resets_total{job=~"^worker.*"}[5m]))


worker: codeintel_background_dependency_index_record_reset_failures_total

LSIF dependency index records reset to errored state every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102201 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_background_dependency_index_record_reset_failures_total{job=~"^worker.*"}[5m]))


worker: codeintel_background_dependency_index_record_reset_errors_total

LSIF dependency index operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102202 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum(increase(src_codeintel_background_dependency_index_record_reset_errors_total{job=~"^worker.*"}[5m]))


Worker: Codeinsights: Query Runner Queue

worker: query_runner_worker_queue_size

Code insights query runner queue size

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102300 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: max(src_query_runner_worker_total{job=~"^worker.*"})


worker: query_runner_worker_queue_growth_rate

Code insights query runner queue growth rate over 30m

This value compares the rate of enqueues against the rate of finished jobs.

- A value < 1 indicates that the process rate > enqueue rate
- A value = 1 indicates that the process rate = enqueue rate
- A value > 1 indicates that the process rate < enqueue rate
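
For instance, a sustained value of 0.5 here would mean insights queries finish about twice as fast as new ones are enqueued, so the queue should be draining; a value persistently above 1 means it is growing.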

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102301 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_query_runner_worker_total{job=~"^worker.*"}[30m])) / sum(increase(src_query_runner_worker_processor_total{job=~"^worker.*"}[30m]))


Worker: Codeinsights: insights queue processor

worker: query_runner_worker_handlers

Active handlers

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102400 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(src_query_runner_worker_processor_handlers{job=~"^worker.*"})


worker: query_runner_worker_processor_total

Handler operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102410 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_query_runner_worker_processor_total{job=~"^worker.*"}[5m]))


worker: query_runner_worker_processor_99th_percentile_duration

Aggregate successful handler operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102411 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum by (le)(rate(src_query_runner_worker_processor_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: query_runner_worker_processor_errors_total

Handler operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102412 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_query_runner_worker_processor_errors_total{job=~"^worker.*"}[5m]))


worker: query_runner_worker_processor_error_rate

Handler operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102413 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_query_runner_worker_processor_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_query_runner_worker_processor_total{job=~"^worker.*"}[5m])) + sum(increase(src_query_runner_worker_processor_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Codeinsights: code insights query runner queue record resetter

worker: query_runner_worker_record_resets_total

Insights query runner queue records reset to queued state every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102500 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_query_runner_worker_record_resets_total{job=~"^worker.*"}[5m]))


worker: query_runner_worker_record_reset_failures_total

Insights query runner queue records reset to errored state every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102501 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_query_runner_worker_record_reset_failures_total{job=~"^worker.*"}[5m]))


worker: query_runner_worker_record_reset_errors_total

Insights query runner queue operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102502 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_query_runner_worker_record_reset_errors_total{job=~"^worker.*"}[5m]))


Worker: Codeinsights: dbstore stats

worker: workerutil_dbworker_store_insights_query_runner_jobs_store_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102600 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_insights_query_runner_jobs_store_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102601 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum by (le)(rate(src_workerutil_dbworker_store_insights_query_runner_jobs_store_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102602 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_insights_query_runner_jobs_store_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102603 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*"}[5m])) + sum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m]))) * 100


worker: workerutil_dbworker_store_insights_query_runner_jobs_store_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102610 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_insights_query_runner_jobs_store_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102611 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_insights_query_runner_jobs_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))


worker: workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102612 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_insights_query_runner_jobs_store_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102613 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Code Insights queue utilization

worker: insights_queue_unutilized_size

Insights queue size that is not utilized (not processing)

Any value on this panel indicates that Code Insights is not processing queries from its queue. This observable and its alert fire only if there are records in the queue and there have been no dequeue attempts for 30 minutes.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102700 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: max(src_query_runner_worker_total{job=~"^worker.*"}) > 0 and on(job) sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*",op="Dequeue"}[5m])) < 1
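
In PromQL terms, the and on(job) in this query intersects two conditions: the left-hand side returns the queue size only when it is greater than zero, and the right-hand side matches only when fewer than one Dequeue operation was recorded over the last 5m, so the panel only renders a value in the "records queued but nothing dequeuing" state described above.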


Worker: Internal service requests

worker: frontend_internal_api_error_responses

Frontend-internal API error responses every 5m by route

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102800 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (category)(increase(src_frontend_internal_request_duration_seconds_count{job="worker",code!~"2.."}[5m])) / ignoring(category) group_left sum(increase(src_frontend_internal_request_duration_seconds_count{job="worker"}[5m]))


Worker: Database connections

worker: max_open_conns

Maximum open

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102900 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="worker"})


worker: open_conns

Established

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102901 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_open{app_name="worker"})


worker: in_use

Used

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102910 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="worker"})


worker: idle

Idle

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102911 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="worker"})


worker: mean_blocked_seconds_per_conn_request

Mean blocked seconds per conn request

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102920 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="worker"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="worker"}[5m]))
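
As a worked example of this ratio: if worker connections spent a combined 2 seconds blocked while 100 connection requests waited during the 5m window, the panel would show 2 / 100 = 0.02 seconds per request.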


worker: closed_max_idle

Closed by SetMaxIdleConns

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102930 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="worker"}[5m]))


worker: closed_max_lifetime

Closed by SetConnMaxLifetime

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102931 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="worker"}[5m]))


worker: closed_max_idle_time

Closed by SetConnMaxIdleTime

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102932 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="worker"}[5m]))


Worker: Container monitoring (not available on server)

worker: container_missing

Container missing

This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.

  • Kubernetes:
    • Determine if the pod was OOM killed using kubectl describe pod worker (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p worker.
  • Docker Compose:
    • Determine if the pod was OOM killed using docker inspect -f '{{json .State}}' worker (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the worker container in docker-compose.yml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using docker logs worker (note this will include logs from the previous and currently running container).

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103000 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: count by(name) ((time() - container_last_seen{name=~"^worker.*"}) > 60)
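
If this panel is non-zero, you can inspect per-container staleness directly. A minimal sketch using the same cAdvisor metric:

    # Seconds since each worker container was last seen
    time() - container_last_seen{name=~"^worker.*"}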


worker: container_cpu_usage

Container cpu usage total (1m average) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103001 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: cadvisor_container_cpu_usage_percentage_total{name=~"^worker.*"}


worker: container_memory_usage

Container memory usage by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103002 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: cadvisor_container_memory_usage_percentage_total{name=~"^worker.*"}


worker: fs_io_operations

Filesystem reads and writes rate by instance over 1h

This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with worker issues.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103003 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by(name) (rate(container_fs_reads_total{name=~"^worker.*"}[1h]) + rate(container_fs_writes_total{name=~"^worker.*"}[1h]))


Worker: Provisioning indicators (not available on server)

worker: provisioning_container_cpu_usage_long_term

Container cpu usage total (90th percentile over 1d) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103100 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^worker.*"}[1d])


worker: provisioning_container_memory_usage_long_term

Container memory usage (1d maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103101 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^worker.*"}[1d])


worker: provisioning_container_cpu_usage_short_term

Container cpu usage total (5m maximum) across all cores by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103110 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^worker.*"}[5m])


worker: provisioning_container_memory_usage_short_term

Container memory usage (5m maximum) by instance

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103111 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^worker.*"}[5m])


worker: container_oomkill_events_total

Container OOMKILL events total by instance

This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. Frequent occurrences are an indicator of underprovisioning.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103112 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max by (name) (container_oom_events_total{name=~"^worker.*"})


Worker: Golang runtime monitoring

worker: go_goroutines

Maximum active goroutines

A high value here indicates a possible goroutine leak.

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103200 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max by(instance) (go_goroutines{job=~".*worker"})


worker: go_gc_duration_seconds

Maximum go garbage collection duration

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103201 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: max by(instance) (go_gc_duration_seconds{job=~".*worker"})


Worker: Kubernetes monitoring (only available on Kubernetes)

worker: pods_available_percentage

Percentage pods available

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103300 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by(app) (up{app=~".*worker"}) / count by (app) (up{app=~".*worker"}) * 100
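
As a worked example: with two of three worker pods reporting up, the expression evaluates to 2 / 3 * 100, or roughly 66.7%.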


Worker: Own: repo indexer dbstore

worker: workerutil_dbworker_store_own_background_worker_store_total

Aggregate store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103400 on your Sourcegraph instance.

Managed by the Sourcegraph Own team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_own_background_worker_store_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_own_background_worker_store_99th_percentile_duration

Aggregate successful store operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103401 on your Sourcegraph instance.

Managed by the Sourcegraph Own team.

Technical details

Query: sum by (le)(rate(src_workerutil_dbworker_store_own_background_worker_store_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_own_background_worker_store_errors_total

Aggregate store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103402 on your Sourcegraph instance.

Managed by the Sourcegraph Own team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_own_background_worker_store_errors_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_own_background_worker_store_error_rate

Aggregate store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103403 on your Sourcegraph instance.

Managed by the Sourcegraph Own team.

Technical details

Query: sum(increase(src_workerutil_dbworker_store_own_background_worker_store_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_own_background_worker_store_total{job=~"^worker.*"}[5m])) + sum(increase(src_workerutil_dbworker_store_own_background_worker_store_errors_total{job=~"^worker.*"}[5m]))) * 100


worker: workerutil_dbworker_store_own_background_worker_store_total

Store operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103410 on your Sourcegraph instance.

Managed by the Sourcegraph Own team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_own_background_worker_store_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_own_background_worker_store_99th_percentile_duration

99th percentile successful store operation duration over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103411 on your Sourcegraph instance.

Managed by the Sourcegraph Own team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_own_background_worker_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))


worker: workerutil_dbworker_store_own_background_worker_store_errors_total

Store operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103412 on your Sourcegraph instance.

Managed by the Sourcegraph Own team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_own_background_worker_store_errors_total{job=~"^worker.*"}[5m]))


worker: workerutil_dbworker_store_own_background_worker_store_error_rate

Store operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103413 on your Sourcegraph instance.

Managed by the Sourcegraph Own team.

Technical details

Query: sum by (op)(increase(src_workerutil_dbworker_store_own_background_worker_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_own_background_worker_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_own_background_worker_store_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Own: repo indexer worker queue

worker: own_background_worker_handlers

Handler active handlers

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103500 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(src_own_background_worker_processor_handlers{job=~"^worker.*"})


worker: own_background_worker_processor_total

Handler operations every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103510 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_own_background_worker_processor_total{job=~"^worker.*"}[5m]))


worker: own_background_worker_processor_99th_percentile_duration

Aggregate successful handler operation duration distribution over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103511 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum by (le)(rate(src_own_background_worker_processor_duration_seconds_bucket{job=~"^worker.*"}[5m]))


worker: own_background_worker_processor_errors_total

Handler operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103512 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_own_background_worker_processor_errors_total{job=~"^worker.*"}[5m]))


worker: own_background_worker_processor_error_rate

Handler operation error rate over 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103513 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_own_background_worker_processor_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_own_background_worker_processor_total{job=~"^worker.*"}[5m])) + sum(increase(src_own_background_worker_processor_errors_total{job=~"^worker.*"}[5m]))) * 100


Worker: Own: own repo indexer record resetter

worker: own_background_worker_record_resets_total

Own repo indexer queue records reset to queued state every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103600 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_own_background_worker_record_resets_total{job=~"^worker.*"}[5m]))


worker: own_background_worker_record_reset_failures_total

Own repo indexer queue records reset to errored state every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103601 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_own_background_worker_record_reset_failures_total{job=~"^worker.*"}[5m]))


worker: own_background_worker_record_reset_errors_total

Own repo indexer queue operation errors every 5m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103602 on your Sourcegraph instance.

Managed by the Sourcegraph Code Insights team.

Technical details

Query: sum(increase(src_own_background_worker_record_reset_errors_total{job=~"^worker.*"}[5m]))


Worker: Own: index job scheduler

worker: own_background_index_scheduler_total

Own index job scheduler operations every 10m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103700 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_own_background_index_scheduler_total{job=~"^worker.*"}[10m]))


worker: own_background_index_scheduler_99th_percentile_duration

99th percentile successful own index job scheduler operation duration over 10m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103701 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: histogram_quantile(0.99, sum by (le,op)(rate(src_own_background_index_scheduler_duration_seconds_bucket{job=~"^worker.*"}[10m])))


worker: own_background_index_scheduler_errors_total

Own index job scheduler operation errors every 10m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103702 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_own_background_index_scheduler_errors_total{job=~"^worker.*"}[10m]))


worker: own_background_index_scheduler_error_rate

Own index job scheduler operation error rate over 10m

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103703 on your Sourcegraph instance.

Managed by the Sourcegraph Code intelligence team.

Technical details

Query: sum by (op)(increase(src_own_background_index_scheduler_errors_total{job=~"^worker.*"}[10m])) / (sum by (op)(increase(src_own_background_index_scheduler_total{job=~"^worker.*"}[10m])) + sum by (op)(increase(src_own_background_index_scheduler_errors_total{job=~"^worker.*"}[10m]))) * 100


Worker: Site configuration client update latency

worker: worker_site_configuration_duration_since_last_successful_update_by_instance

Duration since last successful site configuration update (by instance)

The duration since the configuration client used by the "worker" service last successfully updated its site configuration. Long durations could indicate issues updating the site configuration.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103800 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: src_conf_client_time_since_last_successful_update_seconds{instance=~"${instance:regex}"}
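
The ${instance:regex} fragment is a Grafana dashboard variable that is substituted when the panel renders; to run the query directly against Prometheus, supply a concrete matcher yourself. A minimal sketch with a hypothetical instance pattern:

    src_conf_client_time_since_last_successful_update_seconds{instance=~"worker-.*"}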


worker: worker_site_configuration_duration_since_last_successful_update_by_instance

Maximum duration since last successful site configuration update (all "worker" instances)

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103801 on your Sourcegraph instance.

Managed by the Sourcegraph Cloud DevOps team.

Technical details

Query: max(max_over_time(src_conf_client_time_since_last_successful_update_seconds{instance=~"${instance:regex}"}[1m]))


Repo Updater

Manages interaction with code hosts and instructs Gitserver to update repositories.

To see this dashboard, visit /-/debug/grafana/d/repo-updater/repo-updater on your Sourcegraph instance.

Repo Updater: Repositories

repo-updater: syncer_sync_last_time

Time since last sync

A high value here indicates issues synchronizing repo metadata. If the value is persistently high, make sure all external services have valid tokens.

This panel has no related alerts.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100000 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: max(timestamp(vector(time()))) - max(src_repoupdater_syncer_sync_last_time)
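
Because max(timestamp(vector(time()))) evaluates to the current time, this expression reduces to the number of seconds since the most recent recorded sync. An equivalent, simpler sketch:

    time() - max(src_repoupdater_syncer_sync_last_time)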


repo-updater: src_repoupdater_max_sync_backoff

Time since oldest sync

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100001 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: max(src_repoupdater_max_sync_backoff)


repo-updater: src_repoupdater_syncer_sync_errors_total

Site level external service sync error rate

Refer to the alerts reference for 2 alerts related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100002 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: max by (family) (rate(src_repoupdater_syncer_sync_errors_total{owner!="user",reason!="invalid_npm_path",reason!="internal_rate_limit"}[5m]))


repo-updater: syncer_sync_start

Repo metadata sync was started

Refer to the alerts reference for 1 alert related to this panel.

To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100010 on your Sourcegraph instance.

Managed by the Sourcegraph Source team.

Technical details

Query: max by (family) (rate(src_repoupdater_syncer_start_sync{family="Syncer.SyncExternalService"}[9h0m0s]))