Alert solutions

This document contains possible solutions for alerts that fire in Sourcegraph's monitoring. If your alert isn't mentioned here, or if the solution doesn't help, contact us for assistance.

frontend: 99th_percentile_search_request_duration

Descriptions:

  • frontend: 20s+ 99th percentile successful search request duration over 5m

Possible solutions:

  • Get details on the exact queries that are slow by configuring "observability.logSlowSearches": 20, in the site configuration and looking for frontend warning logs prefixed with slow search request.
  • Check that most repositories are indexed by visiting https://sourcegraph.example.com/site-admin/repositories?filter=needs-index (it should show few or no results.)
  • Kubernetes: Check CPU usage of zoekt-webserver in the indexed-search pod, consider increasing CPU limits in the indexed-search.Deployment.yaml if regularly hitting max CPU utilization.
  • Docker Compose: Check CPU usage on the Zoekt Web Server dashboard, consider increasing cpus: of the zoekt-webserver container in docker-compose.yml if regularly hitting max CPU utilization.
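
For the Kubernetes option above, here is a minimal sketch of what raising the zoekt-webserver CPU limit in indexed-search.Deployment.yaml could look like. The container layout and numbers are illustrative only; adapt them to your own deployment:

    # Fragment of indexed-search.Deployment.yaml (labels, selectors, and other fields omitted).
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: indexed-search
    spec:
      template:
        spec:
          containers:
            - name: zoekt-webserver
              resources:
                limits:
                  cpu: "4"       # raise this if zoekt-webserver regularly hits its CPU limit
                  memory: 8Gi
                requests:
                  cpu: "2"
                  memory: 8Gi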

frontend: 90th_percentile_search_request_duration

Descriptions:

  • frontend: 15s+ 90th percentile successful search request duration over 5m

Possible solutions:

  • Get details on the exact queries that are slow by configuring "observability.logSlowSearches": 15, in the site configuration and looking for frontend warning logs prefixed with slow search request.
  • Check that most repositories are indexed by visiting https://sourcegraph.example.com/site-admin/repositories?filter=needs-index (it should show few or no results.)
  • Kubernetes: Check CPU usage of zoekt-webserver in the indexed-search pod, consider increasing CPU limits in the indexed-search.Deployment.yaml if regularly hitting max CPU utilization.
  • Docker Compose: Check CPU usage on the Zoekt Web Server dashboard, consider increasing cpus: of the zoekt-webserver container in docker-compose.yml if regularly hitting max CPU utilization.

frontend: search_alert_user_suggestions

Descriptions:

  • frontend: 50+ search alert user suggestions shown every 5m

Possible solutions:

  • This indicates your users are making syntax errors or similar user errors.

frontend: 99th_percentile_search_codeintel_request_duration

Descriptions:

  • frontend: 20s+ 99th percentile code-intel successful search request duration over 5m

Possible solutions:

  • Get details on the exact queries that are slow by configuring "observability.logSlowSearches": 20, in the site configuration and looking for frontend warning logs prefixed with slow search request.
  • Check that most repositories are indexed by visiting https://sourcegraph.example.com/site-admin/repositories?filter=needs-index (it should show few or no results.)
  • Kubernetes: Check CPU usage of zoekt-webserver in the indexed-search pod, consider increasing CPU limits in the indexed-search.Deployment.yaml if regularly hitting max CPU utilization.
  • Docker Compose: Check CPU usage on the Zoekt Web Server dashboard, consider increasing cpus: of the zoekt-webserver container in docker-compose.yml if regularly hitting max CPU utilization.

frontend: 90th_percentile_search_codeintel_request_duration

Descriptions:

  • frontend: 15s+ 90th percentile code-intel successful search request duration over 5m

Possible solutions:

  • Get details on the exact queries that are slow by configuring "observability.logSlowSearches": 15, in the site configuration and looking for frontend warning logs prefixed with slow search request.
  • Check that most repositories are indexed by visiting https://sourcegraph.example.com/site-admin/repositories?filter=needs-index (it should show few or no results.)
  • Kubernetes: Check CPU usage of zoekt-webserver in the indexed-search pod, consider increasing CPU limits in the indexed-search.Deployment.yaml if regularly hitting max CPU utilization.
  • Docker Compose: Check CPU usage on the Zoekt Web Server dashboard, consider increasing cpus: of the zoekt-webserver container in docker-compose.yml if regularly hitting max CPU utilization.

frontend: search_codeintel_alert_user_suggestions

Descriptions:

  • frontend: 50+ search code-intel alert user suggestions shown every 5m

Possible solutions:

frontend: 99th_percentile_search_api_request_duration

Descriptions:

  • frontend: 50s+ 99th percentile successful search API request duration over 5m

Possible solutions:

  • Get details on the exact queries that are slow by configuring "observability.logSlowSearches": 20, in the site configuration and looking for frontend warning logs prefixed with slow search request.
  • If your users are requesting many results with a large count: parameter, consider using our search pagination API.
  • Check that most repositories are indexed by visiting https://sourcegraph.example.com/site-admin/repositories?filter=needs-index (it should show few or no results.)
  • Kubernetes: Check CPU usage of zoekt-webserver in the indexed-search pod, consider increasing CPU limits in the indexed-search.Deployment.yaml if regularly hitting max CPU utilization.
  • Docker Compose: Check CPU usage on the Zoekt Web Server dashboard, consider increasing cpus: of the zoekt-webserver container in docker-compose.yml if regularly hitting max CPU utilization.

frontend: 90th_percentile_search_api_request_duration

Descriptions:

  • frontend: 40s+ 90th percentile successful search API request duration over 5m

Possible solutions:

  • Get details on the exact queries that are slow by configuring "observability.logSlowSearches": 15, in the site configuration and looking for frontend warning logs prefixed with slow search request.
  • If your users are requesting many results with a large count: parameter, consider using our search pagination API.
  • Check that most repositories are indexed by visiting https://sourcegraph.example.com/site-admin/repositories?filter=needs-index (it should show few or no results.)
  • Kubernetes: Check CPU usage of zoekt-webserver in the indexed-search pod, consider increasing CPU limits in the indexed-search.Deployment.yaml if regularly hitting max CPU utilization.
  • Docker Compose: Check CPU usage on the Zoekt Web Server dashboard, consider increasing cpus: of the zoekt-webserver container in docker-compose.yml if regularly hitting max CPU utilization.

frontend: search_api_alert_user_suggestions

Descriptions:

  • frontend: 50+ search API alert user suggestions shown every 5m

Possible solutions:

  • This indicates your users' search API requests have syntax errors or similar user errors. Check the responses the API sends back for an explanation.

frontend: internal_indexed_search_error_responses

Descriptions:

  • frontend: 5+ internal indexed search error responses every 5m

Possible solutions:

  • Check the Zoekt Web Server dashboard for indications it might be unhealthy.

frontend: internal_unindexed_search_error_responses

Descriptions:

  • frontend: 5+ internal unindexed search error responses every 5m

Possible solutions:

  • Check the Searcher dashboard for indications it might be unhealthy.

frontend: internal_api_error_responses

Descriptions:

  • frontend: 25+ internal API error responses every 5m by route

Possible solutions:

  • This may not be a substantial issue; check the frontend logs for potential causes.

frontend: container_restarts

Descriptions:

  • frontend: 1+ container restarts every 5m by instance (not available on server)

Possible solutions:

  • Kubernetes:
    • Determine if the pod was OOM killed using kubectl describe pod frontend (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p frontend.
  • Docker Compose:
    • Determine if the container was OOM killed using docker inspect -f '{{json .State}}' frontend (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the frontend container in docker-compose.yml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using docker logs frontend (note this will include logs from the previous and currently running container).

frontend: container_memory_usage

Descriptions:

  • frontend: 99%+ container memory usage by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of frontend container in docker-compose.yml.
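
For the Kubernetes option above, a minimal, illustrative sketch of raising the frontend container's memory limit in its Deployment.yaml; the Deployment and container names are assumptions based on a typical Sourcegraph cluster deployment, and the values are placeholders rather than recommendations:

    # Fragment of the frontend Deployment.yaml (other fields omitted).
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sourcegraph-frontend
    spec:
      template:
        spec:
          containers:
            - name: frontend
              resources:
                limits:
                  memory: 4Gi    # raise this if the container keeps hitting the 99% threshold
                requests:
                  memory: 2Gi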

frontend: container_cpu_usage

Descriptions:

  • frontend: 99%+ container cpu usage total (1m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the frontend container in docker-compose.yml (see the sketch below).
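
For the Docker Compose option above, a minimal sketch of where cpus: and memory: limits typically live for the frontend service. This assumes the Compose v3 deploy.resources form; older Compose file formats use service-level cpus: and mem_limit: keys instead, and the service name and values below are illustrative:

    # Fragment of docker-compose.yml (image, ports, volumes, and other keys stay as-is).
    version: '3.7'
    services:
      sourcegraph-frontend-0:
        deploy:
          resources:
            limits:
              cpus: '4'      # raise this if the container regularly hits max CPU utilization
              memory: 4g     # the memory limit referenced by the memory usage alerts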

frontend: provisioning_container_cpu_usage_1d

Descriptions:

  • frontend: 80%+ or less than 30% container cpu usage total (1d average) across all cores by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the frontend container in docker-compose.yml.

If usage is low, consider decreasing the above values.

frontend: provisioning_container_memory_usage_1d

Descriptions:

  • frontend: 80%+ or less than 30% container memory usage (1d average) by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing memory limit in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of the frontend container in docker-compose.yml.

If usage is low, consider decreasing the above values.

frontend: provisioning_container_cpu_usage_5m

Descriptions:

  • frontend: 90%+ container cpu usage total (5m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the frontend container in docker-compose.yml.

frontend: provisioning_container_memory_usage_5m

Descriptions:

  • frontend: 90%+ container memory usage (5m average) by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of frontend container in docker-compose.yml.

gitserver: disk_space_remaining

Descriptions:

  • gitserver: less than 25% disk space remaining by instance

  • gitserver: less than 15% disk space remaining by instance

Possible solutions:

  • Provision more disk space: Sourcegraph will begin deleting least-used repository clones at 10% disk space remaining, which may result in decreased performance, users having to wait for repositories to clone, etc.
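
On Kubernetes, gitserver's disk is typically backed by a PersistentVolumeClaim, so provisioning more space usually means requesting a larger volume. The sketch below is purely illustrative: the claim name and size are assumptions, and whether an existing volume can be resized in place depends on your storage class:

    # Hypothetical PersistentVolumeClaim for a gitserver replica.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: repos-gitserver-0    # illustrative; match the claim your gitserver pod mounts
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 500Gi         # raise this well before free space drops to 10%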

gitserver: running_git_commands

Descriptions:

  • gitserver: 50+ running git commands (signals load)

  • gitserver: 100+ running git commands (signals load)

Possible solutions:

  • Check if the problem may be an intermittent and temporary peak using the "Container monitoring" section at the bottom of the Git Server dashboard.
  • Single container deployments: Consider upgrading to a Docker Compose deployment which offers better scalability and resource isolation.
  • Kubernetes and Docker Compose: Check that you are running a similar number of git server replicas and that their CPU/memory limits are allocated according to what is shown in the Sourcegraph resource estimator.
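
For the replica check in the last bullet, a minimal, illustrative sketch of where the gitserver replica count and resource limits are set in a Kubernetes manifest; gitserver is commonly run as a StatefulSet, and the name and numbers below are assumptions to adjust against the resource estimator's output:

    # Fragment of a gitserver StatefulSet manifest (other fields omitted).
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: gitserver
    spec:
      replicas: 2              # scale this together with CPU/memory limits
      serviceName: gitserver
      template:
        spec:
          containers:
            - name: gitserver
              resources:
                limits:
                  cpu: "4"
                  memory: 8Gi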

gitserver: repository_clone_queue_size

Descriptions:

  • gitserver: 25+ repository clone queue size

Possible solutions:

gitserver: repository_existence_check_queue_size

Descriptions:

  • gitserver: 25+ repository existence check queue size

Possible solutions:

  • Check the code host status indicator for errors: on the Sourcegraph app homepage, when signed in as an admin click the cloud icon in the top right corner of the page.
  • Check if the issue continues to happen after 30 minutes; it may be temporary.
  • Check the gitserver logs for more information.

gitserver: echo_command_duration_test

Descriptions:

  • gitserver: 1s+ echo command duration test

  • gitserver: 2s+ echo command duration test

Possible solutions:

  • Check if the problem may be an intermittent and temporary peak using the "Container monitoring" section at the bottom of the Git Server dashboard.
  • Single container deployments: Consider upgrading to a Docker Compose deployment which offers better scalability and resource isolation.
  • Kubernetes and Docker Compose: Check that you are running a similar number of git server replicas and that their CPU/memory limits are allocated according to what is shown in the Sourcegraph resource estimator.

gitserver: frontend_internal_api_error_responses

Descriptions:

  • gitserver: 5+ frontend-internal API error responses every 5m by route

Possible solutions:

  • Single-container deployments: Check docker logs $CONTAINER_ID for logs starting with repo-updater that indicate requests to the frontend service are failing.
  • Kubernetes:
    • Confirm that kubectl get pods shows the frontend pods are healthy.
    • Check kubectl logs gitserver for logs indicating request failures to frontend or frontend-internal.
  • Docker Compose:
    • Confirm that docker ps shows the frontend-internal container is healthy.
    • Check docker logs gitserver for logs indicating request failures to frontend or frontend-internal.

gitserver: container_restarts

Descriptions:

  • gitserver: 1+ container restarts every 5m by instance (not available on server)

Possible solutions:

  • Kubernetes:
    • Determine if the pod was OOM killed using kubectl describe pod gitserver (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p gitserver.
  • Docker Compose:
    • Determine if the container was OOM killed using docker inspect -f '{{json .State}}' gitserver (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the gitserver container in docker-compose.yml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using docker logs gitserver (note this will include logs from the previous and currently running container).

gitserver: container_memory_usage

Descriptions:

  • gitserver: 99%+ container memory usage by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of gitserver container in docker-compose.yml.

gitserver: container_cpu_usage

Descriptions:

  • gitserver: 99%+ container cpu usage total (1m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the gitserver container in docker-compose.yml.

gitserver: provisioning_container_cpu_usage_1d

Descriptions:

  • gitserver: 80%+ or less than 30% container cpu usage total (1d average) across all cores by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the gitserver container in docker-compose.yml.

If usage is low, consider decreasing the above values.

gitserver: provisioning_container_memory_usage_1d

Descriptions:

  • gitserver: 80%+ or less than 30% container memory usage (1d average) by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing memory limit in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of the gitserver container in docker-compose.yml.

If usage is low, consider decreasing the above values.

gitserver: provisioning_container_cpu_usage_5m

Descriptions:

  • gitserver: 90%+ container cpu usage total (5m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the gitserver container in docker-compose.yml.

gitserver: provisioning_container_memory_usage_5m

Descriptions:

  • gitserver: 90%+ container memory usage (5m average) by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of gitserver container in docker-compose.yml.

github-proxy: container_restarts

Descriptions:

  • github-proxy: 1+ container restarts every 5m by instance (not available on server)

Possible solutions:

  • Kubernetes:
    • Determine if the pod was OOM killed using kubectl describe pod github-proxy (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p github-proxy.
  • Docker Compose:
    • Determine if the container was OOM killed using docker inspect -f '{{json .State}}' github-proxy (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the github-proxy container in docker-compose.yml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using docker logs github-proxy (note this will include logs from the previous and currently running container).

github-proxy: container_memory_usage

Descriptions:

  • github-proxy: 99%+ container memory usage by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of github-proxy container in docker-compose.yml.

github-proxy: container_cpu_usage

Descriptions:

  • github-proxy: 99%+ container cpu usage total (1m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the github-proxy container in docker-compose.yml.

github-proxy: provisioning_container_cpu_usage_1d

Descriptions:

  • github-proxy: 80%+ or less than 30% container cpu usage total (1d average) across all cores by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the github-proxy container in docker-compose.yml.

If usage is low, consider decreasing the above values.

github-proxy: provisioning_container_memory_usage_1d

Descriptions:

  • github-proxy: 80%+ or less than 30% container memory usage (1d average) by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing memory limit in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of the github-proxy container in docker-compose.yml.

If usage is low, consider decreasing the above values.

github-proxy: provisioning_container_cpu_usage_5m

Descriptions:

  • github-proxy: 90%+ container cpu usage total (5m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the github-proxy container in docker-compose.yml.

github-proxy: provisioning_container_memory_usage_5m

Descriptions:

  • github-proxy: 90%+ container memory usage (5m average) by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of github-proxy container in docker-compose.yml.

precise-code-intel-bundle-manager: disk_space_remaining

Descriptions:

  • precise-code-intel-bundle-manager: less than 25% disk space remaining by instance

  • precise-code-intel-bundle-manager: less than 15% disk space remaining by instance

Possible solutions:

  • Provision more disk space: Sourcegraph will begin deleting the oldest uploaded bundle files at 10% disk space remaining.

precise-code-intel-bundle-manager: frontend_internal_api_error_responses

Descriptions:

  • precise-code-intel-bundle-manager: 5+ frontend-internal API error responses every 5m by route

Possible solutions:

  • Single-container deployments: Check docker logs $CONTAINER_ID for logs starting with repo-updater that indicate requests to the frontend service are failing.
  • Kubernetes:
    • Confirm that kubectl get pods shows the frontend pods are healthy.
    • Check kubectl logs precise-code-intel-bundle-manager for logs indicating request failures to frontend or frontend-internal.
  • Docker Compose:
    • Confirm that docker ps shows the frontend-internal container is healthy.
    • Check docker logs precise-code-intel-bundle-manager for logs indicating request failures to frontend or frontend-internal.

precise-code-intel-bundle-manager: container_restarts

Descriptions:

  • precise-code-intel-bundle-manager: 1+ container restarts every 5m by instance (not available on server)

Possible solutions:

  • Kubernetes:
    • Determine if the pod was OOM killed using kubectl describe pod precise-code-intel-bundle-manager (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p precise-code-intel-bundle-manager.
  • Docker Compose:
    • Determine if the container was OOM killed using docker inspect -f '{{json .State}}' precise-code-intel-bundle-manager (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the precise-code-intel-bundle-manager container in docker-compose.yml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using docker logs precise-code-intel-bundle-manager (note this will include logs from the previous and currently running container).

precise-code-intel-bundle-manager: container_memory_usage

Descriptions:

  • precise-code-intel-bundle-manager: 99%+ container memory usage by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of precise-code-intel-bundle-manager container in docker-compose.yml.

precise-code-intel-bundle-manager: container_cpu_usage

Descriptions:

  • precise-code-intel-bundle-manager: 99%+ container cpu usage total (1m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the precise-code-intel-bundle-manager container in docker-compose.yml.

precise-code-intel-bundle-manager: provisioning_container_cpu_usage_1d

Descriptions:

  • precise-code-intel-bundle-manager: 80%+ or less than 30% container cpu usage total (1d average) across all cores by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the precise-code-intel-bundle-manager container in docker-compose.yml.

If usage is low, consider decreasing the above values.

precise-code-intel-bundle-manager: provisioning_container_memory_usage_1d

Descriptions:

  • precise-code-intel-bundle-manager: 80%+ or less than 30% container memory usage (1d average) by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing memory limit in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of the precise-code-intel-bundle-manager container in docker-compose.yml.

If usage is low, consider decreasing the above values.

precise-code-intel-bundle-manager: provisioning_container_cpu_usage_5m

Descriptions:

  • precise-code-intel-bundle-manager: 90%+ container cpu usage total (5m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the precise-code-intel-bundle-manager container in docker-compose.yml.

precise-code-intel-bundle-manager: provisioning_container_memory_usage_5m

Descriptions:

  • precise-code-intel-bundle-manager: 90%+ container memory usage (5m average) by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of precise-code-intel-bundle-manager container in docker-compose.yml.

precise-code-intel-worker: frontend_internal_api_error_responses

Descriptions:

  • precise-code-intel-worker: 5+ frontend-internal API error responses every 5m by route

Possible solutions:

  • Single-container deployments: Check docker logs $CONTAINER_ID for logs starting with repo-updater that indicate requests to the frontend service are failing.
  • Kubernetes:
    • Confirm that kubectl get pods shows the frontend pods are healthy.
    • Check kubectl logs precise-code-intel-worker for logs indicating request failures to frontend or frontend-internal.
  • Docker Compose:
    • Confirm that docker ps shows the frontend-internal container is healthy.
    • Check docker logs precise-code-intel-worker for logs indicating request failures to frontend or frontend-internal.

precise-code-intel-worker: container_restarts

Descriptions:

  • precise-code-intel-worker: 1+ container restarts every 5m by instance (not available on server)

Possible solutions:

  • Kubernetes:
    • Determine if the pod was OOM killed using kubectl describe pod precise-code-intel-worker (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p precise-code-intel-worker.
  • Docker Compose:
    • Determine if the container was OOM killed using docker inspect -f '{{json .State}}' precise-code-intel-worker (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the precise-code-intel-worker container in docker-compose.yml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using docker logs precise-code-intel-worker (note this will include logs from the previous and currently running container).

precise-code-intel-worker: container_memory_usage

Descriptions:

  • precise-code-intel-worker: 99%+ container memory usage by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of precise-code-intel-worker container in docker-compose.yml.

precise-code-intel-worker: container_cpu_usage

Descriptions:

  • precise-code-intel-worker: 99%+ container cpu usage total (1m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the precise-code-intel-worker container in docker-compose.yml.

precise-code-intel-worker: provisioning_container_cpu_usage_1d

Descriptions:

  • precise-code-intel-worker: 80%+ or less than 30% container cpu usage total (1d average) across all cores by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the precise-code-intel-worker container in docker-compose.yml.

If usage is low, consider decreasing the above values.

precise-code-intel-worker: provisioning_container_memory_usage_1d

Descriptions:

  • precise-code-intel-worker: 80%+ or less than 30% container memory usage (1d average) by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing memory limit in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of the precise-code-intel-worker container in docker-compose.yml.

If usage is low, consider decreasing the above values.

precise-code-intel-worker: provisioning_container_cpu_usage_5m

Descriptions:

  • precise-code-intel-worker: 90%+ container cpu usage total (5m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the precise-code-intel-worker container in docker-compose.yml.

precise-code-intel-worker: provisioning_container_memory_usage_5m

Descriptions:

  • precise-code-intel-worker: 90%+ container memory usage (5m average) by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of precise-code-intel-worker container in docker-compose.yml.

precise-code-intel-indexer: frontend_internal_api_error_responses

Descriptions:

  • precise-code-intel-indexer: 5+ frontend-internal API error responses every 5m by route

Possible solutions:

  • Single-container deployments: Check docker logs $CONTAINER_ID for logs starting with repo-updater that indicate requests to the frontend service are failing.
  • Kubernetes:
    • Confirm that kubectl get pods shows the frontend pods are healthy.
    • Check kubectl logs precise-code-intel-indexer for logs indicating request failures to frontend or frontend-internal.
  • Docker Compose:
    • Confirm that docker ps shows the frontend-internal container is healthy.
    • Check docker logs precise-code-intel-indexer for logs indicating request failures to frontend or frontend-internal.

precise-code-intel-indexer: container_restarts

Descriptions:

  • precise-code-intel-indexer: 1+ container restarts every 5m by instance (not available on server)

Possible solutions:

  • Kubernetes:
    • Determine if the pod was OOM killed using kubectl describe pod precise-code-intel-indexer (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p precise-code-intel-indexer.
  • Docker Compose:
    • Determine if the container was OOM killed using docker inspect -f '{{json .State}}' precise-code-intel-indexer (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the precise-code-intel-indexer container in docker-compose.yml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using docker logs precise-code-intel-indexer (note this will include logs from the previous and currently running container).

precise-code-intel-indexer: container_memory_usage

Descriptions:

  • precise-code-intel-indexer: 99%+ container memory usage by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of precise-code-intel-indexer container in docker-compose.yml.

precise-code-intel-indexer: container_cpu_usage

Descriptions:

  • precise-code-intel-indexer: 99%+ container cpu usage total (1m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the precise-code-intel-indexer container in docker-compose.yml.

precise-code-intel-indexer: provisioning_container_cpu_usage_1d

Descriptions:

  • precise-code-intel-indexer: 80%+ or less than 30% container cpu usage total (1d average) across all cores by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the precise-code-intel-indexer container in docker-compose.yml.

If usage is low, consider decreasing the above values.

precise-code-intel-indexer: provisioning_container_memory_usage_1d

Descriptions:

  • precise-code-intel-indexer: 80%+ or less than 30% container memory usage (1d average) by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing memory limit in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of the precise-code-intel-indexer container in docker-compose.yml.

If usage is low, consider decreasing the above values.

precise-code-intel-indexer: provisioning_container_cpu_usage_5m

Descriptions:

  • precise-code-intel-indexer: 90%+ container cpu usage total (5m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the precise-code-intel-indexer container in docker-compose.yml.

precise-code-intel-indexer: provisioning_container_memory_usage_5m

Descriptions:

  • precise-code-intel-indexer: 90%+ container memory usage (5m average) by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of precise-code-intel-indexer container in docker-compose.yml.

query-runner: frontend_internal_api_error_responses

Descriptions:

  • query-runner: 5+ frontend-internal API error responses every 5m by route

Possible solutions:

  • Single-container deployments: Check docker logs $CONTAINER_ID for logs starting with repo-updater that indicate requests to the frontend service are failing.
  • Kubernetes:
    • Confirm that kubectl get pods shows the frontend pods are healthy.
    • Check kubectl logs query-runner for logs indicating request failures to frontend or frontend-internal.
  • Docker Compose:
    • Confirm that docker ps shows the frontend-internal container is healthy.
    • Check docker logs query-runner for logs indicating request failures to frontend or frontend-internal.

query-runner: container_restarts

Descriptions:

  • query-runner: 1+ container restarts every 5m by instance (not available on server)

Possible solutions:

  • Kubernetes:
    • Determine if the pod was OOM killed using kubectl describe pod query-runner (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p query-runner.
  • Docker Compose:
    • Determine if the container was OOM killed using docker inspect -f '{{json .State}}' query-runner (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the query-runner container in docker-compose.yml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using docker logs query-runner (note this will include logs from the previous and currently running container).

query-runner: container_memory_usage

Descriptions:

  • query-runner: 99%+ container memory usage by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of query-runner container in docker-compose.yml.

query-runner: container_cpu_usage

Descriptions:

  • query-runner: 99%+ container cpu usage total (1m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the query-runner container in docker-compose.yml.

query-runner: provisioning_container_cpu_usage_1d

Descriptions:

  • query-runner: 80%+ or less than 30% container cpu usage total (1d average) across all cores by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the query-runner container in docker-compose.yml.

If usage is low, consider decreasing the above values.

query-runner: provisioning_container_memory_usage_1d

Descriptions:

  • query-runner: 80%+ or less than 30% container memory usage (1d average) by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing memory limit in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of the query-runner container in docker-compose.yml.

If usage is low, consider decreasing the above values.

query-runner: provisioning_container_cpu_usage_5m

Descriptions:

  • query-runner: 90%+ container cpu usage total (5m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the query-runner container in docker-compose.yml.

query-runner: provisioning_container_memory_usage_5m

Descriptions:

  • query-runner: 90%+ container memory usage (5m average) by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of query-runner container in docker-compose.yml.

replacer: frontend_internal_api_error_responses

Descriptions:

  • replacer: 5+ frontend-internal API error responses every 5m by route

Possible solutions:

  • Single-container deployments: Check docker logs $CONTAINER_ID for logs starting with repo-updater that indicate requests to the frontend service are failing.
  • Kubernetes:
    • Confirm that kubectl get pods shows the frontend pods are healthy.
    • Check kubectl logs replacer for logs indicating request failures to frontend or frontend-internal.
  • Docker Compose:
    • Confirm that docker ps shows the frontend-internal container is healthy.
    • Check docker logs replacer for logs indicating request failures to frontend or frontend-internal.

replacer: container_restarts

Descriptions:

  • replacer: 1+ container restarts every 5m by instance (not available on server)

Possible solutions:

  • Kubernetes:
    • Determine if the pod was OOM killed using kubectl describe pod replacer (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p replacer.
  • Docker Compose:
    • Determine if the container was OOM killed using docker inspect -f '{{json .State}}' replacer (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the replacer container in docker-compose.yml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using docker logs replacer (note this will include logs from the previous and currently running container).

replacer: container_memory_usage

Descriptions:

  • replacer: 99%+ container memory usage by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of replacer container in docker-compose.yml.

replacer: container_cpu_usage

Descriptions:

  • replacer: 99%+ container cpu usage total (1m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the replacer container in docker-compose.yml.

replacer: provisioning_container_cpu_usage_1d

Descriptions:

  • replacer: 80%+ or less than 30% container cpu usage total (1d average) across all cores by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the replacer container in docker-compose.yml.

If usage is low, consider decreasing the above values.

replacer: provisioning_container_memory_usage_1d

Descriptions:

  • replacer: 80%+ or less than 30% container memory usage (1d average) by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing memory limit in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of the replacer container in docker-compose.yml.

If usage is low, consider decreasing the above values.

replacer: provisioning_container_cpu_usage_5m

Descriptions:

  • replacer: 90%+ container cpu usage total (5m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the replacer container in docker-compose.yml.

replacer: provisioning_container_memory_usage_5m

Descriptions:

  • replacer: 90%+ container memory usage (5m average) by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of replacer container in docker-compose.yml.

repo-updater: frontend_internal_api_error_responses

Descriptions:

  • repo-updater: 5+ frontend-internal API error responses every 5m by route

Possible solutions:

  • Single-container deployments: Check docker logs $CONTAINER_ID for logs starting with repo-updater that indicate requests to the frontend service are failing.
  • Kubernetes:
    • Confirm that kubectl get pods shows the frontend pods are healthy.
    • Check kubectl logs repo-updater for logs indicating request failures to frontend or frontend-internal.
  • Docker Compose:
    • Confirm that docker ps shows the frontend-internal container is healthy.
    • Check docker logs repo-updater for logs indicating request failures to frontend or frontend-internal.

repo-updater: container_restarts

Descriptions:

  • repo-updater: 1+ container restarts every 5m by instance (not available on server)

Possible solutions:

  • Kubernetes:
    • Determine if the pod was OOM killed using kubectl describe pod repo-updater (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p repo-updater.
  • Docker Compose:
    • Determine if the container was OOM killed using docker inspect -f '{{json .State}}' repo-updater (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the repo-updater container in docker-compose.yml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using docker logs repo-updater (note this will include logs from the previous and currently running container).

repo-updater: container_memory_usage

Descriptions:

  • repo-updater: 99%+ container memory usage by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of repo-updater container in docker-compose.yml.

repo-updater: container_cpu_usage

Descriptions:

  • repo-updater: 99%+ container cpu usage total (1m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the repo-updater container in docker-compose.yml.

repo-updater: provisioning_container_cpu_usage_1d

Descriptions:

  • repo-updater: 80%+ or less than 30% container cpu usage total (1d average) across all cores by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the repo-updater container in docker-compose.yml.

If usage is low, consider decreasing the above values.

repo-updater: provisioning_container_memory_usage_1d

Descriptions:

  • repo-updater: 80%+ or less than 30% container memory usage (1d average) by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing memory limit in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of the repo-updater container in docker-compose.yml.

If usage is low, consider decreasing the above values.

repo-updater: provisioning_container_cpu_usage_5m

Descriptions:

  • repo-updater: 90%+ container cpu usage total (5m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the repo-updater container in docker-compose.yml.

repo-updater: provisioning_container_memory_usage_5m

Descriptions:

  • repo-updater: 90%+ container memory usage (5m average) by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of repo-updater container in docker-compose.yml.

searcher: frontend_internal_api_error_responses

Descriptions:

  • searcher: 5+ frontend-internal API error responses every 5m by route

Possible solutions:

  • Single-container deployments: Check docker logs $CONTAINER_ID for logs starting with repo-updater that indicate requests to the frontend service are failing.
  • Kubernetes:
    • Confirm that kubectl get pods shows the frontend pods are healthy.
    • Check kubectl logs searcher for logs indicating request failures to frontend or frontend-internal.
  • Docker Compose:
    • Confirm that docker ps shows the frontend-internal container is healthy.
    • Check docker logs searcher for logs indicating request failures to frontend or frontend-internal.

searcher: container_restarts

Descriptions:

  • searcher: 1+ container restarts every 5m by instance (not available on server)

Possible solutions:

  • Kubernetes:
    • Determine if the pod was OOM killed using kubectl describe pod searcher (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p searcher.
  • Docker Compose:
    • Determine if the container was OOM killed using docker inspect -f '{{json .State}}' searcher (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the searcher container in docker-compose.yml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using docker logs searcher (note this will include logs from the previous and currently running container).

searcher: container_memory_usage

Descriptions:

  • searcher: 99%+ container memory usage by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of searcher container in docker-compose.yml.

searcher: container_cpu_usage

Descriptions:

  • searcher: 99%+ container cpu usage total (1m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the searcher container in docker-compose.yml.

searcher: provisioning_container_cpu_usage_1d

Descriptions:

  • searcher: 80%+ or less than 30% container cpu usage total (1d average) across all cores by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the searcher container in docker-compose.yml.

If usage is low, consider decreasing the above values.

searcher: provisioning_container_memory_usage_1d

Descriptions:

  • searcher: 80%+ or less than 30% container memory usage (1d average) by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing memory limit in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of the searcher container in docker-compose.yml.

If usage is low, consider decreasing the above values.

searcher: provisioning_container_cpu_usage_5m

Descriptions:

  • searcher: 90%+ container cpu usage total (5m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the searcher container in docker-compose.yml.

searcher: provisioning_container_memory_usage_5m

Descriptions:

  • searcher: 90%+ container memory usage (5m average) by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of searcher container in docker-compose.yml.

symbols: frontend_internal_api_error_responses

Descriptions:

  • symbols: 5+ frontend-internal API error responses every 5m by route

Possible solutions:

  • Single-container deployments: Check docker logs $CONTAINER_ID for logs starting with repo-updater that indicate requests to the frontend service are failing.
  • Kubernetes:
    • Confirm that kubectl get pods shows the frontend pods are healthy.
    • Check kubectl logs symbols for logs indicating request failures to frontend or frontend-internal.
  • Docker Compose:
    • Confirm that docker ps shows the frontend-internal container is healthy.
    • Check docker logs symbols for logs indicating request failures to frontend or frontend-internal.

symbols: container_restarts

Descriptions:

  • symbols: 1+ container restarts every 5m by instance (not available on server)

Possible solutions:

  • Kubernetes:
    • Determine if the pod was OOM killed using kubectl describe pod symbols (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p symbols.
  • Docker Compose:
    • Determine if the container was OOM killed using docker inspect -f '{{json .State}}' symbols (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the symbols container in docker-compose.yml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using docker logs symbols (note this will include logs from the previous and currently running container).

symbols: container_memory_usage

Descriptions:

  • symbols: 99%+ container memory usage by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of symbols container in docker-compose.yml.

symbols: container_cpu_usage

Descriptions:

  • symbols: 99%+ container cpu usage total (1m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the symbols container in docker-compose.yml.

symbols: provisioning_container_cpu_usage_1d

Descriptions:

  • symbols: 80%+ or less than 30% container cpu usage total (1d average) across all cores by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the symbols container in docker-compose.yml.

If usage is low, consider decreasing the above values.

symbols: provisioning_container_memory_usage_1d

Descriptions:

  • symbols: 80%+ or less than 30% container memory usage (1d average) by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing memory limit in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of the symbols container in docker-compose.yml.

If usage is low, consider decreasing the above values.

symbols: provisioning_container_cpu_usage_5m

Descriptions:

  • symbols: 90%+ container cpu usage total (5m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the symbols container in docker-compose.yml.

symbols: provisioning_container_memory_usage_5m

Descriptions:

  • symbols: 90%+ container memory usage (5m average) by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of symbols container in docker-compose.yml.

syntect-server: container_restarts

Descriptions:

  • syntect-server: 1+ container restarts every 5m by instance (not available on server)

Possible solutions:

  • Kubernetes:
    • Determine if the pod was OOM killed using kubectl describe pod syntect-server (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p syntect-server.
  • Docker Compose:
    • Determine if the container was OOM killed using docker inspect -f '{{json .State}}' syntect-server (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the syntect-server container in docker-compose.yml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using docker logs syntect-server (note this will include logs from the previous and currently running container).

syntect-server: container_memory_usage

Descriptions:

  • syntect-server: 99%+ container memory usage by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of syntect-server container in docker-compose.yml.

syntect-server: container_cpu_usage

Descriptions:

  • syntect-server: 99%+ container cpu usage total (1m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the syntect-server container in docker-compose.yml.

syntect-server: provisioning_container_cpu_usage_1d

Descriptions:

  • syntect-server: 80%+ or less than 30% container cpu usage total (1d average) across all cores by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the syntect-server container in docker-compose.yml.

If usage is low, consider decreasing the above values.

syntect-server: provisioning_container_memory_usage_1d

Descriptions:

  • syntect-server: 80%+ or less than 30% container memory usage (1d average) by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing memory limit in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of the syntect-server container in docker-compose.yml.

If usage is low, consider decreasing the above values.

syntect-server: provisioning_container_cpu_usage_5m

Descriptions:

  • syntect-server: 90%+ container cpu usage total (5m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the syntect-server container in docker-compose.yml.

syntect-server: provisioning_container_memory_usage_5m

Descriptions:

  • syntect-server: 90%+ container memory usage (5m average) by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in the relevant Deployment.yaml; one way to apply the change is sketched below.
  • Docker Compose: Consider increasing memory: of the syntect-server container in docker-compose.yml.
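
On Kubernetes the memory limit can be raised either by editing the relevant Deployment.yaml and re-applying it, or imperatively with kubectl set resources. A sketch: the deployment name, file name and 6Gi value are illustrative assumptions, and if your manifests are managed in Git you should prefer editing the YAML so the change is not overwritten.

    # Bump the memory limit on the syntect-server container (illustrative value)
    kubectl set resources deployment syntect-server -c syntect-server --limits=memory=6Gi

    # Or edit the manifest and re-apply it
    kubectl apply -f syntect-server.Deployment.yaml
    kubectl rollout status deployment/syntect-server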

zoekt-indexserver: container_restarts

Descriptions:

  • zoekt-indexserver: 1+ container restarts every 5m by instance (not available on server)

Possible solutions:

  • Kubernetes:
    • Determine if the pod was OOM killed using kubectl describe pod zoekt-indexserver (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p zoekt-indexserver.
  • Docker Compose:
    • Determine if the container was OOM killed using docker inspect -f '{{json .State}}' zoekt-indexserver (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the zoekt-indexserver container in docker-compose.yml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using docker logs zoekt-indexserver (note this will include logs from the previous and currently running container).

zoekt-indexserver: container_memory_usage

Descriptions:

  • zoekt-indexserver: 99%+ container memory usage by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of zoekt-indexserver container in docker-compose.yml.

zoekt-indexserver: container_cpu_usage

Descriptions:

  • zoekt-indexserver: 99%+ container cpu usage total (1m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml (typically indexed-search.Deployment.yaml; see the sketch below).
  • Docker Compose: Consider increasing cpus: of the zoekt-indexserver container in docker-compose.yml.
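
In a typical Sourcegraph Kubernetes deployment, zoekt-indexserver and zoekt-webserver run as two containers in the same indexed-search pod, so their limits sit side by side in indexed-search.Deployment.yaml. A rough, abridged sketch with illustrative values:

    # indexed-search.Deployment.yaml (abridged; values illustrative)
    spec:
      template:
        spec:
          containers:
            - name: zoekt-indexserver
              resources:
                limits:
                  cpu: "8"
                  memory: 8Gi
            - name: zoekt-webserver
              resources:
                limits:
                  cpu: "8"
                  memory: 16Gi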

zoekt-indexserver: provisioning_container_cpu_usage_1d

Descriptions:

  • zoekt-indexserver: 80%+ or less than 30% container cpu usage total (1d average) across all cores by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the zoekt-indexserver container in docker-compose.yml.

If usage is low, consider decreasing the above values.

zoekt-indexserver: provisioning_container_memory_usage_1d

Descriptions:

  • zoekt-indexserver: 80%+ or less than 30% container memory usage (1d average) by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing memory limit in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of the zoekt-indexserver container in docker-compose.yml.

If usage is low, consider decreasing the above values.

zoekt-indexserver: provisioning_container_cpu_usage_5m

Descriptions:

  • zoekt-indexserver: 90%+ container cpu usage total (5m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the zoekt-indexserver container in docker-compose.yml.

zoekt-indexserver: provisioning_container_memory_usage_5m

Descriptions:

  • zoekt-indexserver: 90%+ container memory usage (5m average) by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of zoekt-indexserver container in docker-compose.yml.

zoekt-webserver: container_restarts

Descriptions:

  • zoekt-webserver: 1+ container restarts every 5m by instance (not available on server)

Possible solutions:

  • Kubernetes:
    • Determine if the pod was OOM killed using kubectl describe pod zoekt-webserver (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p zoekt-webserver.
  • Docker Compose:
    • Determine if the container was OOM killed using docker inspect -f '{{json .State}}' zoekt-webserver (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the zoekt-webserver container in docker-compose.yml.
    • Check the logs before the container restarted to see if there are panic: messages or similar using docker logs zoekt-webserver (note this will include logs from the previous and currently running container).

zoekt-webserver: container_memory_usage

Descriptions:

  • zoekt-webserver: 99%+ container memory usage by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of zoekt-webserver container in docker-compose.yml.

zoekt-webserver: container_cpu_usage

Descriptions:

  • zoekt-webserver: 99%+ container cpu usage total (1m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the zoekt-webserver container in docker-compose.yml; the change takes effect once the container is recreated, as sketched below.
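
Edits to docker-compose.yml only take effect once the container is recreated. A minimal sketch, assuming the Compose service name matches the zoekt-webserver container name used on this page and that it is run from the directory containing docker-compose.yml:

    # Recreate just the zoekt-webserver service with the new limits
    docker-compose up -d zoekt-webserver

    # Confirm the new limits are in effect
    docker inspect -f '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}' zoekt-webserver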

zoekt-webserver: provisioning_container_cpu_usage_1d

Descriptions:

  • zoekt-webserver: 80%+ or less than 30% container cpu usage total (1d average) across all cores by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the zoekt-webserver container in docker-compose.yml.

If usage is low, consider decreasing the above values.

zoekt-webserver: provisioning_container_memory_usage_1d

Descriptions:

  • zoekt-webserver: 80%+ or less than 30% container memory usage (1d average) by instance (not available on server)

Possible solutions:

If usage is high:

  • Kubernetes: Consider increasing memory limit in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of the zoekt-webserver container in docker-compose.yml.

If usage is low, consider decreasing the above values.

zoekt-webserver: provisioning_container_cpu_usage_5m

Descriptions:

  • zoekt-webserver: 90%+ container cpu usage total (5m average) across all cores by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing CPU limits in the relevant Deployment.yaml.
  • Docker Compose: Consider increasing cpus: of the zoekt-webserver container in docker-compose.yml.

zoekt-webserver: provisioning_container_memory_usage_5m

Descriptions:

  • zoekt-webserver: 90%+ container memory usage (5m average) by instance (not available on server)

Possible solutions:

  • Kubernetes: Consider increasing memory limit in relevant Deployment.yaml.
  • Docker Compose: Consider increasing memory: of zoekt-webserver container in docker-compose.yml.