Deploying code intelligence services

Most of the code intelligence logic lives inside the enterprise frontend, precise-code-intel-worker, and executor-queue services. These services are deployed with the rest of the enterprise instance via the Docker, Docker Compose, or Kubernetes configuration.
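For illustration, here is a minimal Docker Compose sketch of how the code intelligence services could sit alongside the rest of the instance. The image names, tags, and restart policy below are placeholder assumptions for this example; the authoritative service definitions live in the deployment repositories.

```yaml
# Hypothetical excerpt of a docker-compose.yaml, for illustration only.
# Image names and tags are placeholders, not the real deployment values.
version: '3.7'
services:
  precise-code-intel-worker:
    image: 'index.docker.io/sourcegraph/precise-code-intel-worker:insiders'  # placeholder tag
    restart: always

  executor-queue:
    image: 'index.docker.io/sourcegraph/executor-queue:insiders'  # placeholder tag
    restart: always
```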

The executor service, which runs user-supplied code to produce and upload precise code intel indexes, is deployed directly onto compute nodes in its own GCP project. This service requires certain Linux kernel extensions to operate, which are not available within a Kubernetes cluster. The deployment of this service is managed through Terraform.
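As a rough sketch of what that Terraform configuration could look like, the following defines a single GCE compute node for an executor. Every value here (instance name, project, zone, machine type, image) is a placeholder assumption, as is the guess that the kernel features in question need nested virtualization enabled on the VM; the actual configuration lives in the infrastructure code for that GCP project.

```hcl
# Illustrative sketch only: one GCE compute node for an executor.
# All names and values are placeholders, not the real configuration.
resource "google_compute_instance" "executor" {
  name         = "executor-1"          # placeholder instance name
  project      = "example-executors"   # placeholder GCP project
  zone         = "us-central1-a"       # placeholder zone
  machine_type = "n1-standard-8"       # placeholder machine type

  boot_disk {
    initialize_params {
      # Placeholder image; in practice this would be an image with the
      # executor binary and its kernel-level dependencies baked in.
      image = "projects/example-executors/global/images/executor-image"
    }
  }

  # Assumption: the required kernel extensions call for nested
  # virtualization support on the compute node.
  advanced_machine_features {
    enable_nested_virtualization = true
  }

  network_interface {
    network = "default"
    access_config {} # ephemeral external IP
  }
}
```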