Why are campaigns designed the way they are?
- Declarative API (not imperative). You declare your intent, such as "lint files in all repositories with a `package.json` file," and the campaign figures out how to achieve your desired state. The external state (repositories, changesets, code hosts, access tokens, etc.) can change at any time, and temporary errors frequently occur when reading from and writing to code hosts. These factors would make an imperative API very cumbersome, because each API client would need to handle the complexity of the distributed system itself.
- Define a campaign in a file (not some online API). The source of truth of a campaign’s definition is a file that can be stored in version control, reviewed in code review, and re-applied by CI. This is in the same spirit as IaaC (infrastructure as code; e.g., storing your Terraform/Kubernetes/etc. files in Git). We prefer this approach over a (worse) alternative where you define a campaign in a UI with a bunch of text fields, checkboxes, buttons, etc., and need to write a custom API client to import/export the campaign definition.
- Shareable and portable. You can share your campaign specs, and it’s easy for other people to use them. A campaign spec expresses an intent that’s high-level enough to (usually) not be specific to your own particular repositories. You declare and inject configuration and secrets to customize it instead of hard-coding those values.
- Large-scale. You can run campaigns across 10,000s of repositories. It might take a while to compute and push everything, and the current implementation might cap out lower than that, but the fundamental design scales well.
- Accommodates a variety of code hosts and review/merge processes. Specifically, we don't want to limit campaigns to working only with GitHub pull requests. (See the current support list.)
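To make these points concrete, here is a minimal campaign spec of the kind the bullets describe. This is an illustrative sketch: the repository query, commands, and branch name are placeholders, and the authoritative list of fields is in the campaign spec reference documentation.

```yaml
name: hello-world
description: Append "Hello World" to every README.md
# Declarative: which repositories the campaign targets.
on:
  - repositoriesMatchingQuery: file:README.md
# How to compute the changes (run in a container, per repository).
steps:
  - run: echo Hello World | tee -a $(find -name README.md)
    container: alpine:3
# Template for the branch and changeset created in each repository.
changesetTemplate:
  title: Hello World
  body: This campaign appends "Hello World" to all README.md files.
  branch: hello-world
  commit:
    message: Append Hello World to all README.md files
  published: false
```

Because this definition lives in a plain file, it can be checked into version control, reviewed, and re-applied like any other configuration.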
Comparison to other distributed systems
Kubernetes is a distributed system with an API that many people are familiar with. Campaigns is also a distributed system. All APIs for distributed systems need to handle a similar set of concerns around robustness, consistency, etc. Here’s a comparison showing how these concerns are handled for a Kubernetes Deployment and a Sourcegraph campaign. In some cases, we’ve found Kubernetes to be a good source of inspiration for the campaigns API, but resembling Kubernetes is not an explicit goal.
| | Kubernetes Deployment | Sourcegraph campaign |
|---|---|---|
| What underlying thing does this API manage? | Pods running on many (possibly unreliable) nodes | Branches and changesets on many repositories, which can be rate-limited and externally modified (and our authorization can change) |
| **How desired state is computed** | | |
| Desired state consists of... | PodSpecs | ChangesetSpecs |
| Where is the desired state computed? | The deployment controller (part of the Kubernetes cluster) consults the DeploymentSpec and continuously computes the desired state. | The Sourcegraph CLI (running on your local machine, not on the Sourcegraph server) consults the campaign spec and computes the desired state when you invoke it.<br><br>Difference vs. Kubernetes: a campaign's desired state is computed locally, not on the server, because it requires executing arbitrary commands, which is not yet supported by the Sourcegraph server. See the campaigns known issue "Campaign steps are run locally...". |
| **Reconciling desired state vs. actual state** | The "deployment controller" reconciles the resulting PodSpecs against the current actual PodSpecs (and does smart things like rolling deploys). | The "campaign controller" (i.e., our backend) reconciles the resulting ChangesetSpecs against the current actual changesets (and does smart things like gradual roll-out/publishing and auto-merging when checks pass). |
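The reconciliation pattern in the last row can be sketched in code. The sketch below diffs desired state against actual state and returns the operations needed to converge; the `Changeset` type and `reconcile` function are hypothetical simplifications for illustration, not Sourcegraph's actual implementation.

```go
package main

import (
	"fmt"
	"sort"
)

// Changeset is a simplified stand-in for a changeset on a code host.
// (Hypothetical type for illustration; not Sourcegraph's real data model.)
type Changeset struct {
	Repo   string
	Branch string
}

// reconcile diffs desired state against actual state and returns which
// changesets to create and which to close, in the spirit of the "campaign
// controller" (and of Kubernetes controllers generally). A real controller
// would also retry on temporary code-host errors and re-run continuously.
func reconcile(desired, actual map[string]Changeset) (toCreate, toClose []string) {
	for key := range desired {
		if _, ok := actual[key]; !ok {
			toCreate = append(toCreate, key)
		}
	}
	for key := range actual {
		if _, ok := desired[key]; !ok {
			toClose = append(toClose, key)
		}
	}
	// Map iteration order is random in Go; sort for stable output.
	sort.Strings(toCreate)
	sort.Strings(toClose)
	return toCreate, toClose
}

func main() {
	desired := map[string]Changeset{
		"repoA": {Repo: "repoA", Branch: "hello-world"},
		"repoB": {Repo: "repoB", Branch: "hello-world"},
	}
	actual := map[string]Changeset{
		"repoB": {Repo: "repoB", Branch: "hello-world"},
		"repoC": {Repo: "repoC", Branch: "stale-branch"},
	}
	toCreate, toClose := reconcile(desired, actual)
	fmt.Println("create:", toCreate) // create: [repoA]
	fmt.Println("close:", toClose)   // close: [repoC]
}
```

The key design property shown here is that the controller never needs to be told *what to do*, only *what should exist*; it can therefore be re-run safely after any external change or transient failure.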
These docs explain more about Kubernetes’ design: