diff --git a/docs/self-hosted/deploy/docker-compose/configuration.mdx b/docs/self-hosted/deploy/docker-compose/configuration.mdx
index c1602205d..c39338852 100644
--- a/docs/self-hosted/deploy/docker-compose/configuration.mdx
+++ b/docs/self-hosted/deploy/docker-compose/configuration.mdx
@@ -54,7 +54,7 @@ See ["Environment variables in Compose"](https://docs.docker.com/compose/environ
Sourcegraph supports HTTP tracing to help troubleshoot issues. See [Tracing](/self-hosted/observability/tracing) for details.

-The base docker-compose.yaml file enables the bundled [otel-collector](https://sourcegraph.com/search?q=repo:%5Egithub%5C.com/sourcegraph/deploy-sourcegraph-docker$+file:docker-compose/docker-compose.yaml+content:%22++otel-collector:%22&patternType=keyword) by default, but a tracing backend needs to be deployed or configured to see HTTP traces.
+The base docker-compose.yaml file enables the bundled [otel-collector](https://sourcegraph.com/search?q=repo:%5Egithub%5C.com/sourcegraph/deploy-sourcegraph-docker$+file:docker-compose/docker-compose.yaml+content:%22++otel-collector:%22&patternType=keyword) by default, but a tracing backend needs to be deployed or configured to see traces.

To enable tracing on your instance, you'll need to either:

@@ -65,7 +65,7 @@ Once a tracing backend has been deployed, see our [Tracing](/self-hosted/observa

### Deploy the bundled Jaeger

-To deploy the bundled Jaeger web UI to see HTTP trace data, add [Jaeger's docker-compose.yaml override file](https://github.com/sourcegraph/deploy-sourcegraph-docker/blob/main/docker-compose/jaeger/docker-compose.yaml) to your deployment command.
+To deploy the bundled Jaeger web UI to see trace data, add [Jaeger's docker-compose.yaml override file](https://github.com/sourcegraph/deploy-sourcegraph-docker/blob/main/docker-compose/jaeger/docker-compose.yaml) to your deployment command.

```bash
docker compose \
@@ -77,13 +77,13 @@ docker compose \

### Configure an external tracing backend

-The bundled otel-collector can be configured to export HTTP traces to an OTel-compatible backend of your choosing.
+The bundled otel-collector can be configured to export traces to an OTel-compatible backend of your choosing.

To customize the otel-collector config file:

-- Create a copy of the default config in [otel-collector/config.yaml](https://github.com/sourcegraph/deploy-sourcegraph-docker/blob/main/otel-collector/config.yaml)
-- Follow the [OpenTelemetry collector configuration guidance](/self-hosted/observability/opentelemetry)
-- Edit your `docker-compose.override.yaml` file to mount your custom config file to the `otel-collector` container:
+- Create a copy of the default config in [otel-collector/config.yaml](https://github.com/sourcegraph/deploy-sourcegraph-docker/blob/main/otel-collector/config.yaml)
+- Follow the [OpenTelemetry collector configuration guidance](/self-hosted/observability/opentelemetry)
+- Edit your `docker-compose.override.yaml` file to mount your custom config file to the `otel-collector` container:

```yaml
services:
@@ -99,10 +99,10 @@ services:

Provide your `gitserver` container with SSH / Git configuration needed to connect to some code hosts, by mounting a directory that contains the needed config files into the `gitserver` container, ex.

-- `.ssh/config`
-- `.ssh/id_rsa.pub`
-- `.ssh/id_rsa`
-- `.ssh/known_hosts`
+- `.ssh/config`
+- `.ssh/id_rsa.pub`
+- `.ssh/id_rsa`
+- `.ssh/known_hosts`

You can also provide other files like `.netrc`, `.gitconfig`, etc. at their respective paths, if needed.
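For illustration, a minimal `docker-compose.override.yaml` along these lines could mount such a directory. This is a sketch only: the `gitserver-0` service name and the in-container home path are assumptions, so confirm both against your base docker-compose.yaml before applying it.

```yaml
# docker-compose.override.yaml (sketch; not part of the diff above)
# Assumptions: the gitserver service is named "gitserver-0" and the container
# user's home directory is /home/sourcegraph. Verify both in your base file.
services:
  gitserver-0:
    volumes:
      - './gitserver/ssh:/home/sourcegraph/.ssh'
```

Mounting the directory read-only (append `:ro` to the volume entry) is a reasonable hardening choice when the code-host credentials only need to be read.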
diff --git a/docs/self-hosted/deploy/kubernetes/configure.mdx b/docs/self-hosted/deploy/kubernetes/configure.mdx
index e7aae1c6e..a71d78aa2 100644
--- a/docs/self-hosted/deploy/kubernetes/configure.mdx
+++ b/docs/self-hosted/deploy/kubernetes/configure.mdx
@@ -109,10 +109,10 @@ cAdvisor requires a service account and certain permissions to access and gather

To deploy cAdvisor with privileged access, include the following:

-- [monitoring base resources](#monitoring-stack)
-- [monitoring/privileged component](#monitoring-stack)
-- [privileged component](#privileged)
-- [cadvisor component](#deploy-cadvisor)
+- [monitoring base resources](#monitoring-stack)
+- [monitoring/privileged component](#monitoring-stack)
+- [privileged component](#privileged)
+- [cadvisor component](#deploy-cadvisor)

```yaml
# instances/$INSTANCE_NAME/kustomization.yaml
@@ -187,12 +187,12 @@ Once a tracing backend has been deployed, see our [Tracing](/self-hosted/observa

### Deploy the bundled OpenTelemetry Collector and Jaeger

-The quickest way to get started with HTTP tracing is by deploying our bundled OTel and Jaeger containers together.
+The quickest way to get started with HTTP tracing is by deploying our bundled OTel and Jaeger containers together.
Include the `tracing` component to deploy both OpenTelemetry and Jaeger together. This component also configures the following services:

-- `otel-collector` to export to this Jaeger instance
-- `grafana` to get metrics from this Jaeger instance
+- `otel-collector` to export to this Jaeger instance
+- `grafana` to get metrics from this Jaeger instance

```yaml
# instances/$INSTANCE_NAME/kustomization.yaml
@@ -216,7 +216,7 @@ components:

#### Configure a tracing backend

-Follow these steps to configure the otel-collector to export traces to an external OTel-compatible backend:
+Follow these steps to configure the otel-collector to export traces to an external OTel-compatible backend:

1. Create a subdirectory called 'patches' within the directory of your overlay
2. Copy and paste the [base/otel-collector/otel-collector.ConfigMap.yaml file](https://sourcegraph.com/github.com/sourcegraph/deploy-sourcegraph-k8s@master/-/tree/base/otel-collector/otel-collector.ConfigMap.yaml) to the new [patches subdirectory](/self-hosted/deploy/kubernetes/kustomize/#patches-directory)
@@ -352,11 +352,11 @@ $ cp -R components/custom/resources instances/$INSTANCE_NAME/custom-resources

For example, the following patches update the resources for:

-- gitserver
-  - increase replica count to 2
-  - adjust resources limits and requests
-- pgsql
-  - increase storage size to 500Gi
+- gitserver
+  - increase replica count to 2
+  - adjust resources limits and requests
+- pgsql
+  - increase storage size to 500Gi

```yaml
# instances/$INSTANCE_NAME/custom-resources/kustomization.yaml
@@ -454,8 +454,8 @@ components:

The component takes care of creating a new storage class named `sourcegraph` with the following configurations:

-- Provisioner: pd.csi.storage.gke.io
-- SSD: types: pd-ssd
+- Provisioner: pd.csi.storage.gke.io
+- SSD: types: pd-ssd

It also updates the storage class name for all resources to `sourcegraph`.

@@ -467,8 +467,8 @@
**Step 2**: Include one of the AWS storage class components in your overlay: [storage-class/aws/eks](https://sourcegraph.com/github.com/sourcegraph/deploy-sourcegraph-k8s/-/tree/components/storage-class/aws/eks) or [storage-class/aws/ebs](https://sourcegraph.com/github.com/sourcegraph/deploy-sourcegraph-k8s/-/tree/components/storage-class/aws/ebs) -- The [storage-class/aws/ebs-csi](https://sourcegraph.com/github.com/sourcegraph/deploy-sourcegraph-k8s/-/tree/components/storage-class/aws/eks) component is configured with the `ebs.csi.aws.com` storage class provisioner for clusters with self-managed Amazon EBS Container Storage Interface driver installed -- The [storage-class/aws/aws-ebs](https://sourcegraph.com/github.com/sourcegraph/deploy-sourcegraph-k8s/-/tree/components/storage-class/aws/ebs) component is configured with the `kubernetes.io/aws-ebs` storage class provisioner for clusters with the [AWS EBS CSI driver installed as Amazon EKS add-on](https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html) +- The [storage-class/aws/ebs-csi](https://sourcegraph.com/github.com/sourcegraph/deploy-sourcegraph-k8s/-/tree/components/storage-class/aws/eks) component is configured with the `ebs.csi.aws.com` storage class provisioner for clusters with self-managed Amazon EBS Container Storage Interface driver installed +- The [storage-class/aws/aws-ebs](https://sourcegraph.com/github.com/sourcegraph/deploy-sourcegraph-k8s/-/tree/components/storage-class/aws/ebs) component is configured with the `kubernetes.io/aws-ebs` storage class provisioner for clusters with the [AWS EBS CSI driver installed as Amazon EKS add-on](https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html) ```yaml # instances/$INSTANCE_NAME/kustomization.yaml @@ -497,10 +497,10 @@ components: This component creates a new storage class named `sourcegraph` in your cluster with the following configurations: -- provisioner: disk.csi.azure.com -- parameters.storageaccounttype: Premium_LRS - - This configures SSDs and is highly recommended. - - **A Premium VM is required.** +- provisioner: disk.csi.azure.com +- parameters.storageaccounttype: Premium_LRS + - This configures SSDs and is highly recommended. + - **A Premium VM is required.** [Additional documentation](https://docs.microsoft.com/en-us/azure/aks/csi-storage-drivers) for more information. @@ -688,9 +688,9 @@ data: **Step 3**: Configure the TLS settings of your Ingress by adding the following variables to your [buildConfig.yaml](/self-hosted/deploy/kubernetes/kustomize/#buildconfig-yaml) file: -- **TLS_HOST**: your domain name -- **TLS_INGRESS_CLASS_NAME**: ingress class name required by your cluster-issuer -- **TLS_CLUSTER_ISSUER**: name of the cluster-issuer +- **TLS_HOST**: your domain name +- **TLS_INGRESS_CLASS_NAME**: ingress class name required by your cluster-issuer +- **TLS_CLUSTER_ISSUER**: name of the cluster-issuer Example: @@ -878,13 +878,13 @@ Add a network rule that allows incoming traffic on port 30080 (HTTP) to at least ### Google Cloud Platform Firewall -- Expose the necessary ports. +- Expose the necessary ports. 
```bash
$ gcloud compute --project=$PROJECT firewall-rules create sourcegraph-frontend-http --direction=INGRESS --priority=1000 --network=default --action=ALLOW --rules=tcp:30080
```

-- Include the nodeport component to change the type of the `sourcegraph-frontend` service from `ClusterIP` to `NodePort` with the `nodeport` component:
+- Include the `nodeport` component to change the type of the `sourcegraph-frontend` service from `ClusterIP` to `NodePort`:

```yaml
# instances/$INSTANCE_NAME/kustomization.yaml
components:
- ../../components/network/nodeport/30080
```

-- Directly applying this change to a running service [will fail](https://github.com/kubernetes/kubernetes/issues/42282). You must first delete the old service before redeploying a new one (with a few seconds of downtime):
+- Directly applying this change to a running service [will fail](https://github.com/kubernetes/kubernetes/issues/42282). You must first delete the old service before redeploying a new one (with a few seconds of downtime):

```bash
$ kubectl delete svc sourcegraph-frontend
```

-- Find a node name.
+- Find a node name.

```bash
$ kubectl get pods -l app=sourcegraph-frontend -o=custom-columns=NODE:.spec.nodeName
```

-- Get the EXTERNAL-IP address (will be ephemeral unless you [make it static](https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#promote_ephemeral_ip)).
+- Get the EXTERNAL-IP address (will be ephemeral unless you [make it static](https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#promote_ephemeral_ip)).

```bash
$ kubectl get node $NODE -o wide
@@ -1079,15 +1079,15 @@ configMapGenerator:

Sourcegraph supports specifying an external Redis server with these environment variables:

-- **REDIS_CACHE_ENDPOINT**=[redis-cache:6379](https://sourcegraph.com/search?q=context:global+repo:%5Egithub%5C.com/sourcegraph/sourcegraph%24++REDIS_CACHE_ENDPOINT+AND+REDIS_STORE_ENDPOINT+-file:doc+file:internal&patternType=literal) for caching information.
-- **REDIS_STORE_ENDPOINT**=[redis-store:6379](https://sourcegraph.com/search?q=context:global+repo:%5Egithub%5C.com/sourcegraph/sourcegraph%24++REDIS_CACHE_ENDPOINT+AND+REDIS_STORE_ENDPOINT+-file:doc+file:internal&patternType=literal) for storing information (session data and job queues).
+- **REDIS_CACHE_ENDPOINT**=[redis-cache:6379](https://sourcegraph.com/search?q=context:global+repo:%5Egithub%5C.com/sourcegraph/sourcegraph%24++REDIS_CACHE_ENDPOINT+AND+REDIS_STORE_ENDPOINT+-file:doc+file:internal&patternType=literal) for caching information.
+- **REDIS_STORE_ENDPOINT**=[redis-store:6379](https://sourcegraph.com/search?q=context:global+repo:%5Egithub%5C.com/sourcegraph/sourcegraph%24++REDIS_CACHE_ENDPOINT+AND+REDIS_STORE_ENDPOINT+-file:doc+file:internal&patternType=literal) for storing information (session data and job queues).
When using an external Redis server, the corresponding environment variable must also be added to the following services:

-- `sourcegraph-frontend`
-- `gitserver`
-- `searcher`
-- `worker`
+- `sourcegraph-frontend`
+- `gitserver`
+- `searcher`
+- `worker`

**Step 1**: Include the `services/redis` component in your components:

diff --git a/docs/self-hosted/deploy/kubernetes/index.mdx b/docs/self-hosted/deploy/kubernetes/index.mdx
index 0c47a8613..10a95dd28 100644
--- a/docs/self-hosted/deploy/kubernetes/index.mdx
+++ b/docs/self-hosted/deploy/kubernetes/index.mdx
@@ -5,8 +5,8 @@ Helm offers a simple deployment process on Kubernetes using well known and stand

## Requirements

-- [Helm 3 CLI](https://helm.sh/docs/intro/install/)
-- Kubernetes 1.19 or greater
+- [Helm 3 CLI](https://helm.sh/docs/intro/install/)
+- Kubernetes 1.19 or greater

@@ -27,19 +27,19 @@ Our Helm chart has a lot of sensible defaults baked into the values.yaml so that

1. Prepare any required customizations

-- Most environments will likely require some customizations to the default Helm chart values. See _[Configuration](#configuration)_ for more information.
-- Additionally, resource allocations for individual services may need to be adjusted. See our _[Resource Estimator](/self-hosted/deploy/resource-estimator)_ for more information.
+- Most environments will likely require some customizations to the default Helm chart values. See _[Configuration](#configuration)_ for more information.
+- Additionally, resource allocations for individual services may need to be adjusted. See our _[Resource Estimator](/self-hosted/deploy/resource-estimator)_ for more information.

2. Review the customized Helm chart

-- There are [three mechanisms](#reviewing-changes) that can be used to review any customizations made, this is an optional step, but may be useful the first time you deploy Sourcegraph, for peace of mind.
+- There are [three mechanisms](#reviewing-changes) that can be used to review any customizations made; this is an optional step, but it may be useful the first time you deploy Sourcegraph, for peace of mind.

3. Follow the relevant platform-specific guide below to deploy Sourcegraph to your environment:

-- [Google GKE](#configure-sourcegraph-on-google-kubernetes-engine-gke)
-- [AWS EKS](#configure-sourcegraph-on-elastic-kubernetes-service-eks)
-- [Azure AKS](#configure-sourcegraph-on-azure-managed-kubernetes-service-aks)
-- [Other cloud providers or on-prem](#configure-sourcegraph-on-other-cloud-providers-or-on-prem)
+- [Google GKE](#configure-sourcegraph-on-google-kubernetes-engine-gke)
+- [AWS EKS](#configure-sourcegraph-on-elastic-kubernetes-service-eks)
+- [Azure AKS](#configure-sourcegraph-on-azure-managed-kubernetes-service-aks)
+- [Other cloud providers or on-prem](#configure-sourcegraph-on-other-cloud-providers-or-on-prem)

## Quickstart

@@ -397,7 +397,7 @@ jaeger:

#### Configure OpenTelemetry Collector to use an external tracing backend

-To configure the bundled otel-collector to export traces to an external OTel-compatible backend, you you can customize the otel-collector's config file directly in your Helm values `override.yaml` file.
+To configure the bundled otel-collector to export traces to an external OTel-compatible backend, you can customize the otel-collector's config file directly in your Helm values `override.yaml` file.
For the specific configurations to set, see our [OpenTelemetry](/self-hosted/observability/opentelemetry) page.
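As a concrete sketch of such an `override.yaml`, mirroring the `openTelemetry` exporter structure shown in the hunks below; the OTLP gRPC endpoint here is a placeholder, not a real backend, and plaintext transport is shown only for brevity:

```yaml
# override.yaml (sketch; replace the endpoint with your own OTLP gRPC backend)
openTelemetry:
  config:
    traces:
      exporters:
        otlp:
          endpoint: 'otel-backend.example.com:4317'
          tls:
            insecure: true
```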
@@ -423,8 +423,8 @@ openTelemetry: config: traces: exporters: - jaeger: - endpoint: '$JAEGER_HOST:14250' + otlp/jaeger: + endpoint: '$JAEGER_HOST:4317' tls: insecure: true ``` @@ -457,8 +457,8 @@ openTelemetry: traces: exportersTlsSecretName: otel-collector-exporters-tls exporters: - jaeger: - endpoint: '$JAEGER_HOST:14250' + otlp/jaeger: + endpoint: '$JAEGER_HOST:4317' tls: cert_file: /tls/file.cert key_file: /tls/file.key @@ -483,10 +483,10 @@ openTelemetry: This section is aimed at providing high-level guidance on deploying Sourcegraph via Helm on major Cloud providers. In general, you need the following to get started: -- A working Kubernetes cluster, v1.19 or higher -- The ability to provision persistent volumes, e.g. have Block Storage [CSI storage driver](https://kubernetes-csi.github.io/docs/drivers.html) installed -- An Ingress Controller installed, e.g. platform native ingress controller, [NGINX Ingress Controller]. -- The ability to create DNS records for Sourcegraph, e.g. `sourcegraph.company.com` +- A working Kubernetes cluster, v1.19 or higher +- The ability to provision persistent volumes, e.g. have Block Storage [CSI storage driver](https://kubernetes-csi.github.io/docs/drivers.html) installed +- An Ingress Controller installed, e.g. platform native ingress controller, [NGINX Ingress Controller]. +- The ability to create DNS records for Sourcegraph, e.g. `sourcegraph.company.com` ### Configure Sourcegraph on Google Kubernetes Engine (GKE) @@ -636,8 +636,8 @@ Now the deployment is complete. More information on configuring the Sourcegraph 1. You need to have a EKS cluster (>=1.19) with the following addons enabled: -- [AWS Load Balancer Controller](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html) -- [AWS EBS CSI driver](https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html) +- [AWS Load Balancer Controller](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html) +- [AWS EBS CSI driver](https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html) You may consider deploying your own Ingress Controller instead of the ALB @@ -727,8 +727,8 @@ Now the deployment is complete. More information on configuring the Sourcegraph #### References -- [Enable TLS with AWS-managed certificate](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/ingress/annotations/#ssl) -- [Supported AWS load balancer annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/ingress/annotations) +- [Enable TLS with AWS-managed certificate](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/ingress/annotations/#ssl) +- [Supported AWS load balancer annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/ingress/annotations) ### Configure Sourcegraph on Azure Managed Kubernetes Service (AKS) @@ -736,8 +736,8 @@ Now the deployment is complete. More information on configuring the Sourcegraph 1. 
You need to have an AKS cluster (>=1.19) with the following addons enabled: -- [Azure Application Gateway Ingress Controller](https://docs.microsoft.com/en-us/azure/application-gateway/ingress-controller-install-new) -- [Azure Disk CSI driver](https://docs.microsoft.com/en-us/azure/aks/csi-storage-drivers) +- [Azure Application Gateway Ingress Controller](https://docs.microsoft.com/en-us/azure/application-gateway/ingress-controller-install-new) +- [Azure Disk CSI driver](https://docs.microsoft.com/en-us/azure/aks/csi-storage-drivers) You may consider using your custom Ingress Controller instead of Application @@ -823,9 +823,9 @@ Now the deployment is complete. More information on configuring the Sourcegraph #### References -- [Expose an AKS service over HTTP or HTTPS using Application Gateway](https://docs.microsoft.com/en-us/azure/application-gateway/ingress-controller-expose-service-over-http-https) -- [Supported Azure Application Gateway Ingress Controller annotations](https://azure.github.io/application-gateway-kubernetes-ingress/annotations/) -- [What is Application Gateway Ingress Controller?](https://docs.microsoft.com/en-us/azure/application-gateway/ingress-controller-overview) +- [Expose an AKS service over HTTP or HTTPS using Application Gateway](https://docs.microsoft.com/en-us/azure/application-gateway/ingress-controller-expose-service-over-http-https) +- [Supported Azure Application Gateway Ingress Controller annotations](https://azure.github.io/application-gateway-kubernetes-ingress/annotations/) +- [What is Application Gateway Ingress Controller?](https://docs.microsoft.com/en-us/azure/application-gateway/ingress-controller-overview) ### Configure Sourcegraph on other Cloud providers or on-prem @@ -833,8 +833,8 @@ Now the deployment is complete. More information on configuring the Sourcegraph 1. You need to have a Kubernetes cluster (>=1.19) with the following components installed: -- [x] Ingress Controller, e.g. Cloud providers-native solution, [NGINX Ingress Controller] -- [x] Block Storage CSI driver +- [x] Ingress Controller, e.g. Cloud providers-native solution, [NGINX Ingress Controller] +- [x] Block Storage CSI driver 2. Your account should have sufficient access privileges, equivalent to the `cluster-admin` ClusterRole. 3. Connect to your cluster (via either the console or the command line using the relevant CLI tool) and ensure the cluster is up and running using: `kubectl get nodes` (several `ready` nodes should be listed) @@ -1012,8 +1012,8 @@ When all pods have restarted and show as Running, you can browse to your Sourceg **Step 1:** Check Upgrade Readiness: -- Check the [upgrade notes](https://sourcegraph.com/changelog/self-hosted/kubernetes) for the version range you're passing through. -- Check the `Site Admin > Updates` page to determine [upgrade readiness](/self-hosted/updates/#upgrade-readiness). +- Check the [upgrade notes](https://sourcegraph.com/changelog/self-hosted/kubernetes) for the version range you're passing through. +- Check the `Site Admin > Updates` page to determine [upgrade readiness](/self-hosted/updates/#upgrade-readiness). 
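Once these readiness checks pass, the upgrade itself is typically a `helm upgrade` run against your overrides. A hedged sketch, assuming the chart was installed as a release named `sourcegraph` in the `sourcegraph` namespace; adjust the names and the version placeholder to your environment:

```bash
# Sketch only: release name, namespace, and <target-version> are assumptions.
helm upgrade --install --values override.yaml --version <target-version> \
  sourcegraph sourcegraph/sourcegraph --namespace sourcegraph
```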
**Step 2:** @@ -1033,15 +1033,15 @@ When all pods have restarted and show as Running, you can browse to your Sourceg Scale down `deployments` and `statefulSets` that access the database, _this step prevents services from accessing the database while schema migrations are in process._ The following services must have their replicas scaled to 0: -- Deployments (e.g., `kubectl scale deployment --replicas=0`) -- precise-code-intel-worker -- searcher -- sourcegraph-frontend -- sourcegraph-frontend-internal -- worker -- Stateful sets (e.g., `kubectl scale sts --replicas=0`): -- gitserver -- indexed-search +- Deployments (e.g., `kubectl scale deployment --replicas=0`) +- precise-code-intel-worker +- searcher +- sourcegraph-frontend +- sourcegraph-frontend-internal +- worker +- Stateful sets (e.g., `kubectl scale sts --replicas=0`): +- gitserver +- indexed-search The following convenience commands provide an example of scaling down the necessary services in a single command: diff --git a/docs/self-hosted/observability/opentelemetry.mdx b/docs/self-hosted/observability/opentelemetry.mdx index 5372bb3ed..090dee93e 100644 --- a/docs/self-hosted/observability/opentelemetry.mdx +++ b/docs/self-hosted/observability/opentelemetry.mdx @@ -2,11 +2,11 @@ > This page is a deep dive into OpenTelemetry and customizing it. To get started with HTTP Tracing, see the [Tracing](/self-hosted/observability/tracing) page. -[OpenTelemetry](https://opentelemetry.io/) (OTel) is an industry-standard toolset to handle observability data, ex. metrics, logs, and traces. +[OpenTelemetry](https://opentelemetry.io/) (OTEL) is an industry-standard toolset to handle observability data, ex. metrics, logs, and traces. To handle this data, Sourcegraph deployments include a bundled [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) (otel-collector) container, which can be configured to ingest, process, and export observability data to a backend of your choice. This approach offers great flexibility. -> NOTE: Sourcegraph currently uses OTel for HTTP Traces, and may use it for metrics and logs in the future. +> NOTE: Sourcegraph currently uses OTEL for tracing, and may use it for metrics and logs in the future. For an in-depth explanation of the parts that compose a full collector pipeline, see OpenTelemetry's [documentation](https://opentelemetry.io/docs/collector/configuration/). @@ -16,20 +16,20 @@ Sourcegraph's bundled otel-collector is deployed via Docker image, and is config For details on how to deploy the otel-collector, and where to find its configuration file, refer to the docs page specific to your deployment type: -- [Kubernetes with Helm](/self-hosted/deploy/kubernetes#configure-opentelemetry-collector-to-use-an-external-tracing-backend) -- [Kubernetes with Kustomize](/self-hosted/deploy/kubernetes/configure#deploy-opentelemetry-collector-to-use-an-external-tracing-backend) -- [Docker Compose](/self-hosted/deploy/docker-compose/configuration#configure-an-external-tracing-backend) +- [Kubernetes with Helm](/self-hosted/deploy/kubernetes#configure-opentelemetry-collector-to-use-an-external-tracing-backend) +- [Kubernetes with Kustomize](/self-hosted/deploy/kubernetes/configure#deploy-opentelemetry-collector-to-use-an-external-tracing-backend) +- [Docker Compose](/self-hosted/deploy/docker-compose/configuration#configure-an-external-tracing-backend) ## HTTP Tracing Backends -Sourcegraph containers export HTTP traces in OTel format to the bundled otel-collector. 
-For more information about HTTP traces, see the [Tracing](/self-hosted/observability/tracing) page.
+Sourcegraph containers export traces in OTel format to the bundled otel-collector.
+For more information about traces, see the [Tracing](/self-hosted/observability/tracing) page.

-The bundled otel-collector includes the following exporters, which support HTTP traces in OTel format:
+The bundled otel-collector includes the following exporters, which support traces in OTel format:

-- [OTLP-compatible backends](#otlp-compatible-backends), ex. Honeycomb, Grafana Tempo
-- [Jaeger](#jaeger)
-- [Google Cloud](#google-cloud)
+- [OTLP-compatible backends](#otlp-compatible-backends), ex. Honeycomb, Grafana Tempo
+- [Jaeger](#jaeger)
+- [Google Cloud](#google-cloud)

Basic configuration for each tracing backend type is described below.

@@ -45,7 +45,7 @@ receivers:
    http:

exporters:
-  logging: # Export HTTP traces as log events
+  logging: # Export traces as log events
    loglevel: warn
    sampling_initial: 5
    sampling_thereafter: 200
@@ -161,7 +161,7 @@ service:

## Exporters

Exporters send observability data from the otel-collector to the needed backend(s).
-Each exporter can support one or more OTel signals.
+Each exporter can support one or more OTel signals.
This section outlines some common exporter configurations. For details, see OpenTelemetry's [exporters](https://opentelemetry.io/docs/collector/configuration/#exporters) page.

@@ -171,8 +171,8 @@ This section outlines some common exporter configurations. For details, see Open

Backends compatible with the [OpenTelemetry Protocol (OTLP)](https://opentelemetry.io/docs/specs/otlp/) include services such as:

-- [Honeycomb](https://docs.honeycomb.io/getting-data-in/opentelemetry-overview/)
-- [Grafana Tempo](https://grafana.com/blog/2021/04/13/how-to-send-traces-to-grafana-clouds-tempo-service-with-opentelemetry-collector/)
+- [Honeycomb](https://docs.honeycomb.io/getting-data-in/opentelemetry-overview/)
+- [Grafana Tempo](https://grafana.com/blog/2021/04/13/how-to-send-traces-to-grafana-clouds-tempo-service-with-opentelemetry-collector/)

OTLP-compatible backends typically accept the [OTLP gRPC protocol](#otlp-grpc-backends), but may require the [OTLP HTTP protocol](#otlp-http-backends) instead.

@@ -209,14 +209,13 @@ If you're looking for information about Sourcegraph's bundled Jaeger instance, h

Refer to the [Jaeger](https://opentelemetry.io/docs/languages/js/exporters/#jaeger) documentation for options.

-If you must use your own Jaeger instance, and if the bundled otel-collector's basic configuration with the Jaeger OTel exporter enabled meets your needs, configure the otel-collector's startup command to `/usr/bin/otelcol-sourcegraph --config=/etc/otel-collector/configs/jaeger.yaml`. Note that this requires the environment variable `$JAEGER_HOST` to be set on the otel-collector service / container:
+If you must use your own Jaeger instance, and if the bundled otel-collector's basic configuration with the Jaeger OTLP exporter enabled meets your needs, configure the otel-collector's startup command to `/usr/bin/otelcol-sourcegraph --config=/etc/otel-collector/configs/jaeger.yaml`.
Note that this requires the environment variable `$JAEGER_HOST` to be set on the otel-collector service / container:

```yaml
# otel-collector config.yaml
exporters:
-  jaeger:
-    # Default Jaeger gRPC server
-    endpoint: '$JAEGER_HOST:14250'
+  otlp/jaeger:
+    endpoint: '$JAEGER_HOST:4317'
    tls:
      insecure: true
```

@@ -225,7 +224,7 @@ The Sourcegraph frontend automatically proxies Jaeger's web UI to make it availa

### Google Cloud

-If you run Sourcegraph in GCP and wish to export your HTTP traces to Google Cloud Trace, otel-collector can use project authentication. See the [Google Cloud Exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/googlecloudexporter/README.md) documentation for available options.
+If you run Sourcegraph in GCP and wish to export your traces to Google Cloud Trace, otel-collector can use project authentication. See the [Google Cloud Exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/googlecloudexporter/README.md) documentation for available options.

```yaml
exporters:
diff --git a/docs/self-hosted/observability/tracing.mdx b/docs/self-hosted/observability/tracing.mdx
index 5d6ccd217..6752f3089 100644
--- a/docs/self-hosted/observability/tracing.mdx
+++ b/docs/self-hosted/observability/tracing.mdx
@@ -1,8 +1,8 @@

# HTTP Tracing

-HTTP traces are a powerful debugging tool to help you see how your Sourcegraph requests are processed under the hood - like having X-ray vision into how long each part takes and where errors occur.
+Traces are a powerful debugging tool to help you see how your Sourcegraph requests are processed under the hood - like having X-ray vision into how long each part takes and where errors occur.

-To enable HTTP traces on your Sourcegraph Instance:
+To enable traces on your Sourcegraph instance:

1. Deploy and / or configure a tracing backend

@@ -16,9 +16,9 @@ The quickest way to get started with HTTP tracing is to deploy our bundled Jaege

To deploy our bundled Jaeger backend, follow the instructions for your deployment type:

-- [Kubernetes with Helm](/self-hosted/deploy/kubernetes#enable-the-bundled-jaeger-deployment)
-- [Kubernetes with Kustomize](/self-hosted/deploy/kubernetes/configure#deploy-the-bundled-opentelemetry-collector-and-jaeger)
-- [Docker Compose](/self-hosted/deploy/docker-compose/configuration#deploy-the-bundled-jaeger)
+- [Kubernetes with Helm](/self-hosted/deploy/kubernetes#enable-the-bundled-jaeger-deployment)
+- [Kubernetes with Kustomize](/self-hosted/deploy/kubernetes/configure#deploy-the-bundled-opentelemetry-collector-and-jaeger)
+- [Docker Compose](/self-hosted/deploy/docker-compose/configuration#deploy-the-bundled-jaeger)

Then configure your Site Configuration:

@@ -40,14 +40,14 @@ Then configure your Site Configuration:

Where:

-- `{{ .ExternalURL }}` is the value of the `externalURL` setting in your Sourcegraph instance's Site Configuration
-- `{{ .TraceID }}` is the TraceID which gets generated while processing the request
+- `{{ .ExternalURL }}` is the value of the `externalURL` setting in your Sourcegraph instance's Site Configuration
+- `{{ .TraceID }}` is the TraceID which gets generated while processing the request

Once deployed, the Jaeger web UI will be accessible at `/-/debug/jaeger`

### External OpenTelemetry-Compatible Platforms

-If you prefer to use an external, OTel-compatible platform, you can configure Sourcegraph to export traces to it instead.
See our [OpenTelemetry documentation](/self-hosted/observability/opentelemetry) for further details.
+If you prefer to use an external, OTel-compatible platform, you can configure Sourcegraph to export traces to it instead. See our [OpenTelemetry documentation](/self-hosted/observability/opentelemetry) for further details.

Then configure your Site Configuration:

@@ -69,7 +69,7 @@ For example, if you export your traces to [Honeycomb](/self-hosted/observability

Where:

-- `{{ .TraceID }}` is the TraceID which gets generated while processing the request
+- `{{ .TraceID }}` is the TraceID which gets generated while processing the request

## How to use traces

@@ -99,9 +99,9 @@ The response will include an `x-trace-url` header, which will include a URL to t

## Trace Formats

-As the OTel (OpenTelemetry) HTTP trace format has gained broad industry adoption, we've centralized our support for HTTP traces on the OTel format, whether with our bundled Jaeger, or an external backend of your choice.
+As the OTel (OpenTelemetry) trace format has gained broad industry adoption, we've centralized our support for traces on the OTel format, whether with our bundled Jaeger, or an external backend of your choice.

-As Jaeger has also switched to the OTel format, we've removed support for Jaeger's deprecated format.
+As Jaeger has also switched to the OTel format, we've removed support for Jaeger's deprecated format.
We've also removed support for Go's net/trace format.

## Basic sampling modes

@@ -115,11 +115,11 @@ Three basic sampling modes are available in the `observability.tracing` Site Con
}
```

-- `selective`
-  - Default
-  - Only exports a trace when the `trace=1` parameter is in the request URL
-- `all`
-  - Exports traces for all requests
-  - Not recommended, as it can be memory and network intensive, while very few traces are actually needed
-- `none`
-  - Disables tracing
+- `selective`
+  - Default
+  - Only exports a trace when the `trace=1` parameter is in the request URL
+- `all`
+  - Exports traces for all requests
+  - Not recommended, as it can be memory and network intensive, while very few traces are actually needed
+- `none`
+  - Disables tracing
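For reference, selecting one of these modes in Site Configuration can look like the following sketch; it assumes the `observability.tracing` block excerpted above and shows only the sampling key:

```json
{
  "observability.tracing": {
    "sampling": "selective"
  }
}
```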