Merged
20 changes: 10 additions & 10 deletions docs/self-hosted/deploy/docker-compose/configuration.mdx
@@ -54,7 +54,7 @@

Sourcegraph supports HTTP tracing to help troubleshoot issues. See [Tracing](/self-hosted/observability/tracing) for details.

-The base docker-compose.yaml file enables the bundled [otel-collector](https://sourcegraph.com/search?q=repo:%5Egithub%5C.com/sourcegraph/deploy-sourcegraph-docker$+file:docker-compose/docker-compose.yaml+content:%22++otel-collector:%22&patternType=keyword) by default, but a tracing backend needs to be deployed or configured to see HTTP traces.
+The base docker-compose.yaml file enables the bundled [otel-collector](https://sourcegraph.com/search?q=repo:%5Egithub%5C.com/sourcegraph/deploy-sourcegraph-docker$+file:docker-compose/docker-compose.yaml+content:%22++otel-collector:%22&patternType=keyword) by default, but a tracing backend needs to be deployed or configured to see traces.

To enable tracing on your instance, you'll need to either:

@@ -65,7 +65,7 @@

### Deploy the bundled Jaeger

-To deploy the bundled Jaeger web UI to see HTTP trace data, add [Jaeger's docker-compose.yaml override file](https://github.com/sourcegraph/deploy-sourcegraph-docker/blob/main/docker-compose/jaeger/docker-compose.yaml) to your deployment command.
+To deploy the bundled Jaeger web UI to see trace data, add [Jaeger's docker-compose.yaml override file](https://github.com/sourcegraph/deploy-sourcegraph-docker/blob/main/docker-compose/jaeger/docker-compose.yaml) to your deployment command.

```bash
docker compose \
```
@@ -77,13 +77,13 @@

### Configure an external tracing backend

-The bundled otel-collector can be configured to export HTTP traces to an OTel-compatible backend of your choosing.
+The bundled otel-collector can be configured to export traces to an OTEL-compatible backend of your choosing.

To customize the otel-collector config file:

- Create a copy of the default config in [otel-collector/config.yaml](https://github.com/sourcegraph/deploy-sourcegraph-docker/blob/main/otel-collector/config.yaml)
- Follow the [OpenTelemetry collector configuration guidance](/self-hosted/observability/opentelemetry)
- Edit your `docker-compose.override.yaml` file to mount your custom config file to the `otel-collector` container:

```yaml
services:
```
@@ -99,10 +99,10 @@

Provide your `gitserver` container with the SSH / Git configuration needed to connect to some code hosts by mounting a directory containing the required config files into the `gitserver` container, e.g.:

- `.ssh/config`
- `.ssh/id_rsa.pub`
- `.ssh/id_rsa`
- `.ssh/known_hosts`

You can also provide other files like `.netrc`, `.gitconfig`, etc. at their respective paths, if needed.
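
A minimal override sketch of such a mount is below. The service name `gitserver-0`, the host path, and the in-container path `/home/sourcegraph/.ssh` are assumptions for illustration — check them against your docker-compose.yaml before using:

```yaml
# docker-compose.override.yaml — illustrative only
services:
  gitserver-0:
    volumes:
      # Host directory containing .ssh/config, keys, and known_hosts.
      # Mounted read-only so the container cannot modify your keys.
      - '/mnt/sourcegraph/ssh:/home/sourcegraph/.ssh:ro'
```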

70 changes: 35 additions & 35 deletions docs/self-hosted/deploy/kubernetes/configure.mdx
@@ -109,10 +109,10 @@

To deploy cAdvisor with privileged access, include the following:

- [monitoring base resources](#monitoring-stack)
- [monitoring/privileged component](#monitoring-stack)
- [privileged component](#privileged)
- [cadvisor component](#deploy-cadvisor)

```yaml
# instances/$INSTANCE_NAME/kustomization.yaml
```
@@ -187,12 +187,12 @@

### Deploy the bundled OpenTelemetry Collector and Jaeger

-The quickest way to get started with HTTP tracing is by deploying our bundled OTel and Jaeger containers together.
+The quickest way to get started with HTTP tracing is by deploying our bundled OTEL and Jaeger containers together.

Include the `tracing` component to deploy both OpenTelemetry and Jaeger together. This component also configures the following services:

- `otel-collector` to export to this Jaeger instance
- `grafana` to get metrics from this Jaeger instance

```yaml
# instances/$INSTANCE_NAME/kustomization.yaml
```
@@ -216,7 +216,7 @@

#### Configure a tracing backend

-Follow these steps to configure the otel-collector to export traces to an external OTel-compatible backend:
+Follow these steps to configure the otel-collector to export traces to an external OTEL-compatible backend:

1. Create a subdirectory called 'patches' within the directory of your overlay
2. Copy the [base/otel-collector/otel-collector.ConfigMap.yaml file](https://sourcegraph.com/github.com/sourcegraph/deploy-sourcegraph-k8s@master/-/tree/base/otel-collector/otel-collector.ConfigMap.yaml) into the new [patches subdirectory](/self-hosted/deploy/kubernetes/kustomize/#patches-directory)
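
Steps 1 and 2 might look like the following, run from the root of a deploy-sourcegraph-k8s checkout (the `$INSTANCE_NAME` value and the guard around `cp` are illustrative):

```shell
# Create the patches subdirectory inside your overlay's instance directory.
INSTANCE_NAME=${INSTANCE_NAME:-my-instance}
mkdir -p "instances/$INSTANCE_NAME/patches"

# Copy the default collector config so it can be customized as a patch
# (the guard only matters if you run this outside the repository root).
if [ -f base/otel-collector/otel-collector.ConfigMap.yaml ]; then
  cp base/otel-collector/otel-collector.ConfigMap.yaml \
     "instances/$INSTANCE_NAME/patches/otel-collector.ConfigMap.yaml"
fi
```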
@@ -352,11 +352,11 @@

For example, the following patches update the resources for:

- gitserver
  - increase replica count to 2
  - adjust resource limits and requests
- pgsql
  - increase storage size to 500Gi

```yaml
# instances/$INSTANCE_NAME/custom-resources/kustomization.yaml
```
@@ -454,8 +454,8 @@

This component creates a new storage class named `sourcegraph` with the following configuration:

- Provisioner: pd.csi.storage.gke.io
- Disk type: pd-ssd (SSD)

It also updates the storage class name for all resources to `sourcegraph`.
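
The object this component generates is roughly equivalent to the following StorageClass — a sketch for orientation only; the manifest shipped in deploy-sourcegraph-k8s is authoritative:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sourcegraph
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd   # SSD-backed GCE persistent disks
```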

@@ -467,8 +467,8 @@

**Step 2**: Include one of the AWS storage class components in your overlay: [storage-class/aws/eks](https://sourcegraph.com/github.com/sourcegraph/deploy-sourcegraph-k8s/-/tree/components/storage-class/aws/eks) or [storage-class/aws/ebs](https://sourcegraph.com/github.com/sourcegraph/deploy-sourcegraph-k8s/-/tree/components/storage-class/aws/ebs)

- The [storage-class/aws/ebs-csi](https://sourcegraph.com/github.com/sourcegraph/deploy-sourcegraph-k8s/-/tree/components/storage-class/aws/eks) component is configured with the `ebs.csi.aws.com` storage class provisioner for clusters with the self-managed Amazon EBS Container Storage Interface (CSI) driver installed
- The [storage-class/aws/aws-ebs](https://sourcegraph.com/github.com/sourcegraph/deploy-sourcegraph-k8s/-/tree/components/storage-class/aws/ebs) component is configured with the `kubernetes.io/aws-ebs` storage class provisioner for clusters with the [AWS EBS CSI driver installed as an Amazon EKS add-on](https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html)

```yaml
# instances/$INSTANCE_NAME/kustomization.yaml
```
@@ -497,10 +497,10 @@

This component creates a new storage class named `sourcegraph` in your cluster with the following configuration:

- provisioner: disk.csi.azure.com
- parameters.storageaccounttype: Premium_LRS
  - This configures SSDs and is highly recommended.
  - **A Premium VM is required.**

See the [additional documentation](https://docs.microsoft.com/en-us/azure/aks/csi-storage-drivers) for more information.

@@ -688,9 +688,9 @@

**Step 3**: Configure the TLS settings of your Ingress by adding the following variables to your [buildConfig.yaml](/self-hosted/deploy/kubernetes/kustomize/#buildconfig-yaml) file:

- **TLS_HOST**: your domain name
- **TLS_INGRESS_CLASS_NAME**: ingress class name required by your cluster-issuer
- **TLS_CLUSTER_ISSUER**: name of the cluster-issuer

Example:
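
As a sketch only — the exact shape of buildConfig.yaml comes from your checkout, and every value below is a placeholder:

```yaml
# Fragment of instances/$INSTANCE_NAME/buildConfig.yaml — placeholder values;
# keep the surrounding structure of your existing file.
TLS_HOST: sourcegraph.example.com
TLS_INGRESS_CLASS_NAME: nginx
TLS_CLUSTER_ISSUER: letsencrypt-prod
```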

@@ -878,33 +878,33 @@

### Google Cloud Platform Firewall

- Expose the necessary ports.

```bash
$ gcloud compute --project=$PROJECT firewall-rules create sourcegraph-frontend-http --direction=INGRESS --priority=1000 --network=default --action=ALLOW --rules=tcp:30080
```

- Include the `nodeport` component to change the type of the `sourcegraph-frontend` service from `ClusterIP` to `NodePort`:

```yaml
# instances/$INSTANCE_NAME/kustomization.yaml
components:
- ../../components/network/nodeport/30080
```

- Directly applying this change to a running service [will fail](https://github.com/kubernetes/kubernetes/issues/42282). You must first delete the old service before redeploying a new one (with a few seconds of downtime):

```bash
$ kubectl delete svc sourcegraph-frontend
```

- Find a node name.

```bash
$ kubectl get pods -l app=sourcegraph-frontend -o=custom-columns=NODE:.spec.nodeName
```

- Get the EXTERNAL-IP address (will be ephemeral unless you [make it static](https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#promote_ephemeral_ip)).

```bash
$ kubectl get node $NODE -o wide
```
@@ -1079,15 +1079,15 @@

Sourcegraph supports specifying an external Redis server with these environment variables:

- **REDIS_CACHE_ENDPOINT**=[redis-cache:6379](https://sourcegraph.com/search?q=context:global+repo:%5Egithub%5C.com/sourcegraph/sourcegraph%24++REDIS_CACHE_ENDPOINT+AND+REDIS_STORE_ENDPOINT+-file:doc+file:internal&patternType=literal) for caching information.
- **REDIS_STORE_ENDPOINT**=[redis-store:6379](https://sourcegraph.com/search?q=context:global+repo:%5Egithub%5C.com/sourcegraph/sourcegraph%24++REDIS_CACHE_ENDPOINT+AND+REDIS_STORE_ENDPOINT+-file:doc+file:internal&patternType=literal) for storing information (session data and job queues).

When using an external Redis server, the corresponding environment variables must also be added to the following services:

- `sourcegraph-frontend`
- `gitserver`
- `searcher`
- `worker`
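
For illustration, a strategic-merge patch adding both variables to the frontend deployment could look like this — the deployment and container names and the endpoint values are assumptions to verify against your manifests:

```yaml
# patches/frontend-redis.yaml — illustrative strategic merge patch
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sourcegraph-frontend
spec:
  template:
    spec:
      containers:
        - name: frontend
          env:
            - name: REDIS_CACHE_ENDPOINT
              value: 'redis.example.com:6379'
            - name: REDIS_STORE_ENDPOINT
              value: 'redis.example.com:6379'
```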

**Step 1**: Include the `services/redis` component in your components:
