diff --git a/code-samples/eventing/bookstore-sample-app/solution/slack-sink/README.md b/code-samples/eventing/bookstore-sample-app/solution/slack-sink/README.md index b64e12f408a..68551c91663 100644 --- a/code-samples/eventing/bookstore-sample-app/solution/slack-sink/README.md +++ b/code-samples/eventing/bookstore-sample-app/solution/slack-sink/README.md @@ -16,7 +16,7 @@ When a CloudEvent with the type `new-review-comment` is sent to the Knative Even Install Apache Camel K operator on your cluster using any of the methods listed in [the official installation docs](https://camel.apache.org/camel-k/2.8.x/installation/installation.html). We will use the installation via Kustomize: -```sh +```bash kubectl create ns camel-k && \ kubectl apply -k github.com/apache/camel-k/install/overlays/kubernetes/descoped?ref=v2.8.0 --server-side ``` @@ -38,7 +38,7 @@ spec: Install it with one command: -```sh +```bash cat <" \ -X POST \ -H "Ce-Id: review1" \ diff --git a/code-samples/serving/knative-routing-go/README.md b/code-samples/serving/knative-routing-go/README.md index bbf236fa536..b06181397a9 100644 --- a/code-samples/serving/knative-routing-go/README.md +++ b/code-samples/serving/knative-routing-go/README.md @@ -27,7 +27,7 @@ the Login service. will refer to it as in the rest of this document 4. Check out the code: -``` +```bash go get -d github.com/knative/docs/code-samples/serving/knative-routing-go ``` @@ -35,7 +35,7 @@ go get -d github.com/knative/docs/code-samples/serving/knative-routing-go To check the domain name, run the following command: -``` +```bash kubectl get cm -n knative-serving config-domain -o yaml ``` @@ -69,7 +69,7 @@ export REPO="docker.io/" 3. Use Docker to build and push your application container: -``` +```bash # Build and push the container on your local machine. docker buildx build --platform linux/arm64,linux/amd64 -t "${REPO}/knative-routing-go" --push . 
-f code-samples/serving/knative-routing-go/Dockerfile ``` @@ -87,7 +87,7 @@ docker buildx build --platform linux/arm64,linux/amd64 -t "${REPO}/knative-routi - Run this command: - ``` + ```bash perl -pi -e "s@github.com/knative/docs/code-samples/serving@${REPO}@g" code-samples/serving/knative-routing-go/sample.yaml ``` @@ -95,7 +95,7 @@ docker buildx build --platform linux/arm64,linux/amd64 -t "${REPO}/knative-routi Deploy the Knative Serving sample: -``` +```bash kubectl apply --filename code-samples/serving/knative-routing-go/sample.yaml ``` @@ -107,13 +107,13 @@ Kubernetes service with: - Check the shared Gateway: -``` +```bash kubectl get Gateway --namespace knative-serving --output yaml ``` - Check the corresponding Kubernetes service for the shared Gateway: -``` +```bash INGRESSGATEWAY=istio-ingressgateway kubectl get svc $INGRESSGATEWAY --namespace istio-system --output yaml @@ -121,7 +121,7 @@ kubectl get svc $INGRESSGATEWAY --namespace istio-system --output yaml - Inspect the deployed Knative services with: -``` +```bash kubectl get ksvc ``` @@ -141,7 +141,12 @@ export GATEWAY_IP=`kubectl get svc $INGRESSGATEWAY --namespace istio-system \ 2. Find the `Search` service URL with: ```bash -# kubectl get route search-service --output=custom-columns=NAME:.metadata.name,URL:.status.url +kubectl get route search-service --output=custom-columns=NAME:.metadata.name,URL:.status.url +``` + +Example output: + +```text NAME URL search-service http://search-service.default.example.com ``` @@ -166,7 +171,7 @@ You should see: `Login Service is called !` 1. Apply the custom routing rules defined in `routing.yaml` file with: -``` +```bash kubectl apply --filename code-samples/serving/knative-routing-go/routing.yaml ``` @@ -182,7 +187,7 @@ like {{.Name}}-{{.Namespace}}. You can find out the format by running the command: {% endraw %} -``` +```bash kubectl get cm -n knative-serving config-network -o yaml ``` @@ -196,7 +201,7 @@ Then look for the value for `domain-template`. 
If it is 2. The `routing.yaml` file will generate a new VirtualService `entry-route` for domain `example.com` or your own domain name. View the VirtualService: -``` +```bash kubectl get VirtualService entry-route --output yaml ``` @@ -268,7 +273,7 @@ with a destination address of an externally available service. Using -``` +```bash kubectl label kservice search-service login-service networking.knative.dev/visibility=cluster-local ``` @@ -276,7 +281,7 @@ you label the services as an cluster-local services, removing access via `search and `login-service.default.example.com`. After doing so, your previous routing rule will not be routable anymore. Running -``` +```bash kubectl apply --filename code-samples/serving/knative-routing-go/routing-internal.yaml ``` diff --git a/code-samples/serving/tag-header-based-routing/README.md b/code-samples/serving/tag-header-based-routing/README.md index eb7dfd66f57..839fc8ae557 100644 --- a/code-samples/serving/tag-header-based-routing/README.md +++ b/code-samples/serving/tag-header-based-routing/README.md @@ -25,7 +25,7 @@ with Knative v0.16 and later. This feature is disabled by default. To enable this feature, run the following command: -``` +```bash kubectl patch cm config-features -n knative-serving -p '{"data":{"tag-header-based-routing":"Enabled"}}' ``` @@ -48,14 +48,14 @@ routed to the second Revision. Run the following command to set up the Knative Service and Revisions. -``` +```bash kubectl apply -f code-samples/serving/tag-header-based-routing/sample.yaml ``` ## Check the created resources Check the two created Revisions using the following command -``` +```bash kubectl get revisions ``` @@ -65,13 +65,13 @@ for the Revisions to become ready. Check the Knative Service using the following command -``` +```bash kubectl get ksvc tag-header -oyaml ``` You should see the following block which indicates the tag `rev1` is successfully added to the first Revision. 
-``` +```yaml - revisionName: tag-header-revision-1 percent: 0 tag: rev1 @@ -84,37 +84,37 @@ You should see the following block which indicates the tag `rev1` is successfull 1. Run the following command to send a request to the first Revision. - ``` + ```bash curl ${INGRESS_IP} -H "Host:tag-header.default.example.com" -H "Knative-Serving-Tag:rev1" ``` where `${INGRESS_IP}` is the IP of your ingress. You should get the following response: - ``` + ```text Hello First Revision! ``` 1. Run the following command to send requests without the `Knative-Serving-Tag` header: - ``` + ```bash curl ${INGRESS_IP} -H "Host:tag-header.default.example.com" ``` You should get the response from the second Revision: - ``` + ```text Hello Second Revision! ``` 1. Run the following command to send requests with an incorrect `Knative-Serving-Tag` header: - ``` + ```bash curl ${INGRESS_IP} -H "Host:tag-header.default.example.com" -H "Knative-Serving-Tag:wrongHeader" ``` You should get the response from the second Revision: - ``` + ```text Hello Second Revision! ``` diff --git a/docs/blog/articles/Building-Stateful-applications-with-Knative-and-Restate.md b/docs/blog/articles/Building-Stateful-applications-with-Knative-and-Restate.md index b4407e3063b..e5b599c1cc9 100644 --- a/docs/blog/articles/Building-Stateful-applications-with-Knative-and-Restate.md +++ b/docs/blog/articles/Building-Stateful-applications-with-Knative-and-Restate.md @@ -203,13 +203,13 @@ func main() { You can now build the container image using your tools, e.g. with `ko`: -```shell +```bash $ ko build main.go -B ``` And deploy it with `kn`: -```shell +```bash $ kn service create signup \ --image $MY_IMAGE_REGISTRY/main.go \ --port h2c:8080 @@ -217,13 +217,13 @@ $ kn service create signup \ Before sending requests, you need to tell Restate about your new service deployment: -```shell +```bash $ restate deployments register http://signup.default.svc ``` And this is it! 
You're now ready to send requests: -```shell +```bash $ curl http://localhost:8080/Signup/Signup --json '{"username": "slinkydeveloper", "name": "Francesco", "surname": "Guardiani", "password": "Pizza-without-pineapple"}' ``` diff --git a/docs/blog/articles/ai_functions_llama_stack.md b/docs/blog/articles/ai_functions_llama_stack.md index 37f8c385bd1..55a229a2499 100644 --- a/docs/blog/articles/ai_functions_llama_stack.md +++ b/docs/blog/articles/ai_functions_llama_stack.md @@ -31,13 +31,13 @@ For convenience I have created a Github repository that contains scripts for an For local development it is recommended to enable port-forwarding for the Llama Stack server: -```shell +```bash kubectl port-forward service/llamastackdistribution-sample-service 8321:8321 ``` Now your scripts can access it via `localhost:8321`: -```shell +```bash http localhost:8321/v1/version HTTP/1.1 200 OK @@ -58,7 +58,7 @@ _**Note:** The APIs of Llama Stack are fast evolving, but it supports a docs end Once all of the above is running you need to create your [Knative Functions](https://knative.dev/docs/functions/){:target="_blank"} project. We are using the CloudEvent template for the new [functions runtime for Python](https://github.com/knative-extensions/func-python){:target="_blank"}. -```shell +```bash func create -l python -t cloudevents inference-func ``` @@ -207,7 +207,7 @@ _**NOTE:** the docstrings were removed to keep the program compact._ We can now run our function locally by issuing `func run` on the command line. Once it is running there will be a system log like below: -```shell +```text INFO:root:Functions middleware invoking user function INFO:root:Connecting to LLama Stack INFO:httpx:HTTP Request: GET http://localhost:8321/v1/models "HTTP/1.1 200 OK" @@ -221,7 +221,7 @@ Running on host port 8080 Now we can send a CloudEvent to the function, which contains our query for the AI model inference. 
In a new terminal of the function project we use `func invoke` for this: -```shell +```bash func invoke -f=cloudevent --data='{"query":"Tell me a dad joke!"}' Context Attributes, specversion: 1.0 @@ -239,13 +239,13 @@ We see that the function was returning a different CloudEvent, which contains th To deploy the function to our `kind` cluster you need to install Knative Serving. The [Llama Stack Stack repo](https://github.com/matzew/llama-stack-stack){:target="_blank"} has a script for this as well. Once it is installed simply run: -```shell +```bash func deploy --builder=host --build ``` This builds the function, using the `host` builder, pushes it to the container registry and eventually deploys it as a Knative Serving Service on Kubernetes: -```shell +```bash πŸ™Œ Function built: quay.io//inference-func:latest pushing 100% |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| (175/121 MB, 24 MB/s) [7s] βœ… Function deployed in namespace "default" and exposed at URL: diff --git a/docs/blog/articles/event-driven-image-bigquery-processing-pipelines.md b/docs/blog/articles/event-driven-image-bigquery-processing-pipelines.md index 8cde58f7001..43bd31a1212 100644 --- a/docs/blog/articles/event-driven-image-bigquery-processing-pipelines.md +++ b/docs/blog/articles/event-driven-image-bigquery-processing-pipelines.md @@ -76,7 +76,7 @@ Ipanema in Rio de Janeiro, to the bucket: After a few seconds, I saw 3 files in my output bucket: -```sh +```bash gsutil ls gs://knative-atamel-images-output gs://knative-atamel-images-output/beach-400x400-watermark.jpeg diff --git a/docs/blog/articles/getting-started-blog-p3.md b/docs/blog/articles/getting-started-blog-p3.md index f3c5d03109d..5a9a680c25c 100644 --- a/docs/blog/articles/getting-started-blog-p3.md +++ b/docs/blog/articles/getting-started-blog-p3.md @@ -34,7 +34,7 @@ v1/ β”œβ”€β”€ apiserver_lifecycle.go ``` -```shell +```bash // +genclient // +genreconciler 
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object diff --git a/docs/blog/articles/kubevirt_meets_eventing.md b/docs/blog/articles/kubevirt_meets_eventing.md index 728d9ea3390..63dd501bebe 100644 --- a/docs/blog/articles/kubevirt_meets_eventing.md +++ b/docs/blog/articles/kubevirt_meets_eventing.md @@ -109,7 +109,7 @@ As mentioned above, the `ApiServerSource` is listening for events and forwards t Below is a complete example of a _virtual machine creation_ event, which the `ApiServerSource` wraps into its `dev.knative.apiserver.resource.add` CloudEvent (with some sections omitted for brevity): -```shell +```text Context Attributes, specversion: 1.0 type: dev.knative.apiserver.resource.add @@ -245,7 +245,7 @@ Once this is deployed and a new virtual machine is created, the _new_ event payl Example custom (transformed) event payload: -```shell +```text Context Attributes, specversion: 1.0 type: dev.knative.apiserver.resource.add @@ -275,7 +275,7 @@ Remember, we want our CMDB database automatically updated based on virtual machi The following output shows a so-far empty PostgreSQL DB. Notice that the columns match the jsonata expressions from the configured `EventTransform` CR. 
-```shell +```bash psql -U postgres -h 10.32.98.110 -p 5432 -d vmdb -c 'SELECT * FROM "virtual_machines"' Password for user postgres: type | id | kind | name | namespace | time | cpucores | cpusockets | memory | storageclass | network @@ -294,7 +294,7 @@ The code for the function can be found on Github here: [KubeVirt PostgreSQL Knat Next we create our `Trigger`s for our function and deploy it to the Kubernetes Cluster: -```shell +```bash func subscribe --filter type=dev.knative.apiserver.resource.add --source broker-apiserversource func subscribe --filter type=dev.knative.apiserver.resource.delete --source broker-apiserversource func deploy @@ -302,13 +302,13 @@ func deploy We can validate the successful creation of the function using `kubectl` or `kn`: -```shell +```bash kubectl get ksvc NAME URL LATESTCREATED LATESTREADY READY REASON kn-py-psql-vmdata-fn https://kn-py-psql-vmdata-fn-kubevirt-eventing.apps.ocp1.stormshift.coe.muc.redhat.com kn-py-psql-vmdata-fn-00001 kn-py-psql-vmdata-fn-00001 True ``` -```shell +```bash kn service list NAME URL LATEST AGE CONDITIONS READY REASON kn-py-psql-vmdata-fn https://kn-py-psql-vmdata-fn-kubevirt-eventing.apps.ocp1.stormshift.coe.muc.redhat.com kn-py-psql-vmdata-fn-00001 5h3m 3 OK / 3 True @@ -316,14 +316,14 @@ kn-py-psql-vmdata-fn https://kn-py-psql-vmdata-fn-kubevirt-eve Also validate the conditions of the Triggers: -```shell +```bash kubectl get triggers NAME BROKER SUBSCRIBER_URI AGE READY REASON trigger-kn-py-psql-vmdata-fn-add broker-apiserversource http://kn-py-psql-vmdata-fn.kubevirt-eventing.svc.cluster.local 5h36m True trigger-kn-py-psql-vmdata-fn-delete broker-apiserversource http://kn-py-psql-vmdata-fn.kubevirt-eventing.svc.cluster.local 5h36m True ``` -```shell +```bash kn trigger list NAME BROKER SINK AGE CONDITIONS READY REASON trigger-kn-py-psql-vmdata-fn-add broker-apiserversource ksvc:kn-py-psql-vmdata-fn 5h38m 7 OK / 7 True @@ -334,7 +334,7 @@ Fasten your seatbelt πŸš€ The complete event-flow is 
in-place and the function w Example output: -```shell +```bash psql -U postgres -h 10.32.98.110 -p 5432 -d vmdb -c 'SELECT * FROM "virtual_machines"' Password for user postgres: type | id | kind | name | namespace | time | cpucores | cpusockets | memory | storageclass | network diff --git a/docs/blog/articles/llm-agents-demo.md b/docs/blog/articles/llm-agents-demo.md index 588e2e79128..bdae4ad2321 100644 --- a/docs/blog/articles/llm-agents-demo.md +++ b/docs/blog/articles/llm-agents-demo.md @@ -27,7 +27,7 @@ a cluster we have access to following the instructions in the README in [https://github.com/keventmesh/llm-tool-provider](https://github.com/keventmesh/llm-tool-provider). Once we have deployed the chat app from this repository, we are able to access it by running: -```sh +```bash kubectl port-forward svc/chat-app-service 8080:8080 ``` diff --git a/docs/blog/articles/performance-test-with-slos.md b/docs/blog/articles/performance-test-with-slos.md index 4dec3e33143..70e7946e032 100644 --- a/docs/blog/articles/performance-test-with-slos.md +++ b/docs/blog/articles/performance-test-with-slos.md @@ -37,7 +37,7 @@ Iter8 introduces the notion of an [experiment](https://iter8.tools/0.11/getting- Install the Iter8 CLI using `brew` as follows. You can also install using pre-built binaries as described [here](https://iter8.tools/0.11/getting-started/install/). -```shell +```bash brew tap iter8-tools/iter8 brew install iter8@0.11 ``` @@ -51,7 +51,7 @@ Install Knative in your Kubernetes cluster, and deploy your Knative HTTP Service Launch the Iter8 experiment as follows. -```shell +```bash iter8 k launch \ --set "tasks={ready,http,assess}" \ --set ready.ksvc=hello \ @@ -79,12 +79,12 @@ iter8 k launch \ Once the experiment completes (~5 secs), view the experiment report as follows. === "Text" - ```shell + ```bash iter8 k report ``` ??? 
note "The text report looks like this" - ```shell + ```text Experiment summary: ******************* @@ -114,7 +114,7 @@ Once the experiment completes (~5 secs), view the experiment report as follows. ``` === "HTML" - ```shell + ```bash iter8 k report -o html > report.html # view in a browser ``` @@ -128,7 +128,7 @@ In this tutorial, we will launch an Iter8 experiment that generates load for a K Use the [Knative (`kn`) CLI](https://knative.dev/docs/client/install-kn/) to update the Knative service deployed in the [above tutorial](#tutorial-performance-test-for-knative-http-service) to a gRPC service as follows. -```shell +```bash kn service update hello \ --image docker.io/grpc/java-example-hostname:latest \ --port h2c:50051 \ @@ -137,7 +137,7 @@ kn service update hello \ Launch the Iter8 experiment as follows. -```shell +```bash iter8 k launch \ --set "tasks={ready,grpc,assess}" \ --set ready.ksvc=hello \ diff --git a/docs/versioned/bookstore/page-0.5/environment-setup.md b/docs/versioned/bookstore/page-0.5/environment-setup.md index e7cd8f4b6f1..4abd64a7e20 100644 --- a/docs/versioned/bookstore/page-0.5/environment-setup.md +++ b/docs/versioned/bookstore/page-0.5/environment-setup.md @@ -23,7 +23,7 @@ We will be fulfilling each requirement with the order above. 
## **Clone the Repository** ![Next Step Image](images/image22.png) -```sh +```bash git clone https://github.com/knative/docs.git ``` ???+ bug "Troubleshooting" @@ -129,7 +129,7 @@ You can either [build the image locally](https://docs.docker.com/get-started/02_ When ready, run the following command to deploy the frontend app: -```shell +```bash kubectl apply -f frontend/config/100-front-end-deployment.yaml ``` @@ -143,7 +143,7 @@ service/bookstore-frontend-svc created ???+ success "Verify" Run the following command to check if the pod is running: - ```shell + ```bash kubectl get pods ``` @@ -160,7 +160,7 @@ Follow the respective `minikube` or `kind` instructions to access Kubernetes Ser Check the running Kubernetes Services: -```shell +```bash kubectl get services ``` @@ -192,7 +192,7 @@ You can either [build the image locally](https://docs.docker.com/get-started/02_ When ready, run the following command to deploy the Node.js server: -```shell +```bash kubectl apply -f node-server/config/100-deployment.yaml ``` @@ -207,7 +207,7 @@ service/node-server-svc created Run the following command to check if the pod is running: - ```shell + ```bash kubectl get pods ``` @@ -225,7 +225,7 @@ Follow the respective `minikube` or `kind` instructions to access Kubernetes Ser Check the running Kubernetes Services: -```shell +```bash kubectl get services ``` And you will see the following console output: @@ -253,7 +253,7 @@ If you encounter any issues during the setup process, refer to the troubleshooti To check the logs, use the following command: - ```shell + ```bash kubectl logs ``` diff --git a/docs/versioned/bookstore/page-2/sentiment-analysis-service-for-bookstore-reviews.md b/docs/versioned/bookstore/page-2/sentiment-analysis-service-for-bookstore-reviews.md index 88a535b8940..c539f6635b4 100644 --- a/docs/versioned/bookstore/page-2/sentiment-analysis-service-for-bookstore-reviews.md +++ b/docs/versioned/bookstore/page-2/sentiment-analysis-service-for-bookstore-reviews.md @@ 
-67,13 +67,13 @@ This workflow ensures a smooth transition from development to deployment within Create a new function using the func CLI: -```sh +```bash func create -l -t cloudevents ``` In this case, we are creating a Python function that handles CloudEvents, so the command will be: -```sh +```bash func create -l python sentiment-analysis-app -t cloudevents ``` @@ -218,13 +218,13 @@ Knative Function will automatically install the dependencies listed here when yo Before we get started, configure the container registry to push the image to the container registry. You can use the following command to configure the container registry: - ```sh + ```bash export FUNC_REGISTRY= ``` In this case, we will use the s2i build by adding the flag `-b=s2i`, and `-v` to see the verbose output. - ```sh + ```bash func build -b=s2i -v ``` @@ -237,7 +237,7 @@ Knative Function will automatically install the dependencies listed here when yo This command will build the function and push the image to the container registry. After the build is complete, you can run the function using the following command: - ```sh + ```bash func run -b=s2i -v ``` @@ -249,7 +249,7 @@ Knative Function will automatically install the dependencies listed here when yo **Solution: You may want to check whether you are in the correct directory. You can use the following command to check the current directory.** - ```sh + ```bash pwd ``` @@ -261,7 +261,7 @@ Knative Function will automatically install the dependencies listed here when yo You will see the following output if the function is running successfully: - ```sh + ```bash ❗function up-to-date. Force rebuild with --build Running on host port 8080 ---> Running application from script (app.sh) ... 
@@ -269,7 +269,7 @@ Knative Function will automatically install the dependencies listed here when yo Knative Function has an easy way to simulate the CloudEvent, you can use the following command to simulate the CloudEvent and test your function out: - ```sh + ```bash func invoke -f=cloudevent --data='{"reviewText": "I love Knative so much"}' --content-type=application/json --type="new-review-comment" -v ``` @@ -277,7 +277,7 @@ Knative Function will automatically install the dependencies listed here when yo In this case, you will get the full CloudEvent response: - ```sh + ```text Context Attributes, specversion: 1.0 type: new-review-comment @@ -308,7 +308,7 @@ After you have finished the code, you can deploy the function to the cluster usi !!! note Using `-b=s2i` to specify how the function should be built. -```sh +```bash func deploy -b=s2i -v ``` @@ -316,7 +316,7 @@ func deploy -b=s2i -v When the deployment is complete, you will see the following output: - ```sh + ```text Function deployed in namespace "default" and exposed at URL: http://sentiment-analysis-app.default.svc.cluster.local ``` @@ -324,13 +324,13 @@ func deploy -b=s2i -v !!! tip You can find the URL of the Knative Function (Knative Service) by running the following command: - ```sh + ```bash kubectl get kservice ``` You will see the URL in the output: - ```sh + ```text NAME URL LATESTCREATED LATESTREADY READY REASON sentiment-analysis-app http://sentiment-analysis-app.default.svc.cluster.local sentiment-analysis-app-00001 sentiment-analysis-app-00001 True ``` @@ -341,7 +341,7 @@ func deploy -b=s2i -v If you use the following command to query all the pods in the cluster, you will see that the pod is running: -```sh +```bash kubectl get pods ``` @@ -349,7 +349,7 @@ where `-A` is the flag to query all the pods in all namespaces. 
And you will find that your sentiment analysis app is running: -```sh +```bash NAMESPACE NAME READY STATUS RESTARTS AGE default sentiment-analysis-app-00002-deployment 2/2 Running 0 2m ``` @@ -368,7 +368,7 @@ After deployment, the `func` CLI provides a URL to access your function. You can Simply use Knative Function's command `func invoke` to directly send a CloudEvent to the function on your cluster: -```sh +```bash func invoke -f=cloudevent --data='{"reviewText":"I love Knative so much"}' -v ``` @@ -380,7 +380,7 @@ func invoke -f=cloudevent --data='{"reviewText":"I love Knative so much"}' -v If you see the response, it means that the function is running successfully. - ```sh + ```bash Context Attributes, specversion: 1.0 type: moderated-comment diff --git a/docs/versioned/bookstore/page-3/create-bad-word-filter-service.md b/docs/versioned/bookstore/page-3/create-bad-word-filter-service.md index e86704e6cfa..80914bf576a 100644 --- a/docs/versioned/bookstore/page-3/create-bad-word-filter-service.md +++ b/docs/versioned/bookstore/page-3/create-bad-word-filter-service.md @@ -47,7 +47,7 @@ This workflow ensures a smooth transition from development to deployment within ### **Step 1: Create a Knative Function template** ![Image 6](images/image6.png) -```shell +```bash func create -l python bad-word-filter -t cloudevents ``` @@ -166,14 +166,14 @@ The content of `bad-word-filter/pyproject.toml`: !!! note Please enter `/bad-word-filter` when you are executing the following commands. 
-```sh +```bash func deploy -b=s2i -v ``` ???+ success "Verify" Expect to see the following message: - ```sh + ```bash Function deployed in namespace "default" and exposed at URL: http://bad-word-filter.default.svc.cluster.local ``` @@ -182,14 +182,14 @@ func deploy -b=s2i -v ![Image 7](images/image7.png) -```sh +```bash func invoke -f=cloudevent --data='{"reviewText":"I love Knative so much"}' -v ``` ???+ success "Verify" Expect to receive a CloudEvent response: - ```sh + ```bash Context Attributes, specversion: 1.0 type: new-review-comment diff --git a/docs/versioned/bookstore/page-5/deploy-database-service.md b/docs/versioned/bookstore/page-5/deploy-database-service.md index ecc7c5dcd40..b3ad0fc8b17 100644 --- a/docs/versioned/bookstore/page-5/deploy-database-service.md +++ b/docs/versioned/bookstore/page-5/deploy-database-service.md @@ -43,7 +43,7 @@ Try to ask in the Knative Slack community [#knative](https://cloud-native.slack. In this section, we will just be simply running a PostgreSQL service. We have all config files ready. Simply run the following command to apply all yamls at once. 
-```sh +```bash kubectl apply -f db-service ``` diff --git a/docs/versioned/bookstore/page-6/advanced-event-filtering.md b/docs/versioned/bookstore/page-6/advanced-event-filtering.md index 4dc6a73cc43..98c497db86a 100644 --- a/docs/versioned/bookstore/page-6/advanced-event-filtering.md +++ b/docs/versioned/bookstore/page-6/advanced-event-filtering.md @@ -50,7 +50,7 @@ Append the following Trigger configuration to the existing `node-server/config/2 uri: /insert # This is the path where the event will be sent to the subscriber, see /insert in node-server code: index.js ``` -```shell +```bash kubectl apply -f node-server/config/200-broker.yaml ``` diff --git a/docs/versioned/bookstore/page-7/slack-sink-learning-knative-eventing-and-apache-camel-K-integration.md b/docs/versioned/bookstore/page-7/slack-sink-learning-knative-eventing-and-apache-camel-K-integration.md index 979d97e3731..45c4d6cee55 100644 --- a/docs/versioned/bookstore/page-7/slack-sink-learning-knative-eventing-and-apache-camel-K-integration.md +++ b/docs/versioned/bookstore/page-7/slack-sink-learning-knative-eventing-and-apache-camel-K-integration.md @@ -27,7 +27,7 @@ When a CloudEvent with the type `moderated-comment` and with `ce-bad-word-filter Install Apache Camel K operator on your cluster using any of the methods listed in [the official installation docs](https://camel.apache.org/camel-k/2.8.x/installation/installation.html). We will use the installation via Kustomize: -```sh +```bash kubectl create ns camel-k && \ kubectl apply -k 'github.com/apache/camel-k/install/overlays/kubernetes/descoped?ref=v2.8.0' --server-side ``` @@ -49,7 +49,7 @@ spec: Install it with one command: -```sh +```bash cat < ``` 3. Create a root certificate using the previously created `SelfSigned` `ClusterIssuer`: @@ -123,20 +123,20 @@ the release assets, we release the certificates for Eventing servers that can be necessary. 1. 
Install certificates, run the following command: - ```shell + ```bash kubectl apply -f {{ artifact(repo="eventing",file="eventing-tls-networking.yaml")}} ``` 2. [Optional] If you're using Eventing Kafka components, install certificates for Kafka components by running the following command: - ```shell + ```bash kubectl apply -f {{ artifact(org="knative-extensions",repo="eventing-kafka-broker",file="eventing-kafka-tls-networking.yaml")}} ``` 3. Verify issuers and certificates are ready - ```shell + ```bash kubectl get certificates.cert-manager.io -n knative-eventing ``` Example output: - ```shell + ```bash NAME READY SECRET AGE imc-dispatcher-server-tls True imc-dispatcher-server-tls 14s mt-broker-filter-server-tls True mt-broker-filter-server-tls 14s @@ -342,15 +342,15 @@ ClusterIssuer section](#setup-selfsigned-clusterissuer), you can add the CA to t bundles by running the following commands: 1. Export the CA from the knative-eventing-ca secret in the OpenShift Cert-Manager Operator namespace, cert-manager by default: - ```shell + ```bash $ kubectl get secret -n cert-manager knative-eventing-ca -o=jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt ``` 2. Create a CA trust bundle in the `knative-eventing` namespace: - ```shell + ```bash $ kubectl create configmap -n knative-eventing my-org-selfsigned-ca-bundle --from-file=ca.crt ``` 3. 
Label the ConfigMap with networking.knative.dev/trust-bundle: "true" label: - ```shell + ```bash $ kubectl label configmap -n knative-eventing my-org-selfsigned-ca-bundle networking.knative.dev/trust-bundle=true ``` @@ -408,7 +408,7 @@ spec: Apply the `default-broker-example.yaml` file into a test namespace `transport-encryption-test`: -```shell +```bash kubectl create namespace transport-encryption-test kubectl apply -n transport-encryption-test -f default-broker-example.yaml @@ -416,13 +416,13 @@ kubectl apply -n transport-encryption-test -f default-broker-example.yaml Verify that addresses are all `HTTPS`: -```shell +```bash kubectl get brokers.eventing.knative.dev -n transport-encryption-test br -oyaml ``` Example output: -```shell +```yaml apiVersion: eventing.knative.dev/v1 kind: Broker metadata: @@ -481,14 +481,14 @@ status: Sending events to the Broker using HTTPS endpoints: -```shell +```bash kubectl run curl -n transport-encryption-test --image=curlimages/curl -i --tty -- sh ``` Save the CA certs from the Broker's `.status.address.CACerts` field into `/tmp/cacerts.pem` -```shell +```bash cat <<EOF >> /tmp/cacerts.pem -----BEGIN CERTIFICATE----- MIIBbzCCARagAwIBAgIQAur7vdEcreEWSEQatCYlNjAKBggqhkjOPQQDAjAYMRYw @@ -505,14 +505,14 @@ EOF Send the event by running the following command: -```shell +```bash curl -v -X POST -H "content-type: application/json" -H "ce-specversion: 1.0" -H "ce-source: my/curl/command" -H "ce-type: my.demo.event" -H "ce-id: 6cf17c7b-30b1-45a6-80b0-4cf58c92b947" -d '{"name":"Knative Demo"}' --cacert /tmp/cacerts.pem https://broker-ingress.knative-eventing.svc.cluster.local/transport-encryption-test/br ``` Example output: -```shell +```text * processing: https://broker-ingress.knative-eventing.svc.cluster.local/transport-encryption-test/br * Trying 10.96.174.249:443... 
* Connected to broker-ingress.knative-eventing.svc.cluster.local (10.96.174.249) port 443 diff --git a/docs/versioned/eventing/sources/rabbitmq-source/README.md b/docs/versioned/eventing/sources/rabbitmq-source/README.md index b4a24a4d5e5..b213c845def 100644 --- a/docs/versioned/eventing/sources/rabbitmq-source/README.md +++ b/docs/versioned/eventing/sources/rabbitmq-source/README.md @@ -95,7 +95,7 @@ For more information about configuring the `RabbitmqCluster` CRD, see the [Rabbi Check the event-display Service to see if it is receiving events. It might take a while for the Source to start sending events to the Sink. -```sh +```bash kubectl -l='serving.knative.dev/service=event-display' logs -c user-container ☁️ cloudevents.Event Context Attributes, @@ -118,19 +118,19 @@ It might take a while for the Source to start sending events to the Sink. 1. Delete the RabbitMQSource: - ```sh + ```bash kubectl delete -f ``` 1. Delete the RabbitMQ credentials secret: - ```sh + ```bash kubectl delete -f ``` 1. Delete the event display Service: - ```sh + ```bash kubectl delete -f ``` diff --git a/docs/versioned/install/installing-backstage-plugins.md b/docs/versioned/install/installing-backstage-plugins.md index 7aeba7330f4..09792b1089f 100644 --- a/docs/versioned/install/installing-backstage-plugins.md +++ b/docs/versioned/install/installing-backstage-plugins.md @@ -31,7 +31,7 @@ Kubernetes cluster. 
#### Plugin backend controller installation -```shell +```bash VERSION="latest" # or a specific version like knative-v1.15.0 kubectl apply -f https://github.com/knative-extensions/backstage-plugins/releases/${VERSION}/download/eventmesh.yaml ``` diff --git a/docs/versioned/serving/app-security/security-guard.md b/docs/versioned/serving/app-security/security-guard.md index 4620b411c58..d2df9010bc7 100644 --- a/docs/versioned/serving/app-security/security-guard.md +++ b/docs/versioned/serving/app-security/security-guard.md @@ -33,7 +33,7 @@ kubectl get guardians.guard.security.knative.dev Example Output: -```sh +```text NAME AGE helloworld-go 10h ``` diff --git a/docs/versioned/serving/config-network-adapters.md b/docs/versioned/serving/config-network-adapters.md index cfb58fc80ac..d0d784a7fa0 100644 --- a/docs/versioned/serving/config-network-adapters.md +++ b/docs/versioned/serving/config-network-adapters.md @@ -201,7 +201,7 @@ The Knative tested ingress controllers (Kourier, Contour, and Istio) have the fo Use the following command to determine which ingress controllers are installed and their status.
-``` bash +```bash kubectl get pods -n knative-serving ``` diff --git a/docs/versioned/serving/setting-up-custom-ingress-gateway.md b/docs/versioned/serving/setting-up-custom-ingress-gateway.md index 143a94dbf10..fa7bd717f52 100644 --- a/docs/versioned/serving/setting-up-custom-ingress-gateway.md +++ b/docs/versioned/serving/setting-up-custom-ingress-gateway.md @@ -45,13 +45,13 @@ kubectl edit gateway knative-ingress-gateway -n knative-serving Replace the label selector with the label of your service: -``` +```yaml istio: ingressgateway ``` For the example `custom-ingressgateway` service mentioned earlier, it should be updated to: -``` +```yaml istio: custom-gateway ``` @@ -180,7 +180,7 @@ For the example `knative-custom-gateway` mentioned earlier, it should be updated ``` The configuration format should be -``` +```yaml external-gateways: | - name: namespace: diff --git a/docs/versioned/serving/troubleshooting/debugging-application-issues.md b/docs/versioned/serving/troubleshooting/debugging-application-issues.md index eccba931948..4f2365b697f 100644 --- a/docs/versioned/serving/troubleshooting/debugging-application-issues.md +++ b/docs/versioned/serving/troubleshooting/debugging-application-issues.md @@ -19,7 +19,7 @@ This kind of failure is most likely due to either a misconfigured manifest or wrong command. For example, the following output says that you must configure route traffic percent to sum to 100: -``` +```text Error from server (InternalError): error when applying patch: {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"serving.knative.dev/v1\",\"kind\":\"Route\",\"metadata\":{\"annotations\":{},\"name\":\"route-example\",\"namespace\":\"default\"},\"spec\":{\"traffic\":[{\"configurationName\":\"configuration-example\",\"percent\":50}]}}\n"}},"spec":{"traffic":[{"configurationName":"configuration-example","percent":50}]}} to: @@ -60,7 +60,7 @@ ready. 
Please proceed to later sections to diagnose Revision readiness status. Otherwise, run the following command to look at the ClusterIngress created for your Route -``` +```bash kubectl get ingresses.networking.internal.knative.dev --output yaml ``` diff --git a/docs/versioned/serving/webhook-customizations.md b/docs/versioned/serving/webhook-customizations.md index 5af3692942d..0e5b22445d3 100644 --- a/docs/versioned/serving/webhook-customizations.md +++ b/docs/versioned/serving/webhook-customizations.md @@ -11,7 +11,7 @@ The Knative webhook examines resources that are created, read, updated, or delet You can configure the label `webhooks.knative.dev/exclude` to allow namespaces to bypass the Knative webhook. -``` yaml +```yaml apiVersion: v1 kind: Namespace metadata: