From 7b622e0e8bd14e419712734ed927dae788ce871b Mon Sep 17 00:00:00 2001
From: Maximilian Moehl
Date: Tue, 8 Jul 2025 07:48:20 +0200
Subject: [PATCH 1/2] doc: typos

---
 docs/baremetal/architecture/discovery.md | 2 +-
 docs/iaas/architecture/networking.md     | 2 +-
 docs/iaas/architecture/os-images.md      | 2 +-
 docs/iaas/architecture/scheduling.md     | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/baremetal/architecture/discovery.md b/docs/baremetal/architecture/discovery.md
index 255367c..d99d45b 100644
--- a/docs/baremetal/architecture/discovery.md
+++ b/docs/baremetal/architecture/discovery.md
@@ -141,7 +141,7 @@ apiVersion: metal.ironcore.dev/v1alpha1
 kind: ServerBootConfiguration
 metadata:
   name: my-server-boot-config
-  namespace: defauilt
+  namespace: default
 spec:
   serverRef:
     name: my-server
diff --git a/docs/iaas/architecture/networking.md b/docs/iaas/architecture/networking.md
index 97efb02..ec41317 100644
--- a/docs/iaas/architecture/networking.md
+++ b/docs/iaas/architecture/networking.md
@@ -4,7 +4,7 @@
 
 IronCore's virtual networking architecture provides an end-to-end virtual networking solution for provisioned `Machine`s running in data centers, regardless they are baremetal machines or virtual machines. It is designed to enable robust, flexible and performing networking control plane and data plane.
 
-- **Robust**: IronCore's virtual netowrking control plane is mainly implemented using Kubernetes controller model. Thus, it is able to survive component's failure and recover the running states by retrieving the desired networking configuration.
+- **Robust**: IronCore's virtual networking control plane is mainly implemented using Kubernetes controller model. Thus, it is able to survive component's failure and recover the running states by retrieving the desired networking configuration.
 - **Flexible**: Thanks to the modular and layered architecture design, IronCore's virtual networking solution allows developers to implement and interchange components from the most top-level data center management system built upon defined IronCore APIs, to lowest-level packet processing engines depending on the used hardware.
 - **Performing**: The data plane of IronCore's virtual networking solution is built with the state-of-the-art packet processing framework, [DPDK](https://www.dpdk.org), and currently utilizes the hardware offloading features of [Nvidia's Mellanox SmartNic serials](https://www.nvidia.com/en-us/networking/ethernet-adapters/) to speedup packet processing. With the DPDK's run-to-completion model, IronCore's networking data plane can achieve high performance even in the environment where hardware offloading capability is limited or disabled.
 
diff --git a/docs/iaas/architecture/os-images.md b/docs/iaas/architecture/os-images.md
index d23191f..498108e 100644
--- a/docs/iaas/architecture/os-images.md
+++ b/docs/iaas/architecture/os-images.md
@@ -7,7 +7,7 @@ IronCore are used both for virtual machine creation in the [Infrastructure as a
 as they represent a full operating system rather than just a single application or service.
 
 The core idea behind using OCI as a means to manage operating system images is to leverage any OCI compliant image registry,
-to publsish and share operating system images. This can be done by using a public registry or host your own private registry.
+to publish and share operating system images. This can be done by using a public registry or host your own private registry.
 
 ## OCI Image Format
 
diff --git a/docs/iaas/architecture/scheduling.md b/docs/iaas/architecture/scheduling.md
index 2af3b1f..e01427f 100644
--- a/docs/iaas/architecture/scheduling.md
+++ b/docs/iaas/architecture/scheduling.md
@@ -11,7 +11,7 @@ an entity onto which resources can be scheduled. The announcement of a `Pool` re
 also provides in the `Pool` status the `AvailableMachineClasses` a `Pool` supports. A `Class` in this context represents
 a list of resource-specific capabilities that a `Pool` can provide, such as CPU, memory, and storage.
 
-`Pools` and `Classes` are defined for all major resource types in IronCore, including compute, storage. Resources in the
+`Pools` and `Classes` are defined for all major resource types in IronCore, including compute and storage. Resources in the
 `networking` API have no `Pool` concept, as they are not scheduled but rather provided on-demand by the network related
 components. The details are described in the [networking section](/iaas/architecture/networking).
 

From f5157d9242bca93d0a1c923b0d31b5dc01db9eec Mon Sep 17 00:00:00 2001
From: Maximilian Moehl
Date: Tue, 8 Jul 2025 07:49:01 +0200
Subject: [PATCH 2/2] doc: minor rewordings

---
 docs/iaas/architecture/networking.md        | 2 +-
 docs/iaas/architecture/providers/brokers.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/iaas/architecture/networking.md b/docs/iaas/architecture/networking.md
index ec41317..fc1bd37 100644
--- a/docs/iaas/architecture/networking.md
+++ b/docs/iaas/architecture/networking.md
@@ -6,7 +6,7 @@ IronCore's virtual networking architecture provides an end-to-end virtual networ
 
 - **Robust**: IronCore's virtual networking control plane is mainly implemented using Kubernetes controller model. Thus, it is able to survive component's failure and recover the running states by retrieving the desired networking configuration.
 - **Flexible**: Thanks to the modular and layered architecture design, IronCore's virtual networking solution allows developers to implement and interchange components from the most top-level data center management system built upon defined IronCore APIs, to lowest-level packet processing engines depending on the used hardware.
-- **Performing**: The data plane of IronCore's virtual networking solution is built with the state-of-the-art packet processing framework, [DPDK](https://www.dpdk.org), and currently utilizes the hardware offloading features of [Nvidia's Mellanox SmartNic serials](https://www.nvidia.com/en-us/networking/ethernet-adapters/) to speedup packet processing. With the DPDK's run-to-completion model, IronCore's networking data plane can achieve high performance even in the environment where hardware offloading capability is limited or disabled.
+- **Performing**: The data plane of IronCore's virtual networking solution is built with the packet processing framework [DPDK](https://www.dpdk.org), and currently utilizes the hardware offloading features of [Nvidia's Mellanox SmartNic serials](https://www.nvidia.com/en-us/networking/ethernet-adapters/) to speedup packet processing. With the DPDK's run-to-completion model, IronCore's networking data plane can achieve high performance even in the environment where hardware offloading capability is limited or disabled.
 
 IronCore's virtual networking architecture is illustrated with the following figure. It consists of several components that work together to ensure network out and inbound connectivity and isolation for `Machine` instances.
 
diff --git a/docs/iaas/architecture/providers/brokers.md b/docs/iaas/architecture/providers/brokers.md
index 6db3241..be02ddf 100644
--- a/docs/iaas/architecture/providers/brokers.md
+++ b/docs/iaas/architecture/providers/brokers.md
@@ -8,7 +8,7 @@ Below is an example of how a `machinepoollet` and `machinebroker` will translate
 
 ![Brokers](/brokers.png)
 
-Brokers are useful in scenarios where I want to run IronCore not in a single cluster but rather have a federated
+Brokers are useful in scenarios where IronCore should not run in a single cluster but rather have a federated
 environment. For example, in a federated environment, every hypervisor node in a compute cluster would announce it's
 `MachinePool` inside the compute cluster. A `MachinePoollet`/`MachineBroker` in this compute cluster could now announce
 a logical `MachinePool` "one level up" as a logical compute pool in e.g. an availability zone cluster. The broker concept