Merged
2 changes: 1 addition & 1 deletion docs/baremetal/architecture/discovery.md
@@ -141,7 +141,7 @@ apiVersion: metal.ironcore.dev/v1alpha1
kind: ServerBootConfiguration
metadata:
name: my-server-boot-config
namespace: defauilt
namespace: default
spec:
serverRef:
name: my-server
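For quick reference, the corrected snippet from the hunk above, assembled as one standalone manifest. Only the fields visible in this diff are included; a real `ServerBootConfiguration` may carry additional `spec` fields that are not shown here:

```yaml
apiVersion: metal.ironcore.dev/v1alpha1
kind: ServerBootConfiguration
metadata:
  name: my-server-boot-config
  namespace: default
spec:
  serverRef:
    name: my-server
```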
4 changes: 2 additions & 2 deletions docs/iaas/architecture/networking.md
@@ -4,9 +4,9 @@

IronCore's virtual networking architecture provides an end-to-end virtual networking solution for provisioned `Machine`s running in data centers, regardless of whether they are bare-metal machines or virtual machines. It is designed to enable a robust, flexible, and performing networking control plane and data plane.

- **Robust**: IronCore's virtual netowrking control plane is mainly implemented using Kubernetes controller model. Thus, it is able to survive component's failure and recover the running states by retrieving the desired networking configuration.
- **Robust**: IronCore's virtual networking control plane is mainly implemented using the Kubernetes controller model. Thus, it is able to survive component failures and recover its running state by retrieving the desired networking configuration.
- **Flexible**: Thanks to the modular and layered architecture design, IronCore's virtual networking solution allows developers to implement and interchange components from the most top-level data center management system built upon defined IronCore APIs, to lowest-level packet processing engines depending on the used hardware.
- **Performing**: The data plane of IronCore's virtual networking solution is built with the state-of-the-art packet processing framework, [DPDK](https://www.dpdk.org), and currently utilizes the hardware offloading features of [Nvidia's Mellanox SmartNic serials](https://www.nvidia.com/en-us/networking/ethernet-adapters/) to speedup packet processing. With the DPDK's run-to-completion model, IronCore's networking data plane can achieve high performance even in the environment where hardware offloading capability is limited or disabled.
- **Performing**: The data plane of IronCore's virtual networking solution is built with the packet processing framework [DPDK](https://www.dpdk.org), and currently utilizes the hardware offloading features of [Nvidia's Mellanox SmartNIC series](https://www.nvidia.com/en-us/networking/ethernet-adapters/) to speed up packet processing. With DPDK's run-to-completion model, IronCore's networking data plane can achieve high performance even in environments where hardware offloading is limited or disabled.

IronCore's virtual networking architecture is illustrated in the following figure. It consists of several components that work together to ensure outbound and inbound network connectivity and isolation for `Machine` instances.

2 changes: 1 addition & 1 deletion docs/iaas/architecture/os-images.md
@@ -7,7 +7,7 @@ IronCore are used both for virtual machine creation in the [Infrastructure as a
as they represent a full operating system rather than just a single application or service.

The core idea behind using OCI as a means to manage operating system images is to leverage any OCI compliant image registry,
to publsish and share operating system images. This can be done by using a public registry or host your own private registry.
to publish and share operating system images. This can be done by using a public registry or by hosting your own private registry.

## OCI Image Format

2 changes: 1 addition & 1 deletion docs/iaas/architecture/providers/brokers.md
@@ -8,7 +8,7 @@ Below is an example of how a `machinepoollet` and `machinebroker` will translate

![Brokers](/brokers.png)

Brokers are useful in scenarios where I want to run IronCore not in a single cluster but rather have a federated
Brokers are useful in scenarios where IronCore should run not in a single cluster but rather in a federated
environment. For example, in a federated environment, every hypervisor node in a compute cluster would announce its
`MachinePool` inside the compute cluster. A `MachinePoollet`/`MachineBroker` in this compute cluster could now announce
a logical `MachinePool` "one level up" as a logical compute pool in e.g. an availability zone cluster. The broker concept
2 changes: 1 addition & 1 deletion docs/iaas/architecture/scheduling.md
@@ -11,7 +11,7 @@ an entity onto which resources can be scheduled. The announcement of a `Pool` re
also provides in the `Pool` status the `AvailableMachineClasses` a `Pool` supports. A `Class` in this context represents
a list of resource-specific capabilities that a `Pool` can provide, such as CPU, memory, and storage.

`Pools` and `Classes` are defined for all major resource types in IronCore, including compute, storage. Resources in the
`Pools` and `Classes` are defined for all major resource types in IronCore, including compute and storage. Resources in the
`networking` API have no `Pool` concept, as they are not scheduled but rather provided on-demand by the network-related
components. The details are described in the [networking section](/iaas/architecture/networking).
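The `Pool`/`Class` relationship described above can be sketched with a hypothetical compute class manifest. The API group, kind, and capability keys below are assumptions modeled on the `metal.ironcore.dev/v1alpha1` naming seen elsewhere in this PR, not fields taken from this diff:

```yaml
# Hypothetical sketch: a Class advertising the capabilities a Pool can provide.
# API group, kind, and field names are assumptions, not confirmed by this diff.
apiVersion: compute.ironcore.dev/v1alpha1
kind: MachineClass
metadata:
  name: machineclass-4c-16g
capabilities:
  cpu: "4"       # CPU cores a Machine of this class receives
  memory: 16Gi   # memory a Machine of this class receives
```

A scheduler would then place a `Machine` requesting this class only onto a `MachinePool` that lists it among its `AvailableMachineClasses`.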
