Current State
"Networks" as defined by the current state of this project refers to the
collection of resources needed to deploy a particular blockchain
network/protocol, e.g ethereum/aztec/optimism/etc. The cli offers of this moment
a deployment strategy to the stack cluster via the
obol network install <network> command. Available networks are shown through
the obol networks list command.
How this works is that network configurations are composed in the "networks"
embed folder. These are individual helmfile projects with the structure
/internal/embed/networks/<network>/helmfile.yaml.gotmpl. They may include
further charts or other material but MUST include a root helmfile go-template
file. Why a templated helmfile instead of a plain helmfile?
Templating & CLI parsing
# Configuration via environment variables
# Override with: ETHEREUM_NETWORK, ETHEREUM_EXECUTION_CLIENT, ETHEREUM_CONSENSUS_CLIENT
values:
# @enum mainnet,sepolia,holesky,hoodi
# @description Blockchain network to deploy
- network: {{ env "ETHEREUM_NETWORK" | default "mainnet" }}
# @enum reth,geth,nethermind,besu,erigon,ethereumjs
# @description Execution layer client
executionClient: {{ env "ETHEREUM_EXECUTION_CLIENT" | default "reth" }}
# @enum lighthouse,prysm,teku,nimbus,lodestar,grandine
# @description Consensus layer client
consensusClient: {{ env "ETHEREUM_CONSENSUS_CLIENT" | default "lighthouse" }}
---
repositories:
- name: ethereum-helm-charts
url: https://ethpandaops.github.io/ethereum-helm-charts
...
# Rest of helmfile template

The above is taken from /networks/ethereum/helmfile.gotmpl. As these files are
"embedded" in the binary at build time, it has been possible to construct a
parser which extracts CLI args for each network helmfile. The parser filters the
values list of a given helmfile template for environment variables of the form
"<NETWORK_PREFIX>_*". Some work has also been done to enrich the produced CLI
args, as shown in obol network install ethereum --help.
The advantage of doing this is that for a given network, a user may wish to
deploy mainnet-reth-lighthouse, or hoodi-geth-prysm for testing. Many
combinations of deployments may exist, and by defining the values in this way
they effectively become function parameters.
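As a rough, non-authoritative sketch of how such a parser could work (the embed
layout, type and function names below are assumptions, not the actual obol
implementation):

// Illustrative only: extract CLI flags from an embedded network helmfile by scanning
// its values block for env-driven entries. The embed path and all names here are
// assumptions, not the real obol code.
package networks

import (
	"embed"
	"regexp"
	"strings"
)

//go:embed networks
var embedded embed.FS

// Flag is one CLI argument derived from a templated value,
// e.g. --executionClient with default "reth".
type Flag struct {
	Name    string
	Default string
}

// ExtractFlags reads the embedded root helmfile of a network and returns the flags
// implied by lines such as:
//   executionClient: {{ env "ETHEREUM_EXECUTION_CLIENT" | default "reth" }}
func ExtractFlags(network string) ([]Flag, error) {
	src, err := embedded.ReadFile("networks/" + network + "/helmfile.yaml.gotmpl")
	if err != nil {
		return nil, err
	}
	prefix := strings.ToUpper(network) + "_"
	envValue := regexp.MustCompile(`env "` + prefix + `(\w+)"(?:\s*\|\s*default "([^"]*)")?`)

	var flags []Flag
	for _, m := range envValue.FindAllStringSubmatch(string(src), -1) {
		flags = append(flags, Flag{Name: toCamel(m[1]), Default: m[2]})
	}
	return flags, nil
}

// toCamel turns "EXECUTION_CLIENT" into "executionClient".
func toCamel(s string) string {
	parts := strings.Split(strings.ToLower(s), "_")
	for i := 1; i < len(parts); i++ {
		parts[i] = strings.ToUpper(parts[i][:1]) + parts[i][1:]
	}
	return strings.Join(parts, "")
}

The @enum and @description annotations could then be attached to each extracted
flag to enrich the generated --help output.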
Installation
When deploying to the cluster, a simple pattern is taken. The process creates a
tmp dir and copies the embedded network files, as specified by the user, to said
directory. The process then assigns the CLI arg values to environment variables
in the helmfile sync sub-process. These populate the values fields, and in the
sync process the yaml is generated with these new values. Post sync, the
temporary directory is deleted.
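Continuing the illustrative package above (the embedded FS and helper names
remain assumptions), that flow could look roughly like this:

// Illustrative sketch of the install flow: copy the embedded network into a temp dir,
// expose the CLI args as environment variables, run helmfile sync, then delete the
// temp dir. Uses "context", "io/fs", "os", "os/exec" and "path/filepath" in addition
// to the imports above; os.CopyFS requires Go 1.23+.
// args maps environment variable names (e.g. ETHEREUM_EXECUTION_CLIENT) to the
// values the user supplied as CLI flags.
func installNetwork(ctx context.Context, network string, args map[string]string) error {
	tmp, err := os.MkdirTemp("", "obol-"+network+"-*")
	if err != nil {
		return err
	}
	defer os.RemoveAll(tmp) // post sync, the temporary directory is deleted

	// Copy /internal/embed/networks/<network>/* into the temp dir.
	sub, err := fs.Sub(embedded, "networks/"+network)
	if err != nil {
		return err
	}
	if err := os.CopyFS(tmp, sub); err != nil {
		return err
	}

	// helmfile templates the values block from these env vars and applies the result.
	cmd := exec.CommandContext(ctx, "helmfile", "sync", "-f",
		filepath.Join(tmp, "helmfile.yaml.gotmpl"))
	cmd.Env = append(os.Environ(), envPairs(args)...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

// envPairs flattens {"ETHEREUM_NETWORK": "hoodi"} into ["ETHEREUM_NETWORK=hoodi"].
func envPairs(args map[string]string) []string {
	pairs := make([]string, 0, len(args))
	for k, v := range args {
		pairs = append(pairs, k+"="+v)
	}
	return pairs
}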
Design constraints
Intentional in this design is the isolation of the specific resources for a given
"network" into its own helmfile/combination of helm charts. Often, as is the
case with aztec's definition for example, there are endpoints to external
resources which may or may not be defined. It is incumbent on obol developers to
give sane defaults in place of these. Specifically, these endpoints are the
consensus and execution RPCs for an ethereum network. A user may see this, deploy
the ethereum network and replace the consensus/execution urls in the aztec
definition. This will eventually work but would require a sync time of multiple
days. The issue that arises is that hard coupling of resources imposes the
problems of the dependency on the dependent.
Most of the time, networks will rely on endpoints to synced nodes of various
other networks. This can largely be mitigated by the likes of ERPC, through which
a dictionary of endpoints can be made available for the different networks that
need to be supported.
Obol developers ought to extend the default ERPC configuration with as many
network dependencies as are needed to support the defined networks. This means
the aztec definition should default those consensus and execution urls to the
local ERPC/beacon-lb, which can proxy requests to third-party endpoints should no
locally defined endpoint exist. This must be carefully managed and considered
each time a new endpoint is introduced.
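As a purely illustrative sketch of the "dictionary of endpoints" idea (the
service names, namespace and ports below are assumptions, not the actual ERPC or
chart configuration):

// Purely illustrative: default upstream URLs per dependency, pointing at
// cluster-local proxies so a definition like aztec's never hard-couples to another
// deployment. Service names, namespace and ports are assumptions.
var defaultUpstreams = map[string]string{
	"ethereum/execution": "http://erpc.obol.svc.cluster.local:4000",      // assumed local ERPC service
	"ethereum/consensus": "http://beacon-lb.obol.svc.cluster.local:5052", // assumed local beacon load balancer
}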
Challenges
The context as of now for each defined network is that of a one-shot deployment,
and future work ought to define a management lifecycle for each deployment.
Until recently, the approach has been to copy the deployment files to the user's
OBOL_CONFIG_DIR and keep some correspondence between the deployed resources and
the configuration which deployed them. This is generally the case with the
"base" configuration.
There are also three main lifecycle "actions" that should potentially be defined
for each network deployment instance:
- install/deploy - Initial deployment of resources; implemented but subject to change
- edit/update - For a particular existing deployment, mutating that configuration, fixing errors, upgrading values etc.
- deletion - Teardown and removal of resources.
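A hypothetical shape for such a lifecycle, expressed as a Go interface (nothing
like this exists in the CLI yet; the names are illustrative):

// Hypothetical only: one value per deployment instance, exposing the three
// lifecycle actions listed above. Assumes "context" is imported.
type NetworkDeployment interface {
	// Install performs the initial deployment of resources (implemented today).
	Install(ctx context.Context) error
	// Update mutates an existing deployment: fixing errors, upgrading values, etc.
	Update(ctx context.Context, overrides map[string]string) error
	// Delete tears down and removes the deployment's resources.
	Delete(ctx context.Context) error
}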
Also to note is that, for a network, all resources are placed in a namespace
named after the <network> itself.
Example
- User wishes to deploy ethereum network using hoodi with reth and lighthouse:
  obol network install ethereum --network hoodi --executionClient reth --consensusClient lighthouse
  This will deploy all resources to the ethereum namespace.
- User wishes to deploy another ethereum network on mainnet with geth and prysm:
  obol network install ethereum --network mainnet --executionClient geth --consensusClient prysm
  This will deploy all resources to the ethereum namespace.
- User wishes to deploy a second ethereum network using hoodi with reth and lighthouse:
  obol network install ethereum --network hoodi --executionClient reth --consensusClient lighthouse
  This will deploy all resources to the ethereum namespace.
Issue
The user now has 3 deployments, 2 duplicates, existing in the ethereum
namespace.
- How should the user update any one of them?
- How should the user delete any one of them?
The answer to those questions requires a strategy to define uniqueness per
deployment instance, such that each network's resources are contained and do not
overlap.
Plan
Unique namespaces per deployment
By enforcing a rule that each network deployment is uniquely namespaced, it
would isolate all resources pertaining to it. This allows for easy network
deletion (just delete the namespace in question) and for the clean deployment of
multiple networks of the same/similar configuration.
Naming schemes may be:
- petname (nervous-otter, laughing-elephant)
- date-timestamp
- hash from inputs (this would prevent duplicate deployments)
- user-selection
This would be prefixed by the network in question but should be unique.
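A rough sketch of the hash-from-inputs scheme (the package and function names are
illustrative, not existing code):

// Illustrative only: derive a unique, network-prefixed namespace from the deployment
// inputs. Hashing a deterministic ordering of the inputs means identical
// configurations produce identical namespaces, which prevents duplicate deployments.
package namespace

import (
	"crypto/sha256"
	"encoding/hex"
	"sort"
	"strings"
)

// ForDeployment returns e.g. "ethereum-3f9a2c1" for
// (ethereum, {network: hoodi, executionClient: reth, consensusClient: lighthouse}).
func ForDeployment(network string, values map[string]string) string {
	keys := make([]string, 0, len(values))
	for k := range values {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	h := sha256.New()
	h.Write([]byte(network))
	for _, k := range keys {
		h.Write([]byte(k + "=" + values[k] + ";"))
	}
	return strings.ToLower(network) + "-" + hex.EncodeToString(h.Sum(nil))[:7]
}

A petname or timestamp scheme would be a drop-in alternative if duplicate
configurations should be allowed to coexist.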
NOTE: Potentially, labelling/tagging of resources may also achieve the same
effect. The goal ultimately is to uniquely silo resources to individual
deployments (which map to user intents).
Local helmfiles as source of truth
Simplify this:
# /internal/embed/networks/ethereum/helmfile.yaml.gotmpl
# Configuration via environment variables
# Override with: ETHEREUM_NETWORK, ETHEREUM_EXECUTION_CLIENT, ETHEREUM_CONSENSUS_CLIENT
values:
# @enum mainnet,sepolia,holesky,hoodi
# @description Blockchain network to deploy
- network: {{ env "ETHEREUM_NETWORK" | default "mainnet" }}
# @enum reth,geth,nethermind,besu,erigon,ethereumjs
# @description Execution layer client
executionClient: {{ env "ETHEREUM_EXECUTION_CLIENT" | default "reth" }}
# @enum lighthouse,prysm,teku,nimbus,lodestar,grandine
# @description Consensus layer client
consensusClient: {{ env "ETHEREUM_CONSENSUS_CLIENT" | default "lighthouse" }}
---
...

to this:
# /internal/embed/networks/ethereum/helmfile.yaml.gotmpl
values:
# @enum mainnet,sepolia,holesky,hoodi
# @default mainnet
# @description Blockchain network to deploy
- network: {{.Network}}
# @enum reth,geth,nethermind,besu,erigon,ethereumjs
# @default geth
# @description Execution layer client
executionClient: {{.ExecutionClient}}
# @enum lighthouse,prysm,teku,nimbus,lodestar,grandine
# @default lighthouse
# @description Consensus layer client
consensusClient: {{.ConsensusClient}}
# @description namespace, e.g ethereum_abcdefg
namespace: {{.namespace}}
---
...

I would propose that instead of relying entirely on helmfile environment
variable hijacking, we make the templating two-stage. This would make the
installation steps as follows:
- At build time, the values section is parsed and cli-args are codegen'd similar to before.
- User runs obol network install ethereum --network hoodi --executionClient reth --consensusClient prysm
- The values block is templated ahead of time with fixed values as so:

  values:
  # @enum mainnet,sepolia,holesky,hoodi
  # @default mainnet
  # @description Blockchain network to deploy
  - network: hoodi
  # @enum reth,geth,nethermind,besu,erigon,ethereumjs
  # @default geth
  # @description Execution layer client
  executionClient: reth
  # @enum lighthouse,prysm,teku,nimbus,lodestar,grandine
  # @default lighthouse
  # @description Consensus layer client
  consensusClient: prysm
  ---
  ...

- Using a unique namespace identifier, the embedded network dir with the
  templated helmfile is copied to $OBOL_CONFIG_DIR/networks/ethereum/<namespace>/*
- Process calls
  helmfile sync -f $OBOL_CONFIG_DIR/networks/ethereum/<unique_namespace>/helmfile.yaml.gotmpl
  which templates the likes of {{ .Values.* }}, generates the yaml and applies the new state.
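Continuing the same illustrative package (the embedded FS and all names remain
assumptions, not the actual obol code), stage one could be sketched roughly as
below; helmfile sync then performs the second stage:

// Illustrative sketch of stage one of the proposed two-stage templating: render only
// the values block (the first YAML document) of the embedded helmfile with the
// CLI-supplied values, leave the rest for helmfile itself, and write the result under
// $OBOL_CONFIG_DIR. Uses "bytes", "os", "path/filepath", "strings" and "text/template"
// in addition to the imports above.
// params uses keys matching the placeholders, e.g.
// {"Network": "hoodi", "ExecutionClient": "reth", "ConsensusClient": "prysm"}.
func templateNetwork(network, namespace string, params map[string]string) (string, error) {
	src, err := embedded.ReadFile("networks/" + network + "/helmfile.yaml.gotmpl")
	if err != nil {
		return "", err
	}

	// Only the values block above the first "---" is templated ahead of time;
	// everything below it still contains {{ .Values.* }} for helmfile to resolve.
	parts := strings.SplitN(string(src), "\n---\n", 2)
	tmpl, err := template.New("values").Parse(parts[0])
	if err != nil {
		return "", err
	}

	params["namespace"] = namespace // e.g. ethereum-3f9a2c1
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, params); err != nil {
		return "", err
	}
	if len(parts) == 2 {
		buf.WriteString("\n---\n" + parts[1])
	}

	// $OBOL_CONFIG_DIR/networks/ethereum/<namespace>/helmfile.yaml.gotmpl
	dir := filepath.Join(os.Getenv("OBOL_CONFIG_DIR"), "networks", network, namespace)
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return "", err
	}
	path := filepath.Join(dir, "helmfile.yaml.gotmpl")
	if err := os.WriteFile(path, buf.Bytes(), 0o644); err != nil {
		return "", err
	}
	// Stage two: helmfile sync -f <path> templates the remaining {{ .Values.* }}
	// references, generates the yaml and applies the new state.
	return path, nil
}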
Doing this allows us to give the user control over a particular deployment.
Potentially this could be two commands: an install command which templates all
configuration files to the local user's config dir, and then a deploy command
which does the sync against a particular network/namespace.
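For example (hypothetically, since neither the split command nor a namespace
flag exists today): obol network install ethereum --network hoodi --executionClient reth --consensusClient lighthouse
would only write the templated files to $OBOL_CONFIG_DIR/networks/ethereum/<namespace>/,
and obol network deploy ethereum --namespace <namespace> would then run the
helmfile sync against that directory.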