Commit a11058b

Add technical documentation
1 parent 71bb7f0 commit a11058b

File tree

2 files changed: +65 -1 lines


_toc.yml

Lines changed: 3 additions & 1 deletion

```diff
@@ -2,4 +2,6 @@
 # Learn more at https://jupyterbook.org/customize/toc.html
 
 format: jb-book
-root: index
+root: index
+chapters:
+- file: worker
```

worker.md

Lines changed: 62 additions & 0 deletions
# Worker

The OpenLambda **worker** is the core server-side component of a node. It listens for
incoming HTTP requests, manages container lifecycle, and returns responses to callers.

## Overview

Each worker is a standalone Go binary that exposes a single HTTP endpoint:

```
POST /runLambda/<lambda-name>
```
13+
14+
When a request arrives the worker:
15+
16+
1. Checks whether the lambda's container image is already present on the node; if not, pulls it from the registry.
17+
2. Starts a Linux container from the image.
18+
3. Passes the request payload to the lambda function running inside the container.
19+
4. Waits for the function to return a result, then forwards that result back to the caller.
20+
5. Optionally keeps the container warm for a short period to reduce cold-start latency on subsequent calls.
21+
22+
## Configuration

The worker is configured via a JSON file (default `config.json`) in the working directory.
Key fields:

| Field | Description | Default |
|---|---|---|
| `worker_port` | Port the HTTP server listens on | `8080` |
| `registry` | URL of the lambda registry | `""` |
| `sandbox` | Container backend (`docker` or `sock`) | `docker` |
| `log_output` | Where to write logs (`stdout` or a file path) | `stdout` |
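As an illustration, a minimal `config.json` matching the defaults above might look like the following; the exact field types (string vs. number for the port) are an assumption here, so check the project's own documentation for the authoritative schema.

```json
{
    "worker_port": "8080",
    "registry": "",
    "sandbox": "docker",
    "log_output": "stdout"
}
```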
## Starting the Worker

```bash
# From the repo root after building:
./bin/worker --config config.json
```
The worker prints its listening address on startup. You can verify it is running with:

```bash
curl -w "\n" localhost:8080/status
```
## Deploying Multiple Workers

Workers are stateless with respect to routing: each one operates independently. To scale
horizontally, start one worker process per node and place a standard HTTP load balancer
(Nginx, HAProxy, or similar) in front of them. No coordination between workers is required.

```{note}
A centralized **boss** component for cluster-wide management is currently under development.
Until then, manual deployment behind a load balancer is the recommended approach for
multi-node setups.
```
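For example, a minimal Nginx configuration fronting two workers could look like the sketch below. The node addresses and the `ol_workers` upstream name are placeholders, not values from this project.

```nginx
# Hypothetical: round-robin load balancing across two worker nodes.
upstream ol_workers {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://ol_workers;
    }
}
```

Because workers require no coordination, adding capacity is just a matter of starting another worker process and appending its address to the upstream list.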
## Further Reading

- [Quickstart guide](doc.htm): get a single worker running locally in minutes.
- [SOCK: Rapid Task Provisioning with Serverless-Optimized Containers](https://www.usenix.org/conference/atc18/presentation/oakes): the research paper describing the container backend.
