10 changes: 7 additions & 3 deletions .github/workflows/ci-test.yml
@@ -23,9 +23,9 @@ jobs:
strategy:
fail-fast: false
matrix:
python-version: ["3.9", "3.10", "3.11", "3.12", "3.13"]
python-version: ["3.10", "3.11", "3.12", "3.13"]
os: ["ubuntu-latest", "macOS-latest", "windows-latest"]
backend: ["local", "mongodb", "postgres", "redis"]
backend: ["local", "mongodb", "postgres", "redis", "s3"]
exclude:
# TODO: bring it back when the connection becomes stable
# or resolve using `InMemoryMongoClient`
@@ -65,7 +65,7 @@ jobs:

- name: Unit tests (local)
if: matrix.backend == 'local'
run: pytest -m "not mongo and not sql and not redis" --cov=cachier --cov-report=term --cov-report=xml:cov.xml
run: pytest -m "not mongo and not sql and not redis and not s3" --cov=cachier --cov-report=term --cov-report=xml:cov.xml

- name: Setup docker (missing on MacOS)
if: runner.os == 'macOS' && matrix.backend == 'mongodb'
@@ -135,6 +135,10 @@ jobs:
if: matrix.backend == 'redis'
run: pytest -m redis --cov=cachier --cov-report=term --cov-report=xml:cov.xml

- name: Unit tests (S3)
if: matrix.backend == 's3'
run: pytest -m s3 --cov=cachier --cov-report=term --cov-report=xml:cov.xml

- name: Upload coverage to Codecov (non PRs)
continue-on-error: true
uses: codecov/codecov-action@v5
29 changes: 22 additions & 7 deletions README.rst
@@ -59,6 +59,7 @@ Current features
* Cross-machine caching using MongoDB.
* SQL-based caching using SQLAlchemy-supported databases.
* Redis-based caching for high-performance scenarios.
* S3-based caching for cross-machine object storage backends.

* Thread-safety.
* **Per-call max age:** Specify a maximum age for cached values per call.
@@ -71,7 +72,6 @@ Cachier is **NOT**:
Future features
---------------

-* S3 core.
* Multi-core caching.
* `Cache replacement policies <https://en.wikipedia.org/wiki/Cache_replacement_policies>`_

@@ -580,6 +580,12 @@ Cachier supports Redis-based caching for high-performance scenarios. Redis provi
- ``processing``: Boolean, is value being calculated
- ``completed``: Boolean, is value calculation completed

**S3 Sync/Async Support:**

- Sync functions use direct boto3 calls.
- Async functions are supported via thread-offloaded sync boto3 calls
  (delegated mode), not a native async client (see the sketch below).
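
A minimal sketch of that delegation, assuming plain boto3 and the standard
library (the helper names ``_get_object_bytes`` and ``aget_object_bytes`` are
hypothetical, not cachier internals):

.. code-block:: python

    import asyncio

    import boto3


    def _get_object_bytes(bucket: str, key: str) -> bytes:
        """Blocking boto3 read, the same call the sync path makes."""
        s3 = boto3.client("s3")
        return s3.get_object(Bucket=bucket, Key=key)["Body"].read()


    async def aget_object_bytes(bucket: str, key: str) -> bytes:
        """Delegated mode: run the blocking read in a worker thread."""
        # asyncio.to_thread keeps the event loop free; no aioboto3 required.
        return await asyncio.to_thread(_get_object_bytes, bucket, key)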

**Limitations & Notes:**

- Requires SQLAlchemy (install with ``pip install SQLAlchemy``)
@@ -631,6 +637,11 @@ async drivers and require the client or engine type to match the decorated funct
- ``redis_client`` must be a sync client or sync callable for sync functions and
an async callable returning a ``redis.asyncio.Redis`` client for async
functions. Passing a sync callable to an async function raises ``TypeError``.
* - **S3**
- Yes
- Yes (delegated)
- Async support is delegated via thread-offloaded sync boto3 calls
  (``asyncio.to_thread``). No async S3 client is required; a usage sketch
  follows the table.
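
For instance, an async function can use the S3 backend directly. The following
is a sketch only; the bucket and region mirror ``examples/s3_example.py``, and
``slow_square`` is illustrative (the bucket must already exist):

.. code-block:: python

    import asyncio

    from cachier import cachier


    @cachier(backend="s3", s3_bucket="my-cachier-bucket", s3_region="us-east-1")
    async def slow_square(n: int) -> int:
        await asyncio.sleep(0.1)  # stand-in for real async work
        return n * n


    async def main() -> None:
        first = await slow_square(4)   # computed, then written to S3 in a thread
        second = await slow_square(4)  # read back from S3, also via a thread
        assert first == second


    asyncio.run(main())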


Contributing
@@ -655,13 +666,14 @@ Install in development mode with test dependencies for local cores (memory and p
cd cachier
pip install -e . -r tests/requirements.txt

-Each additional core (MongoDB, Redis, SQL) requires additional dependencies. To install all dependencies for all cores, run:
+Each additional core (MongoDB, Redis, SQL, S3) requires additional dependencies. To install all dependencies for all cores, run:

.. code-block:: bash

pip install -r tests/requirements_mongodb.txt
pip install -r tests/requirements_redis.txt
pip install -r tests/requirements_postgres.txt
pip install -r tests/requirements_s3.txt

Running the tests
-----------------
@@ -724,7 +736,7 @@ This script automatically handles Docker container lifecycle, environment variab
.. code-block:: bash

make test-mongo-local # Run MongoDB tests with Docker
-make test-all-local # Run all backends with Docker
+make test-all-local # Run all backends locally (Docker used for mongo/redis/sql)
make test-mongo-inmemory # Run with in-memory MongoDB (default)

**Option 3: Manual setup**
@@ -750,18 +762,21 @@ Contributors are encouraged to test against a real MongoDB instance before submi
Testing all backends locally
-----------------------------

-To test all cachier backends (MongoDB, Redis, SQL, Memory, Pickle) locally with Docker:
+To test all cachier backends (MongoDB, Redis, SQL, S3, Memory, Pickle) locally:

.. code-block:: bash

# Test all backends at once
./scripts/test-local.sh all

-# Test only external backends (MongoDB, Redis, SQL)
+# Test only external backends that require Docker (MongoDB, Redis, SQL)
./scripts/test-local.sh external

# Test S3 backend only (uses moto, no Docker needed)
./scripts/test-local.sh s3

# Test specific combinations
-./scripts/test-local.sh mongo redis
+./scripts/test-local.sh mongo redis s3

# Keep containers running for debugging
./scripts/test-local.sh all -k
@@ -772,7 +787,7 @@ To test all cachier backends (MongoDB, Redis, SQL, Memory, Pickle) locally with
# Test multiple files across all backends
./scripts/test-local.sh all -f tests/test_main.py -f tests/test_redis_core_coverage.py

-The unified test script automatically manages Docker containers, installs required dependencies, and runs the appropriate test suites. The ``-f`` / ``--files`` option allows you to run specific test files instead of the entire test suite. See ``scripts/README-local-testing.md`` for detailed documentation.
+The unified test script automatically manages Docker containers for MongoDB/Redis/SQL, installs required dependencies (including ``tests/requirements_s3.txt`` for S3), and runs the appropriate test suites. The ``-f`` / ``--files`` option allows you to run specific test files instead of the entire test suite. See ``scripts/README-local-testing.md`` for detailed documentation.
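
Because the S3 suite runs on moto rather than a live bucket, an ``s3``-marked
test can be sketched roughly as follows (this assumes moto 5's ``mock_aws``
decorator; the bucket and function names are illustrative):

.. code-block:: python

    import boto3
    import pytest
    from moto import mock_aws

    from cachier import cachier


    @pytest.mark.s3
    @mock_aws
    def test_s3_cache_roundtrip():
        # moto intercepts boto3 calls; no credentials or Docker are needed.
        boto3.client("s3", region_name="us-east-1").create_bucket(Bucket="test-bucket")
        calls = []

        @cachier(backend="s3", s3_bucket="test-bucket", s3_region="us-east-1")
        def double(x: int) -> int:
            calls.append(x)
            return x * 2

        double.clear_cache()
        assert double(3) == 6
        assert double(3) == 6  # second call is served from the mocked S3 cache
        assert len(calls) == 1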


Running pre-commit hooks locally
208 changes: 208 additions & 0 deletions examples/s3_example.py
@@ -0,0 +1,208 @@
"""Cachier S3 backend example.

Demonstrates persistent function caching backed by AWS S3 (or any S3-compatible
service). Requires boto3 to be installed::

pip install cachier[s3]

A real S3 bucket (or a local S3-compatible service such as MinIO / localstack)
is needed to run this example. Adjust the configuration variables below to
match your environment.

"""

import time
from datetime import timedelta

try:
import boto3

from cachier import cachier
except ImportError as exc:
print(f"Missing required package: {exc}")
print("Install with: pip install cachier[s3]")
raise SystemExit(1) from exc

# ---------------------------------------------------------------------------
# Configuration - adjust these to your environment
# ---------------------------------------------------------------------------
BUCKET_NAME = "my-cachier-bucket"
REGION = "us-east-1"

# Optional: point to a local S3-compatible service
# ENDPOINT_URL = "http://localhost:9000" # MinIO default
ENDPOINT_URL = None


# ---------------------------------------------------------------------------
# Helper: verify S3 connectivity
# ---------------------------------------------------------------------------


def _check_bucket(client, bucket: str) -> bool:
"""Return True if the bucket is accessible."""
try:
client.head_bucket(Bucket=bucket)
return True
except Exception as exc:
print(f"Cannot access bucket '{bucket}': {exc}")
return False


# ---------------------------------------------------------------------------
# Demos
# ---------------------------------------------------------------------------


def demo_basic_caching():
"""Show basic S3 caching: the first call computes, the second reads cache."""
print("\n=== Basic S3 caching ===")

@cachier(
backend="s3",
s3_bucket=BUCKET_NAME,
s3_region=REGION,
s3_endpoint_url=ENDPOINT_URL,
)
def expensive(n: int) -> int:
"""Simulate an expensive computation."""
print(f" computing expensive({n})...")
time.sleep(1)
return n * n

expensive.clear_cache()

start = time.time()
r1 = expensive(5)
t1 = time.time() - start
print(f"First call: {r1} ({t1:.2f}s)")

start = time.time()
r2 = expensive(5)
t2 = time.time() - start
print(f"Second call: {r2} ({t2:.2f}s) - from cache")

assert r1 == r2
assert t2 < t1
print("Basic caching works correctly.")


def demo_stale_after():
"""Show stale_after: results expire and are recomputed after the timeout."""
print("\n=== Stale-after demo ===")

@cachier(
backend="s3",
s3_bucket=BUCKET_NAME,
s3_region=REGION,
s3_endpoint_url=ENDPOINT_URL,
stale_after=timedelta(seconds=3),
)
def timed(n: int) -> float:
print(f" computing timed({n})...")
return time.time()

timed.clear_cache()
r1 = timed(1)
r2 = timed(1)
assert r1 == r2, "Second call should hit cache"

print("Sleeping 4 seconds so the entry becomes stale...")
time.sleep(4)

r3 = timed(1)
assert r3 > r1, "Should have recomputed after stale period"
print("Stale-after works correctly.")


def demo_client_factory():
"""Show using a callable factory instead of a pre-built client."""
print("\n=== Client factory demo ===")

def make_client():
"""Lazily create a boto3 S3 client."""
kwargs = {"region_name": REGION}
if ENDPOINT_URL:
kwargs["endpoint_url"] = ENDPOINT_URL
return boto3.client("s3", **kwargs)

@cachier(
backend="s3",
s3_bucket=BUCKET_NAME,
s3_client_factory=make_client,
)
def compute(n: int) -> int:
return n + 100

compute.clear_cache()
assert compute(7) == compute(7)
print("Client factory works correctly.")


def demo_cache_management():
"""Show clear_cache and overwrite_cache."""
print("\n=== Cache management demo ===")
call_count = [0]

@cachier(
backend="s3",
s3_bucket=BUCKET_NAME,
s3_region=REGION,
s3_endpoint_url=ENDPOINT_URL,
)
def managed(n: int) -> int:
call_count[0] += 1
return n * 3

managed.clear_cache()
managed(10)
managed(10)
assert call_count[0] == 1, "Should have been called once (cached on second call)"

managed.clear_cache()
managed(10)
assert call_count[0] == 2, "Should have recomputed after cache clear"

managed(10, cachier__overwrite_cache=True)
assert call_count[0] == 3, "Should have recomputed due to overwrite_cache"
print("Cache management works correctly.")


# ---------------------------------------------------------------------------
# Entry point
# ---------------------------------------------------------------------------


def main():
"""Run all S3 backend demos."""
print("Cachier S3 Backend Demo")
print("=" * 50)

client = boto3.client(
"s3",
region_name=REGION,
**({"endpoint_url": ENDPOINT_URL} if ENDPOINT_URL else {}),
)

if not _check_bucket(client, BUCKET_NAME):
print(f"\nCreate the bucket first: aws s3 mb s3://{BUCKET_NAME} --region {REGION}")
raise SystemExit(1)

try:
demo_basic_caching()
demo_stale_after()
demo_client_factory()
demo_cache_management()

print("\n" + "=" * 50)
print("All S3 demos completed successfully.")
print("\nKey benefits of the S3 backend:")
print("- Persistent cache survives process restarts")
print("- Shared across machines without a running service")
print("- Works with any S3-compatible object storage")
finally:
client.close()


if __name__ == "__main__":
main()
20 changes: 20 additions & 0 deletions pyproject.toml
@@ -50,6 +50,25 @@ dependencies = [
"pympler>=1",
"watchdog>=2.3.1",
]

optional-dependencies.all = [
"boto3>=1.26",
"pymongo>=4",
"redis>=4",
"sqlalchemy>=2",
]
optional-dependencies.mongo = [
"pymongo>=4",
]
optional-dependencies.redis = [
"redis>=4",
]
optional-dependencies.s3 = [
"boto3>=1.26",
]
optional-dependencies.sql = [
"sqlalchemy>=2",
]
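# With the extras above, users can pull in only the backends they need, e.g.
# `pip install "cachier[s3]"` for boto3 alone or `pip install "cachier[all]"`
# for every backend dependency.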
urls.Source = "https://github.com/python-cachier/cachier"
# --- setuptools ---

@@ -177,6 +196,7 @@ markers = [
"pickle: test the pickle core",
"redis: test the Redis core",
"sql: test the SQL core",
"s3: test the S3 core",
"maxage: test the max_age functionality",
"asyncio: marks tests as async",
]