A Spring Boot library for high-performance, fault-tolerant Kafka message processing. It eliminates boilerplate by providing automatic retries, dead letter queue (DLQ) management, circuit breakers, and batch processing with zero-config resiliency.
The library is optimized for high throughput and low latency. Benchmarks with 500,000 messages (6,000 batch size, 2s window) demonstrate its efficiency.
| Threading model | Latency | Best for |
|---|---|---|
| Platform threads (default) | ~350ms | Ultra-low latency, standard workloads |
| Virtual threads | ~380ms | Massive scale, thousands of concurrent topics (Java 21+) |

Enable Virtual Threads:

```properties
spring.threads.virtual.enabled=true
```

- Production-Ready Reliability: Automatic backoff strategies (Exponential, Linear, Fibonacci, Jitter), DLQ management with full metadata, and crash-safe batch processing.
- Operational Excellence: Built-in OpenTelemetry tracing, Micrometer metrics, and a resilience4j Circuit Breaker.
- Zero-Config Deserialization: Smart type inference for POJOs, primitives, and polymorphic events without strict header requirements.
- Developer Tools: Embedded DLQ Dashboard and REST API for message replay and inspection.
Add the dependency to your pom.xml:

```xml
<dependency>
    <groupId>java.damero</groupId>
    <artifactId>kafka-damero</artifactId>
    <version>1.0.3</version>
</dependency>
```

Configure the Kafka bootstrap servers in application.properties:

```properties
spring.kafka.bootstrap-servers=localhost:9092
```

Annotate your listener with @DameroKafkaListener. The library handles the rest.
```java
@Service
public class OrderListener {

    @DameroKafkaListener(
            topic = "orders",
            dlqTopic = "orders-dlq",
            maxAttempts = 3,
            delay = 1000,
            delayMethod = DelayMethod.EXPO
    )
    @KafkaListener(topics = "orders", groupId = "order-processor")
    public void processOrder(ConsumerRecord<String, OrderEvent> record, Acknowledgment ack) {
        OrderEvent order = record.value();
        // Your business logic
        processPayment(order);
        // Acknowledgement is managed automatically on success
        ack.acknowledge();
    }
}
```

The library uses a smart deserializer that adapts to the payload, removing the need for strict producers.
The library infers the expected type from your @KafkaListener signature.

- POJOs: Auto-deserialized to your class (e.g., `OrderEvent`).
- Primitives: Raw JSON values (`true`, `123`) map to `Boolean`, `Integer`, etc.
- No Headers Needed: Works with producers (Python, Node.js) that do not send Java `__TypeId__` headers.
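For example, a plain Apache Kafka producer that sends raw JSON strings and no type headers can feed such a listener directly. This is a sketch using the standard `kafka-clients` producer API; the topic name and payload fields are illustrative, not part of this library:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PlainJsonProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Raw JSON body with no __TypeId__ header; on the consumer side,
            // the target type is inferred from the listener signature.
            String json = "{\"orderId\":\"o-1\",\"amount\":49.99}";
            producer.send(new ProducerRecord<>("orders", "o-1", json));
        }
    }
}
```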
To accept multiple message types in a single consumer, use ConsumerRecord<String, Object>:

```java
@KafkaListener(topics = "mixed-events")
public void handleMixed(ConsumerRecord<String, Object> record) {
    if (record.value() instanceof OrderEvent) { ... }
    else if (record.value() instanceof UserEvent) { ... }
}
```

DLQ messages are internally wrapped with metadata (attempts, timestamps). When replayed to your consumer, the library automatically unwraps them, so your listener always receives the original domain object.
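The instanceof dispatch in the mixed-type consumer is plain Java; with pattern matching (Java 16+) it can be written without casts. A self-contained sketch with hypothetical event records:

```java
public class MixedDispatchDemo {
    // Hypothetical domain events for illustration
    record OrderEvent(String id) {}
    record UserEvent(String name) {}

    static String handle(Object value) {
        // Pattern-matching instanceof binds the casted variable in one step
        if (value instanceof OrderEvent o) return "order:" + o.id();
        else if (value instanceof UserEvent u) return "user:" + u.name();
        return "unknown";
    }
}
```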
Optimized for high throughput (10x-50x faster than per-message processing). Messages are acknowledged only when the full batch succeeds.

```java
@DameroKafkaListener(
        topic = "analytics",
        batchCapacity = 5000,      // Process once 5000 messages have accumulated
        batchWindowLength = 10000, // Or every 10 seconds
        fixedWindow = false        // false = Max Throughput, true = Predictable Rate
)
```

Get full visibility into retries, DLQ routing, and batch latency.
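Putting the batch attributes together, a listener might look like the following. This is a sketch: the `List<ConsumerRecord<...>>` parameter shape, the `AnalyticsEvent` type, and the `analyticsStore` collaborator are assumptions for illustration, not confirmed by this README:

```java
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Service;

@Service
public class AnalyticsListener {

    @DameroKafkaListener(
            topic = "analytics",
            batchCapacity = 5000,
            batchWindowLength = 10000,
            fixedWindow = false
    )
    @KafkaListener(topics = "analytics", groupId = "analytics-processor")
    public void processBatch(List<ConsumerRecord<String, AnalyticsEvent>> records, Acknowledgment ack) {
        // Hypothetical bulk write; the whole batch is acknowledged only on success
        analyticsStore.saveAll(records.stream().map(ConsumerRecord::value).toList());
        ack.acknowledge();
    }
}
```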
- Add `opentelemetry-sdk` and `opentelemetry-exporter-otlp` dependencies.
- Enable in the annotation: `openTelemetry = true`.
- Configure the exporter in properties: `otel.exporter.otlp.endpoint=http://localhost:4317`.
Route specific exceptions to dedicated DLQ topics (e.g., maintain strict ordering for timeouts but separate validation errors).

```java
dlqRoutes = {
    @DlqExceptionRoutes(exception = ValidationException.class, dlqExceptionTopic = "dlq-invalid", skipRetry = true),
    @DlqExceptionRoutes(exception = TimeoutException.class, dlqExceptionTopic = "dlq-retry", skipRetry = false)
}
```

Automatically stop processing when error rates spike.
```java
enableCircuitBreaker = true,
circuitBreakerFailureThreshold = 50, // Open circuit after 50 failures
circuitBreakerWaitDuration = 60000   // Wait 60s before half-open check
```

Visual interface for inspecting and replaying DLQ messages.
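In context, the circuit breaker and exception routing attributes sit on the same @DameroKafkaListener annotation as the retry settings. A sketch combining the fragments above; the topic names are illustrative:

```java
@DameroKafkaListener(
        topic = "payments",
        dlqTopic = "payments-dlq",
        maxAttempts = 3,
        enableCircuitBreaker = true,
        circuitBreakerFailureThreshold = 50,
        circuitBreakerWaitDuration = 60000,
        dlqRoutes = {
            @DlqExceptionRoutes(exception = ValidationException.class, dlqExceptionTopic = "dlq-invalid", skipRetry = true)
        }
)
```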
- URL: `http://localhost:8080/dlq/dashboard`
- API:
  - `GET /dlq?topic={topic}`: List messages
  - `POST /dlq/replay/{topic}?forceReplay=true`: Replay all messages
  - `POST /dlq/replay/{topic}?skipValidation=true`: Replay ignoring validation
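The replay endpoint can also be driven programmatically. A sketch using the JDK's built-in `java.net.http.HttpClient` against the endpoints listed above; the base URL and topic name are illustrative:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class DlqReplayClient {
    // Build a POST request that replays all messages on the given DLQ topic
    static HttpRequest replayRequest(String baseUrl, String topic) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/dlq/replay/" + topic + "?forceReplay=true"))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
    }
}
```

Send it with `HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())` against a running instance.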
For multi-instance deployments, the library supports Redis for distributed deduplication and retry tracking. It degrades gracefully to a local Caffeine cache if Redis is unavailable.

```properties
spring.data.redis.host=localhost
spring.data.redis.port=6379
```

The library follows Spring Boot standards. You can override any internal bean (e.g., retryOrchestrator, dlqRouter) by defining your own with the same name. Disable auto-configuration if needed:

```properties
custom.kafka.auto-config.enabled=false
```
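Overriding follows the standard Spring same-name bean pattern. A sketch only: `RetryOrchestrator` and `CustomRetryOrchestrator` are placeholder types standing in for whatever type the library's `retryOrchestrator` bean actually has; check the library's sources for the real interface:

```java
@Configuration
public class KafkaDameroOverrides {

    // Replaces the library's default bean because the bean name matches.
    // RetryOrchestrator is a placeholder type, not confirmed by this README.
    @Bean(name = "retryOrchestrator")
    public RetryOrchestrator retryOrchestrator() {
        return new CustomRetryOrchestrator();
    }
}
```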