Complete Guide to Setting Up EFK Stack with Kafka, Redis, Beats, and Spring Boot for Microservices Logging

This architecture describes a centralized logging system for microservices using the EFK Stack (Elasticsearch, Fluentd, Kibana), Kafka, Redis, and Beats. Here's a brief breakdown of the flow:

  1. Microservices (Spring Boot):
    Each microservice generates logs, which are collected by Beats (e.g., Filebeat or Metricbeat).

  2. Beats:
    Beats agents forward the log data to Kafka.

  3. Kafka:
    Kafka acts as a buffer and ensures reliable delivery of log messages to the next stage.

  4. Redis:
    Redis can serve as an optional intermediate buffer or queue, smoothing out bursts in log traffic before it reaches the processing pipeline.

  5. Fluentd:
    Fluentd processes, transforms, and enriches log data before forwarding it to Elasticsearch.

  6. Elasticsearch:
    Stores and indexes the log data for search and analysis.

  7. Kibana:
    Provides a user-friendly interface for visualizing and analyzing logs from Elasticsearch.


Here’s a step-by-step guide to implementing an end-to-end centralized logging system for Spring Boot microservices using the architecture above. It covers creating the microservices and configuring logging with the EFK Stack and Kafka.


Step 1: Set Up EFK Stack

1.1 Elasticsearch

  • Install Elasticsearch locally or use a cloud-hosted version (Elastic Cloud).
  • Configure elasticsearch.yml for network binding and authentication.
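A minimal single-node development configuration might look like this (values are illustrative; bind to a specific interface and enable security for anything beyond local development):

```yaml
# elasticsearch.yml - minimal single-node development setup
cluster.name: logging-cluster
node.name: node-1
network.host: 0.0.0.0
http.port: 9200
discovery.type: single-node
# For local development only; enable security in production
xpack.security.enabled: false
```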

1.2 Kibana

  • Install Kibana and connect it to Elasticsearch in kibana.yml:
    elasticsearch.hosts: ["http://localhost:9200"]

1.3 Fluentd

  • Install Fluentd and set up a configuration file (fluent.conf) to forward logs to Elasticsearch:
    <source>
      @type kafka
      brokers localhost:9092
      topics spring-boot-logs
      format json
    </source>
    
    <match **>
      @type elasticsearch
      host localhost
      port 9200
      logstash_format true
    </match>
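Note that the kafka source and elasticsearch output are not bundled with Fluentd itself; they come from separate plugins, which can be installed with:

```shell
# Install the Kafka input and Elasticsearch output plugins
fluent-gem install fluent-plugin-kafka
fluent-gem install fluent-plugin-elasticsearch
```

(If you use the td-agent packaging, the equivalent command is td-agent-gem install.)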

1.4 Kafka

  • Install Kafka and start Zookeeper and Kafka brokers.
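With a standard Kafka distribution, the broker and its ZooKeeper dependency can be started from the installation directory roughly as follows (paths assume the stock tarball layout); creating the log topic up front is also a good idea:

```shell
# Start ZooKeeper (required by classic Kafka deployments)
bin/zookeeper-server-start.sh config/zookeeper.properties

# In a second terminal, start the Kafka broker
bin/kafka-server-start.sh config/server.properties

# Create the topic the services will log to
bin/kafka-topics.sh --create --topic spring-boot-logs \
  --bootstrap-server localhost:9092 --partitions 3 --replication-factor 1
```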

Step 2: Create 3 Spring Boot Microservices

2.1 Common Setup

  • Use Spring Initializr (start.spring.io) to create 3 projects:
    • Order Service
    • Payment Service
    • Notification Service
  • Add dependencies:
    • Spring Boot Starter Web
    • Spring Boot Starter Actuator
    • Spring for Apache Kafka (spring-kafka)
    • Logback (for JSON logging) and Lombok (its @Slf4j annotation provides the log field used in the controllers below)

2.2 Configure Centralized Logging

Update application.yml for all services:

logging:
  level:
    root: INFO
  pattern:
    console: '{"timestamp":"%d{yyyy-MM-dd HH:mm:ss}","level":"%5p","service":"%X{service}","thread":"%t","logger":"%c","message":"%m"}%n'

Add service context in the code. MDC values are thread-local, so set them per request in a HandlerInterceptor rather than once at startup:

@Component
public class ServiceInterceptor implements HandlerInterceptor {
    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) {
        MDC.put("service", "OrderService"); // Replace with PaymentService or NotificationService
        return true;
    }

    @Override
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) {
        MDC.remove("service"); // MDC is thread-local; clear it so pooled threads don't leak context
    }
}
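Spring does not apply HandlerInterceptor beans automatically; they must be registered through a WebMvcConfigurer. A minimal registration sketch (the WebConfig class name is illustrative):

```java
// Registers the interceptor so the MDC "service" entry is set for every request
@Configuration
public class WebConfig implements WebMvcConfigurer {

    private final ServiceInterceptor serviceInterceptor;

    public WebConfig(ServiceInterceptor serviceInterceptor) {
        this.serviceInterceptor = serviceInterceptor;
    }

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(serviceInterceptor);
    }
}
```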

2.3 Create Basic Endpoints

Order Service:

@Slf4j
@RestController
@RequestMapping("/orders")
public class OrderController {
    @PostMapping
    public String createOrder(@RequestBody String order) {
        log.info("Order created: {}", order);
        return "Order Created";
    }
}

Payment Service:

@Slf4j
@RestController
@RequestMapping("/payments")
public class PaymentController {
    @PostMapping
    public String processPayment(@RequestBody String payment) {
        log.info("Payment processed: {}", payment);
        return "Payment Processed";
    }
}

Notification Service:

@Slf4j
@RestController
@RequestMapping("/notifications")
public class NotificationController {
    @PostMapping
    public String sendNotification(@RequestBody String notification) {
        log.info("Notification sent: {}", notification);
        return "Notification Sent";
    }
}

Step 3: Kafka Integration

3.1 Add Kafka Configuration

Add Kafka dependencies to pom.xml:

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>

Configure Kafka in application.yml:

spring:
  kafka:
    bootstrap-servers: localhost:9092
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer

3.2 Send Logs to Kafka

In logback-spring.xml, add Kafka Appender:

<configuration>
    <appender name="KAFKA" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <topic>spring-boot-logs</topic>
        <producerConfig>
            bootstrap.servers=localhost:9092
        </producerConfig>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <!-- Emit JSON so Fluentd's "format json" parser can decode each record -->
            <pattern>{"timestamp":"%d{yyyy-MM-dd HH:mm:ss}","level":"%p","service":"%X{service}","logger":"%c","message":"%m"}%n</pattern>
        </encoder>
    </appender>
    <root level="info">
        <appender-ref ref="KAFKA" />
    </root>
</configuration>
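KafkaAppender is not part of core Logback; it ships with the logback-kafka-appender library, which needs its own dependency in pom.xml (check Maven Central for the current version):

```xml
<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0-RC2</version>
</dependency>
```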

Step 4: Fluentd Configuration

  • Configure Fluentd to consume logs from Kafka and send them to Elasticsearch (as shown in Step 1.3).

Step 5: Testing and Verification

  1. Run All Microservices: Start Order, Payment, and Notification services.
  2. Log Requests: Make requests to their respective endpoints using Postman or Curl.
  3. Verify Logs:
    • Check the Kafka topic for log messages using kafka-console-consumer.
    • Confirm the logs reach Elasticsearch (with logstash_format enabled, indices are named logstash-YYYY.MM.dd) and explore them in Kibana.
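The flow can be exercised end to end with curl and the Kafka console consumer. The ports below are assumptions for a local setup where each service runs on its own port; adjust them to your configuration:

```shell
# Generate some log traffic (ports are illustrative)
curl -X POST http://localhost:8080/orders -H "Content-Type: application/json" -d '"order-1"'
curl -X POST http://localhost:8081/payments -H "Content-Type: application/json" -d '"payment-1"'
curl -X POST http://localhost:8082/notifications -H "Content-Type: application/json" -d '"note-1"'

# Watch the log topic to confirm the Kafka appender is publishing
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic spring-boot-logs --from-beginning
```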

Step 6: Optional Enhancements

  • Add a Distributed Tracing System (e.g., Spring Cloud Sleuth + Zipkin) for better observability.
  • Include Log Enrichment with metadata (e.g., request ID, trace ID).
  • Deploy all components using Docker Compose or Kubernetes for production readiness.

This implementation sets up a complete, scalable logging solution for your Spring Boot microservices.

When to Use This Architecture

  1. High Traffic Microservices: Ideal for large systems with many microservices generating high log volumes.

  2. Real-time Monitoring: Needed for real-time log analysis and quick issue detection.

  3. Scalable Infrastructure: Useful when the system grows and requires scalable logging.

  4. Distributed Systems: Aggregates logs from multiple microservices for easier debugging.

  5. Structured Logs: Efficient when logs are in structured formats (e.g., JSON) that need fast querying.

  6. Improved Debugging: Helps trace requests across services for distributed debugging.

When NOT to Use This Architecture

  1. Small Applications: Overkill for small or monolithic apps.

  2. Low Log Volume: Unnecessary for applications with minimal log traffic.

  3. Short-term Projects: Not needed for prototypes or projects without scalability demands.

  4. Limited Resources: Too resource-intensive for teams with limited capacity.

  5. Simple Debugging: Unnecessary for apps with basic logging needs.

