Structured logs, correlation IDs and production debugging

Spring Boot Logging Done Right

Logs aren’t failing you. Your logging strategy is.

Modern systems require structured, contextual, searchable logging. If your logs are still glorified System.out.println statements wrapped in log.info(), you’re flying blind.

Let’s fix that.

⚡ TL;DR (Quick Recap)

  • Use SLF4J as a facade and let Spring Boot auto-configure Logback.
  • Adopt structured JSON logging (native since Spring Boot 3.4).
  • Use MDC + Correlation IDs + Micrometer tracing for cross-service debugging.
  • Log intentionally. Not everything deserves INFO.

Use SLF4J as the Facade, Logback as the Engine

Never bind directly to java.util.logging or Log4j APIs.

Use SLF4J as the abstraction layer. Spring Boot auto-configures Logback — lean into that.

private static final Logger log = LoggerFactory.getLogger(OrderService.class);

This keeps your business code decoupled from the logging implementation.

Stop Using System.out.println

This isn’t just stylistic advice.

System.out.println:

  • Has no log levels
  • Cannot integrate with log aggregators properly
  • Cannot be filtered or tuned at runtime
  • Breaks structured logging entirely

If it’s in your production code, remove it.

Structured Logging: Logs as Data, Not Strings

This is the single biggest improvement you can make.

Old Way

log.info("Order {} confirmed for customer {}", orderId, customerId);

That’s just text.

You can’t aggregate by customerId.
You can’t build dashboards.

Modern Way (Fluent Structured API)

log.atInfo()
   .setMessage("Order confirmed")
   .addKeyValue("orderId", orderId)       // toString() called on the value
   .addKeyValue("customerId", customerId) // null-safe in SLF4J 2.x — logs "null"
   .addKeyValue("totalAmount", amount)
   .log();

Now your logs become structured JSON fields.

  • Fluent API → structured event model
  • JSON output → requires logging.structured.format.console=logstash|ecs

You can:

  • Filter by orderId
  • Count orders per customer
  • Alert on totalAmount > 10_000

That’s operational power.

Enable Native Structured Logging (Spring Boot 3.4)

If you’re starting fresh and don’t need custom field providers, native structured logging removes a dependency. For complex pipelines, logstash-logback-encoder still has more knobs.

With native structured logging, it's one property.

# Requires Spring Boot ≥ 3.4.0
logging.structured.format.console=logstash

Or Elastic Common Schema:

logging.structured.format.console=ecs

That’s it.
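With the property set, each log event is emitted as one JSON object per line. The exact field names depend on the chosen format; this is an illustrative logstash-style line for the fluent example above, not verbatim output:

```json
{
  "@timestamp": "2025-01-15T10:12:30.123Z",
  "level": "INFO",
  "logger_name": "com.myapp.OrderService",
  "thread_name": "http-nio-8080-exec-1",
  "message": "Order confirmed",
  "orderId": "o-1001",
  "customerId": "c-42",
  "totalAmount": 129.9
}
```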

Parameterized Logging — Never Concatenate Strings

This is about performance and correctness.

Bad

log.debug("User: " + user);

The string is built even if DEBUG is disabled.

Good

log.debug("User: {}", user);
// Even better if User contains sensitive fields:
log.debug("User: {}", user.getId()); // log only what you need
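To see why eager concatenation costs you, here's a stdlib-only sketch (no SLF4J; all names are mine) that mimics a disabled DEBUG logger. The concatenating call pays for the expensive rendering even though nothing is printed; the supplier-based call, which is what parameterized logging approximates, never touches the argument:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class LazyLoggingSketch {

    // Counts how many times the "expensive" value is actually rendered.
    static final AtomicInteger renders = new AtomicInteger();

    static String expensiveUser() {
        renders.incrementAndGet(); // stands in for a costly toString()
        return "User[id=42]";
    }

    static final boolean DEBUG_ENABLED = false;

    // Mimics log.debug("..." + value): the message is built before the level check.
    static void eagerDebug(String message) {
        if (DEBUG_ENABLED) System.out.println(message);
    }

    // Mimics parameterized logging: the message is only built if the level is on.
    static void lazyDebug(Supplier<String> message) {
        if (DEBUG_ENABLED) System.out.println(message.get());
    }

    public static void main(String[] args) {
        eagerDebug("User: " + expensiveUser());      // rendered even though DEBUG is off
        lazyDebug(() -> "User: " + expensiveUser()); // never evaluated
        System.out.println("renders=" + renders.get());
    }
}
```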

Best (Structured + Parameterized)

log.atInfo()
   .setMessage("Products retrieved for page {}")
   .addArgument(page)
   .addKeyValue("operation", "getProducts")
   .addKeyValue("page", page)
   .addKeyValue("size", size)
   .addKeyValue("totalElements", totalElements)
   .addKeyValue("returnedCount", content.size())
   .log();

Human-readable. Machine-queryable.

Correlation IDs + MDC: The Production Superpower

In distributed systems, one user action hits multiple services. Without correlation IDs, debugging becomes archaeology.

Add Correlation ID via MDC

import java.io.IOException;
import java.util.Optional;
import java.util.UUID;

import org.slf4j.MDC;
import org.springframework.core.Ordered;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

@Component
@Order(Ordered.HIGHEST_PRECEDENCE)
public class CorrelationIdFilter extends OncePerRequestFilter {

    private static final String CORRELATION_HEADER = "X-Correlation-ID";
    private static final String MDC_KEY = "correlationId";

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain)
            throws ServletException, IOException {
        String correlationId = Optional
                .ofNullable(request.getHeader(CORRELATION_HEADER))
                .filter(s -> !s.isBlank())
                .orElse(UUID.randomUUID().toString());
        MDC.put(MDC_KEY, correlationId);
        response.setHeader(CORRELATION_HEADER, correlationId);
        try {
            chain.doFilter(request, response);
        } finally {
            MDC.remove(MDC_KEY);
        }
    }
}

  • Reads X-Correlation-ID
  • Generates one if missing
  • Stores it in MDC
  • Echoes it back on the response header (propagating it on outgoing HTTP calls is a separate step, e.g. via Micrometer tracing)
  • Clears MDC after the request

Now every log line contains that ID — if your log pattern or structured encoder includes MDC fields.
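For classic pattern-based console output, the MDC value must be referenced explicitly. A minimal logback-spring.xml sketch; the pattern itself is an example, adjust it to your conventions:

```xml
<configuration>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <!-- %X{correlationId} pulls the value the filter put into MDC -->
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level [%X{correlationId}] %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="CONSOLE"/>
  </root>
</configuration>
```

With the native structured formats, MDC entries are included in the JSON output without pattern changes.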

Log Levels Matter (More Than You Think)

Logging everything as INFO is noise.

Use them intentionally:

  • ERROR — Someone needs to be woken up.
  • WARN — Degraded but recoverable.
  • INFO — Significant business events.
  • DEBUG — Developer diagnostics.
  • TRACE — Rarely enabled.

Document your team’s conventions. Enforce them in reviews.

Per-Package Log Levels (Targeted Debugging)

In application.yml:

logging:
  level:
    com.myapp.service: DEBUG
    org.springframework.web: WARN
    org.hibernate.SQL: DEBUG

Fine-grained control prevents log floods and helps isolate issues quickly.
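The same levels can also be set from outside the file through Spring Boot's relaxed binding, which maps environment variables onto properties (handy in containers and CI):

```shell
# Relaxed binding: dots become underscores, everything upper-cased.
# Equivalent to logging.level.com.myapp.service=DEBUG in application.yml.
export LOGGING_LEVEL_COM_MYAPP_SERVICE=DEBUG
```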

Runtime Log Changes with Actuator

No redeploy needed.

# Secure actuator endpoints in application.yml:
# management.endpoints.web.exposure.include=health,info,loggers
# Spring Boot 3.4+: management.endpoint.loggers.access=none | read-only | unrestricted
# OR use Spring Security to require the ACTUATOR_ADMIN role

curl -X POST http://localhost:8080/actuator/loggers/com.myapp.service \
  -H "Content-Type: application/json" \
  -d '{"configuredLevel": "DEBUG"}'

The endpoint must be exposed in application.yml first:

management:
  endpoints:
    web:
      exposure:
        include: loggers
  endpoint:
    loggers:
      enabled: true

Instant insight. Zero downtime.

Never Log Sensitive Data

This is non-negotiable.

Never log:

  • Passwords
  • Credit cards
  • Tokens
  • National IDs
  • Full PII objects

Ensure toString() methods do not leak secrets. Mask aggressively. Audit regularly.
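One concrete tactic is to make objects safe by construction, so even a careless log.info("{}", card) cannot leak. A sketch with stdlib only; the class, fields, and masking rule are mine:

```java
public class PaymentCard {

    private final String pan;       // full card number, must never reach a log
    private final String ownerName;

    public PaymentCard(String pan, String ownerName) {
        this.pan = pan;
        this.ownerName = ownerName;
    }

    // Keep only the last four digits; mask everything else.
    static String maskPan(String pan) {
        if (pan == null || pan.length() <= 4) return "****";
        return "*".repeat(pan.length() - 4) + pan.substring(pan.length() - 4);
    }

    @Override
    public String toString() {
        // Safe by construction: however this object is logged, the PAN stays masked.
        return "PaymentCard[pan=" + maskPan(pan) + ", owner=" + ownerName + "]";
    }

    public static void main(String[] args) {
        System.out.println(new PaymentCard("4111111111111111", "Jane Doe"));
    }
}
```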

Log Less in Code, More in Infrastructure

Spring Boot + Micrometer already provide:

  • HTTP request metrics
  • JVM metrics
  • DB query metrics
  • External HTTP timings

Metrics ≠ logs.

Types:

  • Metrics → aggregated signals
  • Logs → event-level diagnostics
  • Traces → request path visibility

They solve different problems.

Don’t duplicate what the framework already emits. Application logs should focus on business events, not plumbing.

Spring Boot 3.4 vs logstash-logback-encoder

If you’re starting fresh:

  • Use native structured logging.

If you’re migrating:

  • No rush.
  • The encoder still works.
  • Migrate to reduce dependencies and standardize on SLF4J fluent API.

The built-in approach covers 90% of real-world needs with less configuration.

Final Takeaways

Your logs are either:

  • A searchable operational database
  • A wall of expensive noise

Structured logging with SLF4J fluent API, MDC, correlation IDs, and Spring Boot 3.4+ native JSON support turns debugging from guesswork into science.

Enable structured logging today. Add one correlation ID filter. Replace one concatenated log with a structured event. Start small. The next time production fails, you won’t be grepping text. You’ll be querying data.

You can find the example code on GitHub.

Originally posted on marconak-matej.medium.com.