Spring Boot Actuator includes a number of additional features to help us monitor and manage our application when we push it to production. We can choose to manage and monitor our application by using HTTP endpoints or with JMX. Auditing, health, and metrics gathering can also be automatically applied to our application.
Spring Boot Actuator uses Micrometer, an application metrics facade that supports external application monitoring systems like Prometheus, Elastic, Datadog, Graphite and many more.
Getting Started
To start using Spring Boot Actuator and Micrometer, we need to add them as dependencies to our pom.xml file:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
    <version>2.1.0.RELEASE</version>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
    <version>1.1.2</version>
</dependency>
Once the dependencies are added, Spring Boot auto-configures a PrometheusMeterRegistry that collects and exports metrics in a format the Prometheus server can scrape.
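Because the MeterRegistry is now available as a bean, we can inject it anywhere and record our own metrics through Micrometer's facade API. The following is only a minimal sketch; the OrderService class and the orders.created metric name are illustrative and not part of the configuration described here:

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Service;

@Service
public class OrderService {

    private final Counter ordersCreated;

    public OrderService(MeterRegistry meterRegistry) {
        // Register a counter against the auto-configured (Prometheus-backed) registry.
        this.ordersCreated = Counter.builder("orders.created")
                .description("Number of orders created")
                .register(meterRegistry);
    }

    public void createOrder() {
        // ... business logic ...
        ordersCreated.increment();
    }
}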
Actuator Default Endpoints
Spring Boot Actuator endpoints let us monitor and interact with our application. Spring Boot includes a number of built-in endpoints and also lets us add our own, as sketched below.
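A hedged sketch of such a custom endpoint (the id "features" and the returned payload are purely illustrative); once its id is added to management.endpoints.web.exposure.include, Spring Boot exposes it at /actuator/features:

import java.util.Collections;
import java.util.Map;

import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
import org.springframework.stereotype.Component;

@Component
@Endpoint(id = "features")
public class FeaturesEndpoint {

    // Served for GET requests to /actuator/features.
    @ReadOperation
    public Map<String, Boolean> features() {
        return Collections.singletonMap("newCheckout", true);
    }
}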
Here are the most common endpoints that Spring Boot Actuator offers out of the box:
/health Provides basic application health information.
/info Displays arbitrary application info.
/metrics Shows ‘metrics’ information for the current application.
/env Exposes properties from Spring’s ConfigurableEnvironment.
By default, all endpoints except the /shutdown endpoint are enabled. If we want to enable or disable a specific endpoint (e.g. shutdown), we can use:
management.endpoint.shutdown.enabled=true
Or we can enable or disable all endpoints by default:
management.endpoints.enabled-by-default=false
We also have the option to decide which endpoints are exposed. We can do this as in the following example:
management.endpoints.web.exposure.include=health,info,metrics,prometheus
management.endpoints.jmx.exposure.exclude=*
This means that the health, info, metrics and prometheus endpoints are exposed over HTTP and none are exposed over JMX.
By calling the /actuator endpoint (i.e. http://localhost:8080/actuator) we can see all exposed endpoints.
{ "_links": { "self": { "href": “http://localhost:8080/actuator", "templated": false }, "health": { "href": "http://localhost:8080/actuator/health", "templated": false }, "prometheus": { "href": "http://localhost:8080/actuator/prometheus", "templated": false }, "metrics": { "href": "http://localhost:8080/actuator/metrics", "templated": false }, "info": { "href": "http://localhost:8080/actuator/info", "templated": false } } }
In order to let Prometheus gather metrics, we need to expose the /prometheus endpoint it can scrape. As we can see from the response above, we have already done this using the management.endpoints.web.exposure.include property.
Now, by calling the /prometheus endpoint (i.e. http://localhost:8080/actuator/prometheus), we can see all collected metrics:
…
# HELP tomcat_servlet_request_max_seconds
# TYPE tomcat_servlet_request_max_seconds gauge
tomcat_servlet_request_max_seconds{name="default",} 0.0
tomcat_servlet_request_max_seconds{name="dispatcherServlet",} 0.104
# HELP tomcat_threads_config_max_threads
# TYPE tomcat_threads_config_max_threads gauge
tomcat_threads_config_max_threads{name="http-nio-8080",} 200.0
# HELP tomcat_sessions_expired_sessions_total
# TYPE tomcat_sessions_expired_sessions_total counter
tomcat_sessions_expired_sessions_total 0.0
# HELP tomcat_sessions_active_max_sessions
# TYPE tomcat_sessions_active_max_sessions gauge
tomcat_sessions_active_max_sessions 0.0
# HELP hikaricp_connections_min Min connections
# TYPE hikaricp_connections_min gauge
hikaricp_connections_min{pool="HikariPool-1",} 10.0
## Other metrics … (omitted for brevity)
Example of Custom Metrics
Micrometer, as part of Spring Boot, provides many default metrics out of the box, e.g. JVM, CPU and Tomcat metrics, but sometimes we need to collect custom ones. In the following example, the DataSource status is monitored.
In order to expose a custom DataSource metric, the following steps are implemented:
1. Define the dataSource, meterRegistry and dataSourceStatusProbe beans (ActuatorConfig.class)
2. Define the dataSource metric collector (DataSourceStatusProbe.class)
ActuatorConfig configures all needed beans.
@Lazy
@Component
public class ActuatorConfig {

    @Autowired
    private DataSource dataSource;

    @Autowired
    private MeterRegistry meterRegistry;

    @Bean
    DataSourceStatusProbe dataSourceStatusProbe(DataSource dataSource) {
        return new DataSourceStatusProbe(dataSource);
    }
}
DataSourceStatusProbe implements the logic for collecting the dataSource status metric via the MeterBinder bindTo() method, based on the result of the status() method. MeterBinders register one or more meters to provide information about the state of the application.
public class DataSourceStatusProbe implements MeterBinder {

    private static final String SELECT_1 = "SELECT 1;";
    private static final int QUERY_TIMEOUT = 1;
    private static final double UP = 1.0;
    private static final double DOWN = 0.0;

    private final String name;
    private final String description;
    private final Iterable<Tag> tags;
    private final DataSource dataSource;

    public DataSourceStatusProbe(final DataSource dataSource) {
        Objects.requireNonNull(dataSource, "dataSource cannot be null");
        this.dataSource = dataSource;
        this.name = "data_source";
        this.description = "DataSource status";
        this.tags = tags(dataSource);
    }

    private boolean status() {
        // Run a cheap test query; any SQLException means the DataSource is down.
        try (Connection connection = dataSource.getConnection();
             PreparedStatement statement = connection.prepareStatement(SELECT_1)) {
            statement.setQueryTimeout(QUERY_TIMEOUT);
            statement.executeQuery();
            return true;
        } catch (SQLException ignored) {
            return false;
        }
    }

    @Override
    public void bindTo(final MeterRegistry meterRegistry) {
        Gauge.builder(name, this, value -> value.status() ? UP : DOWN)
                .description(description)
                .tags(tags)
                .baseUnit("status")
                .register(meterRegistry);
    }

    protected static Iterable<Tag> tags(DataSource dataSource) {
        Objects.requireNonNull(dataSource, "dataSource cannot be null");
        // Tag the metric with the JDBC URL; close the connection used to read the metadata.
        try (Connection connection = dataSource.getConnection()) {
            return Tags.of(Tag.of("url", connection.getMetaData().getURL()));
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }
}
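As a quick sanity check, the probe can be bound to an in-memory registry and the gauge value read back directly. This is only a sketch, assuming a DataSource instance is already at hand (for example in a test):

import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

SimpleMeterRegistry registry = new SimpleMeterRegistry();
new DataSourceStatusProbe(dataSource).bindTo(registry);

// 1.0 means the test query succeeded (UP), 0.0 means it failed (DOWN).
double status = registry.get("data_source").gauge().value();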
An abstract base class can be introduced in order to reuse this binding logic between different probes, as sketched below.
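A possible shape for such a base class, assuming the Gauge registration is the part worth sharing (the class and method names here are illustrative, not taken from the example project):

import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.binder.MeterBinder;

public abstract class AbstractStatusProbe implements MeterBinder {

    private final String name;
    private final String description;

    protected AbstractStatusProbe(String name, String description) {
        this.name = name;
        this.description = description;
    }

    // Subclasses implement the actual check, e.g. the DataSource test query.
    protected abstract boolean status();

    @Override
    public void bindTo(MeterRegistry meterRegistry) {
        Gauge.builder(name, this, probe -> probe.status() ? 1.0 : 0.0)
                .description(description)
                .baseUnit("status")
                .register(meterRegistry);
    }
}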
Now if we call /prometheus endpoint, we will see that our DataSource status metric is also collected.
…
# HELP system_cpu_usage The "recent cpu usage" for the whole system
# TYPE system_cpu_usage gauge
system_cpu_usage 0.0
# HELP tomcat_servlet_error_total
# TYPE tomcat_servlet_error_total counter
tomcat_servlet_error_total{name="default",} 0.0
tomcat_servlet_error_total{name="dispatcherServlet",} 0.0
# HELP data_source_status DataSource status
# TYPE data_source_status gauge
data_source_status{url="jdbc:postgresql://db:5432/postgres",} 1.0
# HELP tomcat_global_received_bytes_total
# TYPE tomcat_global_received_bytes_total counter
tomcat_global_received_bytes_total{name="http-nio-8090",} 0.0
## Other metrics … (omitted for brevity)
Micrometer also provides MeterFilter, which can be used to decide whether one or more metrics are added to the MeterRegistry. We can create custom filters with the methods MeterFilter provides. If we add the following MeterFilter bean to ActuatorConfig.class, we will exclude all metrics whose names start with 'tomcat':
@Bean
MeterFilter excludeTomcatFilter() {
    return MeterFilter.denyNameStartsWith("tomcat");
}
Now, in the /prometheus response, we can see that all Tomcat-related metrics are excluded.