Crux of the Flux — Dive into the Spring WebFlux

What is Spring WebFlux, and where does it shine?

Amit Nathani
Adobe Tech Blog

--

In our previous blog post about Reactive Spring, we discussed the fundamentals of reactive programming.

If you haven’t read it yet, we encourage you to check it out.

This post will focus on Spring WebFlux. Spring WebFlux is built on Project Reactor, which implements the Reactive Streams APIs defined by reactive-streams.org. Reactive Streams provides a standard for asynchronous stream processing with non-blocking backpressure.

Reactive Streams mainly defines three simple interfaces (a fourth, Processor, simply combines Publisher and Subscriber).

Subscriber (with four methods):

  1. public void onSubscribe(Subscription s);
  2. public void onNext(T t);
  3. public void onError(Throwable t);
  4. public void onComplete();

Publisher (with only one method):

  1. public void subscribe(Subscriber<? super T> s);

Subscription (with two methods):

  1. public void request(long n);
  2. public void cancel();

But what are subscribers and publishers?

A subscriber subscribes to a publisher by calling the publisher’s subscribe method. The subscriber then uses the resulting Subscription to signal how much data it wants and to cancel that demand.

The publisher delivers data to each subscriber through its onNext() method, and calls onComplete() or onError() to signal the end of the sequence or an error, respectively. Project Reactor provides two core publishers, Mono and Flux: a Mono represents a sequence of 0..1 elements, and a Flux represents a sequence of 0..N elements.
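To make the contract concrete, here is a minimal, hypothetical Subscriber that requests one element at a time. The class name and the one-at-a-time strategy are purely illustrative; real code usually relies on Reactor’s built-in subscribe variants instead.

import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;
import reactor.core.publisher.Flux;

// Illustrative Subscriber that uses the Subscription to signal demand one element at a time
public class OneAtATimeSubscriber implements Subscriber<Integer> {

    private Subscription subscription;

    @Override
    public void onSubscribe(Subscription s) {
        this.subscription = s;
        s.request(1); // ask the publisher for the first element
    }

    @Override
    public void onNext(Integer value) {
        System.out.println("Received: " + value);
        subscription.request(1); // signal demand for the next element
    }

    @Override
    public void onError(Throwable t) {
        t.printStackTrace();
    }

    @Override
    public void onComplete() {
        System.out.println("Done");
    }

    public static void main(String[] args) {
        Flux.just(1, 2, 3).subscribe(new OneAtATimeSubscriber());
    }
}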

And what is Flux?

Prior to exploring Flux, let’s first recall the concept of streams in Java. A Java stream is a sequence of elements that supports operations such as filter, map, and reduce. For example:

List<Integer> squaredList = List.of(1, 2, 3, 4, 5).stream().map(i -> i * i).collect(Collectors.toList());

Flux is also a sequence of data: it can contain zero or more elements, and those elements can arrive over time. Flux is quite similar to a Java stream, but with an added time component. In a Java stream the data is available immediately, while in a Flux the data can arrive as it is produced. For instance:

Flux<Integer> dataStream = Flux.from(dataPublisher).map(i -> i * i);

Here, dataPublisher emits data over time as it becomes available, and Flux processes it accordingly.

In short, Flux offers functionality similar to Java streams, with the additional capability of handling data that arrives over time.

Operators:

Operators in Reactor are functions that allow us to transform data easily. For instance, the map operator converts each element into a new form:

Flux<Integer> squaredFlux = Flux.fromIterable(List.of(1, 2, 3, 4)).map(i -> i * i);

In this example, we use the map operator to transform the data into squares.

Project Reactor provides a rich set of operators for both Mono and Flux; you can find them in the official documentation here: https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Operators.html

Here are a few more examples of operators:

filter: The filter operator allows us to filter data based on specific conditions. For example:

Flux<Integer> evenNumbers = Flux.fromIterable(List.of(1, 2, 3, 4)).filter(i -> i % 2 == 0);

This will filter the even integers from the given list.

switchIfEmpty: The switchIfEmpty operator is used when the publisher does not emit any data. For example:

Flux<Integer> dataFlux = Flux.fromIterable(List.<Integer>of()).switchIfEmpty(Mono.just(0));

If the initial Flux is empty, switchIfEmpty kicks in and the provided fallback publisher (in this case, Mono.just(0)) is used instead.

Reactor’s extensive collection of operators makes it a powerful library for handling reactive data transformations and stream processing.

Controllers:

Spring WebFlux offers two distinct approaches for creating controllers: annotation-based controllers, similar to those you would write in Spring MVC, and functional-style controllers.
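For illustration, here is a minimal sketch of an annotation-based reactive controller. The Person and PersonRepository types are assumed for the example; the point is that handler methods return reactive types instead of plain objects.

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

// Annotation-based style: the familiar @RestController programming model,
// but the return types are Mono and Flux rather than plain objects or lists
@RestController
public class PersonController {

    private final PersonRepository repository; // hypothetical reactive repository

    public PersonController(PersonRepository repository) {
        this.repository = repository;
    }

    @GetMapping("/person/{id}")
    public Mono<Person> getPerson(@PathVariable String id) {
        return repository.findById(id); // 0..1 result
    }

    @GetMapping("/person")
    public Flux<Person> listPeople() {
        return repository.findAll(); // 0..N results
    }
}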

In the functional-style approach, Spring WebFlux provides two key abstractions, HandlerFunction and RouterFunction, which handle requests and route them to the appropriate functions.

For instance, consider the following example:

// route() and accept() are static imports from RouterFunctions and RequestPredicates
RouterFunction<ServerResponse> route = route()
        .GET("/person/{id}", accept(APPLICATION_JSON), handler::getPerson)
        .GET("/person", accept(APPLICATION_JSON), handler::listPeople)
        .POST("/person", handler::createPerson)
        .build();

In this example, we define a RouterFunction to handle various HTTP methods and their respective routes. The request is mapped to the appropriate functions within the PersonHandler class.

Here’s an excerpt of the PersonHandler class:

public class PersonHandler {

    public Mono<ServerResponse> listPeople(ServerRequest request) {
        // ...
    }

    public Mono<ServerResponse> createPerson(ServerRequest request) {
        // ...
    }

    public Mono<ServerResponse> getPerson(ServerRequest request) {
        // ...
    }
}

The PersonHandler class contains individual functions that handle specific HTTP requests, such as listPeople, createPerson, and getPerson. Each function is responsible for processing the request and returning a Mono<ServerResponse>.
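As a rough sketch of what one of these handlers might contain (again assuming a hypothetical reactive PersonRepository), getPerson could look like this:

import org.springframework.http.MediaType;
import org.springframework.web.reactive.function.server.ServerRequest;
import org.springframework.web.reactive.function.server.ServerResponse;
import reactor.core.publisher.Mono;

public class PersonHandler {

    private final PersonRepository repository; // hypothetical reactive repository

    public PersonHandler(PersonRepository repository) {
        this.repository = repository;
    }

    public Mono<ServerResponse> getPerson(ServerRequest request) {
        String id = request.pathVariable("id");
        return repository.findById(id)
                .flatMap(person -> ServerResponse.ok()
                        .contentType(MediaType.APPLICATION_JSON)
                        .bodyValue(person))
                .switchIfEmpty(ServerResponse.notFound().build());
    }
}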

The functional-style approach provides a more programmatic and functional way to define routes and handlers, offering flexibility and conciseness in designing web endpoints with Spring WebFlux.

WebClient:

WebClient is a modern, non-blocking, reactive client for making HTTP requests. It is the client Spring now recommends for HTTP interactions: RestTemplate is in maintenance mode and will not receive new features, while WebClient continues to evolve.

WebClient can be used both for non-blocking calls and, by blocking on the result, as a replacement for RestTemplate in blocking code.
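As a rough sketch of typical usage (the base URL and the Person type below are placeholders for illustration):

import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

public class PersonClient {

    // Placeholder base URL for illustration
    private final WebClient webClient = WebClient.create("https://example.org");

    // Non-blocking call: returns a Mono that emits the person when the response arrives
    public Mono<Person> getPerson(String id) {
        return webClient.get()
                .uri("/person/{id}", id)
                .retrieve()
                .bodyToMono(Person.class);
    }

    // Blocking usage (RestTemplate-style); only advisable outside reactive pipelines
    public Person getPersonBlocking(String id) {
        return getPerson(id).block();
    }
}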

Server:

In Spring WebFlux, the default server is Reactor Netty. However, you have the flexibility to switch to a different web server, such as Tomcat, Jetty, Undertow, or another Servlet container: it is as straightforward as excluding the Netty dependency and adding the desired server’s dependency instead.

WebFlux offers a straightforward interface called HttpHandler with just one method:

Mono<Void> handle(ServerHttpRequest request, ServerHttpResponse response);

Each supported runtime is adapted to this interface for handling incoming HTTP requests and responses. This simplicity allows for easy integration and customization when running Spring WebFlux on different web servers.
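For example, running an HttpHandler directly on Reactor Netty looks roughly like the sketch below. It assumes the handler has already been built (for instance via WebHttpHandlerBuilder); the class name and port are illustrative.

import org.springframework.http.server.reactive.HttpHandler;
import org.springframework.http.server.reactive.ReactorHttpHandlerAdapter;
import reactor.netty.http.server.HttpServer;

public class NettyRunner {

    public static void start(HttpHandler httpHandler) {
        // Adapt the generic HttpHandler contract to Reactor Netty's API
        ReactorHttpHandlerAdapter adapter = new ReactorHttpHandlerAdapter(httpHandler);

        // Bind the server on port 8080 and block until it is disposed
        HttpServer.create()
                .port(8080)
                .handle(adapter)
                .bindNow()
                .onDispose()
                .block();
    }
}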

Backpressure:

Backpressure is a fundamental concept that governs the rate of data transmission between a producer and a consumer: it allows the consumer to control the pace at which data is delivered to it. In Spring WebFlux, backpressure is supported through Project Reactor.

With backpressure, the consumer uses the request mechanism to indicate how many elements (n) it is ready to handle, and the producer adjusts its transmission rate accordingly, ensuring a balanced flow of data between the two components.

Suppose you have a data source that emits a continuous stream of data at a high rate, and you want to process this data using a reactive flow with backpressure. In this case, you can use Flux to handle the data stream and control backpressure using the limitRate() operator.

Here’s an example:

import reactor.core.publisher.Flux;

public class BackPressureExample {

    public static void main(String[] args) {
        // Simulate a data source emitting a continuous stream of data at a high rate
        Flux<Integer> dataSource = Flux.range(1, Integer.MAX_VALUE);

        // Process the data with backpressure using the limitRate operator
        dataSource.limitRate(10) // control how many elements are requested upstream at a time
                .doOnNext(data -> {
                    // Simulate processing delay
                    try {
                        Thread.sleep(100);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    System.out.println("Processed: " + data);
                })
                .subscribe();
    }
}

In this example, Flux.range(1, Integer.MAX_VALUE) simulates a continuous stream of data. The limitRate(10) operator caps how many elements are requested from upstream at a time to 10, effectively applying backpressure.

As a result, the data processing rate is limited, and the doOnNext callback prints "Processed: <data>" with a slight delay between elements. This demonstrates how backpressure regulates the rate of data consumption and prevents the subscriber from being overwhelmed with more elements than it can process.

When to use Spring WebFlux?

Project suitability is a crucial aspect to consider when choosing Spring WebFlux for your application. While WebFlux excels in non-blocking reactive programming, it may not be the best fit for all requirements.

If your service involves heavy CPU-bound tasks, Spring WebFlux might not be the optimal choice. In such scenarios, threads are busy computing rather than waiting on I/O, so the non-blocking nature of WebFlux offers little advantage.

On the other hand, if your service mainly deals with IO-intensive operations and requires minimal CPU work, Spring WebFlux can be a lifesaver. By leveraging non-blocking techniques, WebFlux can efficiently handle a large number of concurrent requests without blocking threads, ensuring better scalability and responsiveness.

Ultimately, the decision to use Spring WebFlux depends on the nature of your application’s workload and the balance between CPU-bound and IO-bound tasks. For IO-intensive workloads, WebFlux can be a powerful and efficient solution. For CPU-intensive tasks, it might be more prudent to explore alternative approaches.

Project Loom and the Future of Spring WebFlux:

There are mainly two types of threads: user-level threads and kernel-level threads. User-level threads are managed by the user application and run in user space, while kernel-level threads are managed by the operating system and run in kernel space.

User-level threads are lightweight and easy to create and manage, whereas kernel-level threads are heavyweight and require OS and hardware resources. Typically, one or many user-level threads are mapped to one kernel-level thread for execution.

In the early days of Java, user-level threads (known as green threads) were used, and several of them were mapped to a single kernel-level thread because multi-core processors were not prevalent. However, as multi-core processors became more popular, Java switched to using one user-level thread mapped to one kernel-level thread.

Now, with the need for better concurrency in high-throughput concurrent systems, Project Loom is reintroducing green threads (now called virtual threads). Kernel threads are heavyweight and do not scale to the demands of such systems.

In Project Loom, green threads will be managed by Java, and there will be a many-to-many mapping between green threads and kernel threads, with a significantly higher number of green threads compared to kernel threads. Theoretically, the number of green threads can be in millions and all will be mapped with a handful of kernel threads.

Now, you might be thinking: what if a green thread gets blocked on I/O? Wouldn’t it block the kernel thread and, in turn, all other green threads mapped to that kernel thread?

Here is what Project Loom says:

To handle the potential blocking of kernel threads caused by blocking I/O calls from green threads, Project Loom adapts the core-library APIs in the JDK to work with virtual threads. When a synchronous I/O operation is called on a virtual thread, Project Loom performs a non-blocking I/O operation under the covers and parks the virtual thread until the operation completes, leaving the underlying kernel thread free. So, essentially, blocking I/O calls are transformed into non-blocking I/O calls under the hood.
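As a rough illustration of this model (using the virtual-thread API that became final in Java 21), the sketch below submits thousands of seemingly blocking tasks, which are nevertheless multiplexed over a small number of carrier (kernel) threads:

import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadExample {

    public static void main(String[] args) {
        // One virtual thread per task; a blocked virtual thread is unmounted
        // from its carrier thread, so the kernel threads stay free
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                int id = i;
                executor.submit(() -> {
                    Thread.sleep(Duration.ofMillis(100)); // simulated blocking I/O
                    return id;
                });
            }
        } // close() waits for all submitted tasks to complete
    }
}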

Both Project Loom and Reactive Spring can be beneficial for services that make numerous I/O calls, as they handle concurrency more efficiently. However, they may not offer significant improvements for CPU-intensive tasks. Reactive Spring, based on Project Reactor, provides many useful operators and built-in support for back pressure. On the other hand, Project Loom is still in its early stages of development and is available in preview mode.

References:

https://docs.spring.io/spring-framework/reference/web/webflux.html

https://github.com/reactor/reactor

https://www.reactive-streams.org/

https://cr.openjdk.org/~rpressler/loom/loom/sol1_part1.html
