
Massive performance without headaches

 3 years ago
source link: https://quarkus.io/blog/resteasy-reactive-faq/

Imperative and Reactive: the elevator pitch

In our quest to understand why RESTEasy Reactive is important and how it differs from RESTEasy Classic, it helps to restate a key message that was first introduced in an earlier post.

In general, Java web applications use imperative programming combined with blocking IO operations. This model is incredibly popular because it is easy to reason about: things execute sequentially. When the application receives a request, the framework associates the request with a worker thread. When request processing needs to interact with a database or another remote service, it relies on blocking IO: the thread is blocked waiting for the answer, making the communication synchronous. With this model, one request is not affected by another, as they run on different threads; even when one thread is waiting, requests running on other threads are not slowed down significantly.
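The thread-per-request model can be sketched with plain JDK classes. This is an illustration, not a real framework: `handleRequest` stands in for an endpoint, and `Thread.sleep` simulates blocking IO on the worker thread.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BlockingModel {
    // One worker thread handles one request; the thread blocks during "IO".
    static String handleRequest(int id) {
        try {
            Thread.sleep(50); // simulated blocking IO: the worker thread waits here
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "response-" + id;
    }

    public static void main(String[] args) throws Exception {
        // One thread per concurrent request: the pool size caps concurrency.
        ExecutorService workers = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            final int id = i;
            workers.submit(() -> System.out.println(handleRequest(id)));
        }
        workers.shutdown();
        workers.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Note that a fifth concurrent request would have to wait for a free thread, which is exactly the concurrency limit described below.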

However, with this model you need one thread for every concurrent request, which places a limit on the achievable concurrency. The reactive execution model, on the other hand, embraces asynchronous development and non-blocking IO. With this model, multiple requests can be handled by the same thread. When the processing of a request can no longer make progress (because it is waiting on a remote service or a database, for example), it uses non-blocking IO: instead of blocking the thread, it schedules the operation and passes a continuation to be invoked after the operation completes[1]. This releases the thread immediately, which can then serve another request. When the result of the IO operation is available, the processing of the request resumes and continues its execution.
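The continuation idea can be sketched with the JDK's `CompletableFuture` (plain Java, not Quarkus APIs; the names are illustrative): instead of waiting for the IO result, the code registers a callback that runs when the result arrives, so no thread sits blocked in between.

```java
import java.util.concurrent.CompletableFuture;

public class NonBlockingModel {
    // Simulated non-blocking IO: the result is produced later, on another thread.
    static CompletableFuture<String> fetchUser(int id) {
        return CompletableFuture.supplyAsync(() -> "user-" + id);
    }

    static CompletableFuture<String> handleRequest(int id) {
        // thenApply registers a continuation; the caller's thread is released
        // immediately and the lambda runs only once fetchUser completes.
        return fetchUser(id).thenApply(user -> "hello, " + user);
    }

    public static void main(String[] args) {
        System.out.println(handleRequest(42).join()); // join() only to keep the demo alive
    }
}
```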

This model enables the usage of a single IO thread to handle multiple requests. There are three significant benefits.

  • First, response time is lower because the request does not have to jump to another thread.

  • Second, it reduces memory consumption as it decreases the usage of threads.

  • Third, your concurrency is no longer limited by the number of threads.

The reactive model uses hardware resources more efficiently, but a significant pitfall lurks: if the processing of a request starts to block, things can go south really quickly, as no other request can be handled. To avoid this, you need to learn how to write asynchronous and non-blocking code, how to schedule operations, how to write continuations, and how to chain actions. Basically, we need a way to structure asynchronous processing and use non-blocking IO. No doubt, this is a paradigm shift. In Quarkus, we want to make that shift as easy as possible, so RESTEasy Reactive lets you choose whether each endpoint is blocking or non-blocking (an application is free to mix and match blocking and non-blocking methods at will). So don't be intimidated by the word reactive: the infrastructure is reactive, but your code can be either reactive or imperative. That is what we mean by the unification of reactive and imperative.
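Mixing the two styles in one resource class looks like the sketch below. The paths and greeting strings are made up for illustration, but `@Blocking` and Mutiny's `Uni` are the actual Quarkus APIs: a method returning `Uni` runs on the IO thread and must not block, while `@Blocking` tells Quarkus to dispatch the method to a worker thread so ordinary blocking code is safe. Depending on your Quarkus version, the `jakarta.ws.rs` imports may instead be `javax.ws.rs`.

```java
import io.smallrye.common.annotation.Blocking;
import io.smallrye.mutiny.Uni;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;

@Path("/greetings")
public class GreetingResource {

    // Non-blocking endpoint: runs on the IO thread, so it must never block.
    @GET
    @Path("/reactive")
    public Uni<String> reactive() {
        return Uni.createFrom().item("hello from the IO thread");
    }

    // Blocking endpoint: @Blocking moves execution to a worker thread,
    // so classic imperative code (JDBC calls, etc.) is fine here.
    @GET
    @Path("/classic")
    @Blocking
    public String classic() {
        return "hello from a worker thread";
    }
}
```

This fragment is meant to run inside a Quarkus application, not standalone.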

