HTTP request rate limiting with Micronaut & Resilience4j
Continuing my previous post, JWT authentication with Micronaut, let's implement some simple rate limiting for our application using Micronaut's caching and Resilience4j. You can find the source code on our GitHub: 98elements/micronaut-jwt-demo.
Step 1: Add dependency
Let's start by adding a dependency on Resilience4j, which will provide us with a battle-tested rate limiting implementation.
```groovy
// build.gradle
dependencies {
    ...
    compile "io.github.resilience4j:resilience4j-ratelimiter:0.13.2"
}
```
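If your build uses Maven instead of Gradle, the same artifact coordinates can be declared like this (a sketch, assuming a standard Maven setup):

```xml
<dependency>
  <groupId>io.github.resilience4j</groupId>
  <artifactId>resilience4j-ratelimiter</artifactId>
  <version>0.13.2</version>
</dependency>
```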
Step 2: Create Micronaut filter
Let's add a Micronaut filter that will be run for all requests to our application.
```java
// RateLimitingFilter.java
package com._98elements.mnjwtdemo.ratelimiting;

import io.micronaut.http.HttpRequest;
import io.micronaut.http.MutableHttpResponse;
import io.micronaut.http.annotation.Filter;
import io.micronaut.http.filter.FilterOrderProvider;
import io.micronaut.http.filter.OncePerRequestHttpServerFilter;
import io.micronaut.http.filter.ServerFilterChain;
import org.reactivestreams.Publisher;

@Filter("/**")
public class RateLimitingFilter extends OncePerRequestHttpServerFilter implements FilterOrderProvider {

    @Override
    protected Publisher<MutableHttpResponse<?>> doFilterOnce(HttpRequest<?> request, ServerFilterChain chain) {
        return null; // TODO: implement
    }

    @Override
    public int getOrder() {
        return 0; // TODO: implement
    }
}
```
For now it doesn't do much. Let's break it down and start adding more things. @Filter marks the class as a filter (similarly to Spring). We want to protect all endpoints, so we use the /** pattern. We extend OncePerRequestHttpServerFilter to make sure the filter runs only once per request, and implement FilterOrderProvider so we can specify the filter's position in the chain: we want it to run after security, so that the current user is available.
Now let's add a @ConfigurationProperties-annotated class that specifies the rate limiting rule, add some sensible values to application.yml, and configure Micronaut caching (we're going to need the cache later).
```java
// RateLimitingProperties.java
package com._98elements.mnjwtdemo.ratelimiting;

import io.micronaut.context.annotation.ConfigurationProperties;

import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;
import java.time.Duration;

@ConfigurationProperties("rate-limiter")
class RateLimitingProperties {

    @NotNull
    Duration timeoutDuration;

    @NotNull
    Duration limitRefreshPeriod;

    @Min(1)
    @NotNull
    Integer limitForPeriod;
}
```
```yaml
# application.yml
...
---
rate-limiter:
  timeout-duration: 100ms
  limit-refresh-period: 5s
  limit-for-period: 5

micronaut:
  caches:
    rate-limiter:
      expire-after-access: 10m
```
Now, moving back to our filter, we can inject the properties and the configured cache thanks to Micronaut's dependency injection. Let's also set up Resilience4j's RateLimiterConfig and specify the order of our filter.
```java
// RateLimitingFilter.java
package com._98elements.mnjwtdemo.ratelimiting;

import io.github.resilience4j.ratelimiter.RateLimiterConfig;
import io.github.resilience4j.ratelimiter.internal.AtomicRateLimiter;
import io.micronaut.cache.SyncCache;
import io.micronaut.core.order.Ordered;
import io.micronaut.http.HttpRequest;
import io.micronaut.http.MutableHttpResponse;
import io.micronaut.http.annotation.Filter;
import io.micronaut.http.filter.FilterOrderProvider;
import io.micronaut.http.filter.OncePerRequestHttpServerFilter;
import io.micronaut.http.filter.ServerFilterChain;
import org.reactivestreams.Publisher;

import javax.inject.Named;

@Filter("/**")
public class RateLimitingFilter extends OncePerRequestHttpServerFilter implements FilterOrderProvider {

    private final SyncCache<AtomicRateLimiter> limiters;
    private final RateLimiterConfig config;

    public RateLimitingFilter(@Named("rate-limiter") SyncCache<AtomicRateLimiter> limiters,
                              RateLimitingProperties properties) {
        this.limiters = limiters;
        this.config = RateLimiterConfig.custom()
                .limitRefreshPeriod(properties.limitRefreshPeriod)
                .limitForPeriod(properties.limitForPeriod)
                .timeoutDuration(properties.timeoutDuration)
                .build();
    }

    @Override
    protected Publisher<MutableHttpResponse<?>> doFilterOnce(HttpRequest<?> request, ServerFilterChain chain) {
        return null; // TODO: implement
    }

    @Override
    public int getOrder() {
        return Ordered.LOWEST_PRECEDENCE; // run after the security filter
    }
}
```
Now let's implement the doFilterOnce method:
```java
@Override
protected Publisher<MutableHttpResponse<?>> doFilterOnce(HttpRequest<?> request, ServerFilterChain chain) {
    var key = getKey(request);
    var limiter = getLimiter(key);
    if (limiter.getPermission(config.getTimeoutDuration())) {
        return chain.proceed(request);
    } else {
        return createOverLimitResponse(limiter.getDetailedMetrics());
    }
}
```
In our case, the key should be based on the user's email (if authenticated) or on the client's IP address otherwise, and that's exactly how getKey is implemented:
```java
private String getKey(HttpRequest<?> request) {
    return request.getUserPrincipal()
            .map(Principal::getName)
            .orElseGet(() -> request.getRemoteAddress().getAddress().getHostAddress());
}
```
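The fallback logic above can be exercised with plain JDK types, independently of Micronaut. The class and method names here are illustrative, not part of the filter:

```java
import java.security.Principal;
import java.util.Optional;

public class KeyDemo {
    // Mirrors getKey: prefer the authenticated principal's name,
    // otherwise fall back to the remote address.
    static String key(Optional<Principal> principal, String remoteAddress) {
        return principal.map(Principal::getName).orElse(remoteAddress);
    }

    public static void main(String[] args) {
        // Principal has a single abstract method, so a lambda works here.
        Principal alice = () -> "alice@example.com";
        // Authenticated request: keyed by the user's name (email in this demo).
        System.out.println(key(Optional.of(alice), "10.0.0.7")); // alice@example.com
        // Anonymous request: keyed by the client IP.
        System.out.println(key(Optional.empty(), "10.0.0.7"));   // 10.0.0.7
    }
}
```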
Resilience4j tracks how many permits each user has left in RateLimiter objects. We want to create or retrieve one based on the key derived from the HTTP request:
```java
private AtomicRateLimiter getLimiter(String key) {
    return limiters.get(key, AtomicRateLimiter.class, () ->
            new AtomicRateLimiter(key, config)
    );
}
```
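To see what this cache lookup does without Micronaut, here is a minimal stand-in: a ConcurrentHashMap plays the role of SyncCache, and a plain counter plays the role of AtomicRateLimiter. All names are illustrative, and the sketch deliberately omits the refresh period, which the real limiter handles:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class LimiterCacheSketch {
    // One counter per key, created lazily on first access -- the same
    // create-or-retrieve pattern the filter uses with SyncCache.get.
    private final Map<String, AtomicInteger> limiters = new ConcurrentHashMap<>();
    private final int limitForPeriod;

    LimiterCacheSketch(int limitForPeriod) {
        this.limitForPeriod = limitForPeriod;
    }

    boolean tryAcquire(String key) {
        AtomicInteger used = limiters.computeIfAbsent(key, k -> new AtomicInteger());
        return used.incrementAndGet() <= limitForPeriod;
    }

    public static void main(String[] args) {
        LimiterCacheSketch sketch = new LimiterCacheSketch(2);
        System.out.println(sketch.tryAcquire("10.0.0.7")); // true
        System.out.println(sketch.tryAcquire("10.0.0.7")); // true
        System.out.println(sketch.tryAcquire("10.0.0.7")); // false: over limit
        System.out.println(sketch.tryAcquire("10.0.0.8")); // true: separate key
    }
}
```

Each key gets its own limiter, so one noisy client cannot exhaust another client's quota; that is exactly why the filter keys the cache on the principal or IP.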
Then, once we have a RateLimiter, we try to acquire a permission. If we succeed, we proceed with the filter chain. If the user has issued too many requests, we return a 429 Too Many Requests error (see https://httpstatuses.com/429), created by the createOverLimitResponse method:
```java
private Publisher<MutableHttpResponse<?>> createOverLimitResponse(AtomicRateLimiterMetrics metrics) {
    var secondsToWait = Duration.ofNanos(metrics.getNanosToWait()).toSeconds();
    var message = "Maximum request rate exceeded. Wait " + secondsToWait + "s before issuing a new request";
    var body = new ErrorResponse(message);
    return Flowable.just(
            HttpResponse.status(HttpStatus.TOO_MANY_REQUESTS)
                    .header(HttpHeaders.RETRY_AFTER, String.valueOf(secondsToWait))
                    .body(body)
    );
}
```
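The Retry-After value is derived from the limiter's nanos-to-wait metric, truncated to whole seconds. The conversion can be checked in isolation with the JDK alone (the class name here is illustrative):

```java
import java.time.Duration;

public class RetryAfterDemo {
    // Same conversion as in createOverLimitResponse: nanoseconds until the
    // next permit becomes available, truncated to whole seconds.
    static long secondsToWait(long nanosToWait) {
        return Duration.ofNanos(nanosToWait).toSeconds();
    }

    public static void main(String[] args) {
        System.out.println(secondsToWait(4_200_000_000L)); // 4
        System.out.println(secondsToWait(900_000_000L));   // 0
    }
}
```

Note that truncation can yield Retry-After: 0 when less than a second remains; if that matters to your clients, rounding up instead would ensure they never retry too early.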
In case you're wondering, ErrorResponse is just a simple wrapper around an error message:
```java
// ErrorResponse.java
package com._98elements.mnjwtdemo;

import com.fasterxml.jackson.annotation.JsonProperty;

public class ErrorResponse {

    @JsonProperty("error")
    private final String error;

    public ErrorResponse(String error) {
        this.error = error;
    }

    public String getError() {
        return error;
    }
}
```
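Jackson serializes this as a single-field JSON object, the same shape that appears in the ngrep capture later on. As a rough, hand-rolled stand-in for what the wire format looks like (illustrative only; it does no escaping, so real code should always let a JSON library do this):

```java
public class ErrorJsonDemo {
    // Approximates Jackson's output for ErrorResponse: {"error": "..."}.
    // No escaping of quotes or control characters -- demo purposes only.
    static String toJson(String error) {
        return "{\"error\":\"" + error + "\"}";
    }

    public static void main(String[] args) {
        System.out.println(toJson("Maximum request rate exceeded. Wait 4s before issuing a new request"));
        // {"error":"Maximum request rate exceeded. Wait 4s before issuing a new request"}
    }
}
```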
Step 3: Testing
Let's use siege and ngrep to check how our application behaves under pressure. Hint: on macOS both are available via Homebrew.
To start our app, execute:
```shell
$ ./gradlew run

> Task :compileJava
Note: Creating bean classes for 6 type elements

> Task :run
12:33:20.971 [main] INFO  io.micronaut.runtime.Micronaut - Startup completed in 1037ms. Server Running: http://localhost:8080
```
Let's start sniffing on port 8080 using ngrep (sudo is necessary) and bombard the server with requests using siege (3 concurrent users, each issuing 10 requests from the same IP):
```shell
$ sudo ngrep -W byline -d any host localhost and port 8080
interface: any
filter: ( host localhost and port 8080 ) and (ip || ip6)
```
```shell
$ siege -c 3 -r 10 "http://localhost:8080/login POST"
[alert] Zip encoding disabled; siege requires zlib support to enable it
** SIEGE 4.0.4
** Preparing 3 concurrent users for battle.
The server is now under siege...
HTTP/1.1 400     0.50 secs:     195 bytes ==> POST http://localhost:8080/login
...
HTTP/1.1 400     0.03 secs:     195 bytes ==> POST http://localhost:8080/login
...
HTTP/1.1 429     0.11 secs:      79 bytes ==> POST http://localhost:8080/login
```
After depleting the pool of permissions, the server refused to process more requests and returned HTTP 429 Too Many Requests:
```
T 127.0.0.1:8080 -> 127.0.0.1:52735 [AP] #11102
HTTP/1.1 429 Too Many Requests.
Retry-After: 4.
Date: Thu, 14 Feb 2019 14:25:58 GMT.
content-type: application/json.
content-length: 79.
connection: close.
.
{"error":"Maximum request rate exceeded. Wait 4s before issuing a new request"}
```