Hi all,
I want Nginx to limit the rate of new TLS connections and the total (or
per-worker) number of all client-facing connections, so that under a
sudden surge of requests, existing connections get a fair share of CPU
and are served properly, while excess connections are rejected and
retried against other servers in the cluster.
I am running Nginx on a managed Kubernetes cluster, so tuning kernel
parameters or configuring layer 4 firewall is not an option.
To serve existing connections well, worker_connections cannot be used,
because it also counts connections to proxied servers.
Is there a way to implement these measures in Nginx configuration?
Hello!
On Sat, Nov 18, 2023 at 02:44:20PM +0800, Zero King wrote:
> I want Nginx to limit the rate of new TLS connections and the total (or
> per-worker) number of all client-facing connections, so that under a
> sudden surge of requests, existing connections can get enough share of
> CPU to be served properly, while excessive connections are rejected and
> retried against other servers in the cluster.
>
> I am running Nginx on a managed Kubernetes cluster, so tuning kernel
> parameters or configuring layer 4 firewall is not an option.
>
> To serve existing connections well, worker_connections can not be used,
> because it also affects connections with proxied servers.
>
> Is there a way to implement these measures in Nginx configuration?
No, nginx does not provide a way to limit the rate of new connections
and/or the total number of established connections. Instead, a firewall
is expected to be used for such tasks.
--
Maxim Dounin
http://mdounin.ru/
> sudden surge of requests, existing connections can get enough share of CPU to be served properly, while excessive connections are rejected
While you can't limit the connections (before the TLS handshake), there is a module to limit the requests per client IP: https://nginx.org/en/docs/http/ngx_http_limit_req_module.html
(and with limit_req_status 444; you can effectively close the connection without returning any response).
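A minimal sketch of that approach (the zone name, size, rate, burst and the
"backend" upstream below are placeholders, not anything from your setup):

    http {
        # one counter per client IP, 10 MB shared zone, 10 requests per second
        limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

        server {
            listen 443 ssl;

            location / {
                # allow short bursts, reject the excess immediately instead of queueing it
                limit_req zone=perip burst=20 nodelay;
                # 444 closes the connection without sending any response
                limit_req_status 444;

                proxy_pass http://backend;   # placeholder upstream
            }
        }
    }

Note that this limits HTTP requests, not TCP/TLS connections, so the TLS
handshake cost has already been paid by the time the limit applies.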
rr
Hello,
A self-contained solution would be to double-proxy: first through an nginx stream server, then locally back to the nginx http server (with proxy_pass via a unix socket, or to localhost on a different port).
You can implement your own custom rate-limiting logic in the stream server with njs (js_access) and use the new js_shared_dict_zone (which is shared between workers) to persistently store rate calculations.
You'd have additional overhead from the stream TCP proxy and from njs, but it shouldn't be too great (at least compared to the overhead of TLS handshakes).
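Roughly, something like this (a sketch only; the socket path, zone name, certificate
paths, per-IP limit and the naive one-second counting window are all placeholders,
not tested configuration):

    # nginx.conf (relevant parts only)
    stream {
        js_import rl from ratelimit.js;      # adjust the path to wherever the script lives
        js_shared_dict_zone zone=rl:1m timeout=60s type=number evict;

        server {
            listen 443;
            js_access rl.allow;              # runs before any bytes are proxied
            proxy_pass unix:/var/run/nginx-https.sock;
        }
    }

    http {
        server {
            # TLS is still terminated here, in the http server
            listen unix:/var/run/nginx-https.sock ssl;
            ssl_certificate     /etc/nginx/tls/cert.pem;   # placeholder paths
            ssl_certificate_key /etc/nginx/tls/key.pem;
        }
    }

    // ratelimit.js -- naive fixed-window counter per client IP
    function allow(s) {
        const dict = ngx.shared.rl;          // the shared dict declared above
        // one counter per IP per second; stale keys expire via the zone timeout
        const key = s.remoteAddress + ':' + Math.floor(Date.now() / 1000);
        // incr() requires type=number on the zone and returns the incremented value
        if (dict.incr(key, 1, 0) > 100) {    // placeholder: 100 new connections/s per IP
            s.deny();                        // reject the connection
        } else {
            s.allow();
        }
    }

    export default { allow };

If the http side needs the real client address (for logging or limit_req), you can add
proxy_protocol on; to the stream server and the proxy_protocol parameter to the http
listen directive.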
Regards,
Jordan Carter.
________________________________________
From: nginx <nginx-bounces at nginx.org> on behalf of Zero King <l2dy at aosc.io>
Sent: Saturday, November 18, 2023 6:44 AM
To: nginx at nginx.org
Subject: Limiting number of client TLS connections
Hi all,
I want Nginx to limit the rate of new TLS connections and the total (or
per-worker) number of all client-facing connections, so that under a
sudden surge of requests, existing connections can get enough share of
CPU to be served properly, while excessive connections are rejected and
retried against other servers in the cluster.
I am running Nginx on a managed Kubernetes cluster, so tuning kernel
parameters or configuring layer 4 firewall is not an option.
To serve existing connections well, worker_connections can not be used,
because it also affects connections with proxied servers.
Is there a way to implement these measures in Nginx configuration?
Hi Maxim,
Thanks for your reply!
In our case, a layer-4 firewall is difficult to introduce into the request
path. Would you consider rate limiting in Nginx a valid feature request?
On 19/11/23 08:11, Maxim Dounin wrote:
> Hello!
>
> On Sat, Nov 18, 2023 at 02:44:20PM +0800, Zero King wrote:
>
>> I want Nginx to limit the rate of new TLS connections and the total (or
>> per-worker) number of all client-facing connections, so that under a
>> sudden surge of requests, existing connections can get enough share of
>> CPU to be served properly, while excessive connections are rejected and
>> retried against other servers in the cluster.
>>
>> I am running Nginx on a managed Kubernetes cluster, so tuning kernel
>> parameters or configuring layer 4 firewall is not an option.
>>
>> To serve existing connections well, worker_connections can not be used,
>> because it also affects connections with proxied servers.
>>
>> Is there a way to implement these measures in Nginx configuration?
> No, nginx does not provide a way to limit rate of new connections
> and/or total number of established connections. Instead, firewall is
> expected to be used for such tasks.
>
Hello!
On Mon, Nov 20, 2023 at 11:29:39PM +0800, Zero King wrote:
> In our case, layer-4 firewall is difficult to introduce in the request
> path. Would you consider rate limiting in Nginx a valid feature request?
A firewall is expected to be a much more effective solution than nginx
(which has to work with already established connections at the
application level). It might be a better idea to actually introduce a
firewall if you need such limits (or, rather, to make it possible to
configure the one that is most likely already present).
--
Maxim Dounin
http://mdounin.ru/
Hi Jordan,
Thanks for your suggestion. I will give it a try and also try to push
our K8s team to implement a firewall if possible.
On 20/11/23 10:33, J Carter wrote:
> Hello,
>
> A self contained solution would be to double proxy, first through nginx stream server and then locally back to nginx http server (with proxy_pass via unix socket, or to localhost on a different port).
>
> You can implement your own custom rate limiting logic in the stream server with NJS (js_access) and use the new js_shared_dict_zone (which is shared between workers) for persistently storing rate calculations.
>
> You'd have additional overhead from the stream tcp proxy and the njs, but it shouldn't be too great (at least compared to overhead of TLS handshakes).
>
> Regards,
> Jordan Carter.
>
> ________________________________________
> From: nginx <nginx-bounces at nginx.org> on behalf of Zero King <l2dy at aosc.io>
> Sent: Saturday, November 18, 2023 6:44 AM
> To: nginx at nginx.org
> Subject: Limiting number of client TLS connections
>
> Hi all,
>
> I want Nginx to limit the rate of new TLS connections and the total (or
> per-worker) number of all client-facing connections, so that under a
> sudden surge of requests, existing connections can get enough share of
> CPU to be served properly, while excessive connections are rejected and
> retried against other servers in the cluster.
>
> I am running Nginx on a managed Kubernetes cluster, so tuning kernel
> parameters or configuring layer 4 firewall is not an option.
>
> To serve existing connections well, worker_connections can not be used,
> because it also affects connections with proxied servers.
>
> Is there a way to implement these measures in Nginx configuration?
No problem at all :)
One other suggestion if you do go down the double proxy + njs route: keep an eye on the
nginx-devel mailing list (or the nginx release notes) for this patch series:
https://mailman.nginx.org/pipermail/nginx-devel/2023-November/QUTQYBNAHLMQMGTKQK57IXDXD23VVIQO.html
The last patch in the series will make proxying from stream to http significantly more
efficient, if it is merged.
On Sat, 25 Nov 2023 16:03:37 +0800
Zero King <l2dy at aosc.io> wrote:
> Hi Jordan,
>
> Thanks for your suggestion. I will give it a try and also try to push
> our K8s team to implement a firewall if possible.
>
> On 20/11/23 10:33, J Carter wrote:
> > Hello,
> >
> > A self contained solution would be to double proxy, first through nginx stream server
> > and then locally back to nginx http server (with proxy_pass via unix socket, or to
> > localhost on a different port).
> >
> > You can implement your own custom rate limiting logic in the stream server with NJS
> > (js_access) and use the new js_shared_dict_zone (which is shared between workers) for
> > persistently storing rate calculations.
> >
> > You'd have additional overhead from the stream tcp proxy and the njs, but it
> > shouldn't be too great (at least compared to overhead of TLS handshakes).
> >
> > Regards,
> > Jordan Carter.
> >
> > ________________________________________
> > From: nginx <nginx-bounces at nginx.org> on behalf of Zero King <l2dy at aosc.io>
> > Sent: Saturday, November 18, 2023 6:44 AM
> > To: nginx at nginx.org
> > Subject: Limiting number of client TLS connections
> >
> > Hi all,
> >
> > I want Nginx to limit the rate of new TLS connections and the total (or
> > per-worker) number of all client-facing connections, so that under a
> > sudden surge of requests, existing connections can get enough share of
> > CPU to be served properly, while excessive connections are rejected and
> > retried against other servers in the cluster.
> >
> > I am running Nginx on a managed Kubernetes cluster, so tuning kernel
> > parameters or configuring layer 4 firewall is not an option.
> >
> > To serve existing connections well, worker_connections can not be used,
> > because it also affects connections with proxied servers.
> >
> > Is there a way to implement these measures in Nginx configuration?