Requesting an Nginx Variable - "client_time_taken" (similar to $request_time)

Devarajan D. • 1 Oct '23
Dear Team,

We're using the Nginx 1.22.1 open source version as a load balancer, with Tomcat servers as upstream.

We got a complaint from one of our clients that the request time is too long (~30 minutes for a ~10 MB upload) when sending a request body of a few MB to our server.

On checking, we found the requests reach the upstream only after this ~30 minute delay, as seen in the Tomcat logs (so the slowness is not in the Tomcat server).

We also found the request body is buffered to a temporary file (the default client_body_buffer_size of 16k is used).

We created a separate location block for this particular client's URL path (say, abc.service.com/customer1/) and persisted the temporary client body buffer file (using the directive "client_body_in_file_only on"). The ~30 minute delay matched the difference between the temporary buffer file's last modified time and its creation time.
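
The per-location setup looked roughly like this (the path, temp directory and upstream name are illustrative placeholders, not our exact values):

    location /customer1/ {
        # keep the temporary request body file instead of deleting it,
        # so its creation and last-modified times can be compared later
        client_body_in_file_only on;
        client_body_temp_path /var/cache/nginx/client_temp;

        proxy_pass http://tomcat_backend;
    }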

We assume the client is slow on the following basis:

1. The temporary buffer file timestamps described above.
2. Requests from other clients with similar request body sizes are served within a few seconds.
3. There is no hardware issue on the Nginx server, as verified with atopsar and other commands.

What we need from the Nginx developers/community:

Currently, there is no straightforward way to measure the time taken by the client to upload the request body.

1. A variable similar to $request_time and $upstream_response_time would be helpful to log the time taken by the client,
    making it easy to prove to the client where the slowness is.

2. Also, is there a timeout for the whole request?

    (say, the request should be timed out if it takes more than 15 minutes)

Thanks & Regards,

Devarajan D.
Maxim Dounin • 1 Oct '23
Hello!

On Sun, Oct 01, 2023 at 08:20:23PM +0530, Devarajan D via nginx wrote:

> Currently, there is no straightforward way to measure the time 
> taken by client to upload the request body. 
> 
> 1. A variable similar to request_time, upstream_response_time 
> can be helpful to easily log this time taken by client.
>     So it will be easy to prove to the client where the slowness 
> is.

In general, $request_time minus $upstream_response_time is the 
slowness introduced by the client.  (In some cases, 
$upstream_response_time might also depend on the client behaviour, 
such as with "proxy_request_buffering off;" or with 
"proxy_buffering off;" and/or due to proxy_max_temp_file_size 
reached.)

Further, $request_time can be saved at various request processing 
stages, such as after reading request headers via the "set" 
directive, or via a map when sending the response headers.  This 
provides mostly arbitrary time measurements if you need it.
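
For example, a log_format along the following lines (the format name and log path are arbitrary) records both values for every request, so the difference can be computed offline:

    log_format timing '$remote_addr "$request" '
                      'request_time=$request_time '
                      'upstream_response_time=$upstream_response_time';

    access_log /var/log/nginx/timing.log timing;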

For detailed investigation on what happens with the particular 
client, debugging log is the most efficient instrument, notably 
the "debug_connection" directive which makes it possible to 
activate debug logging only for a particular client 
(http://nginx.org/r/debug_connection).
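
A minimal sketch, assuming nginx is built with --with-debug and the client connects from 192.0.2.1 (a placeholder address):

    # normal logging level for all other connections
    error_log /var/log/nginx/error.log notice;

    events {
        # full debug logging only for this one client address
        debug_connection 192.0.2.1;
    }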

> 2. Also, is there a timeout for the whole request? 
> 
>     (say request should be timed out if it is more than 15 
> minutes)

No.

-- 
Maxim Dounin
http://mdounin.ru/
Devarajan D. • 2 Oct '23
Dear Maxim Dounin, Team & Community,

Thank you for your suggestions.

It would be helpful if you could advise on the following.

> In general, $request_time minus $upstream_response_time is the slowness introduced by the client. 

1. That is true most of the time, but clients are not willing to accept it unless they see a log from the server side. (Say the client's server itself is running in another hosting service, like an Amazon EC2 instance.)

> Further, $request_time can be saved at various request processing stages, such as after reading request headers via the "set"
> directive, or via a map when sending the response headers. This provides mostly arbitrary time measurements if you need it.

2. How do we get control in the nginx configuration at the point when the last byte of the request body has been received from the client?

> For detailed investigation on what happens with the particular client, debugging log is the most efficient instrument, notably
> the "debug_connection" directive which makes it possible to activate debug logging only for a particular client

This debug log would definitely help to check when the last byte of the request body arrives!



3. But is it recommended to use nginx built with --with-debug in production environments?

4. We receive such slow requests infrequently. Enabling the debug log produces a huge amount of log data per request (about 2 MB of log file per 10 MB request body upload), and it becomes hard to identify the slow request in that. That's why I said there is no straightforward way to measure the time taken by the client to send the request body completely.

> Is there a timeout for the whole request? 

5. How can we prevent attacks like a Slowloris DDoS from exhausting client connections when using the open-source version? Timeouts such as client_body_timeout are not much help against such attacks.

Thanks & Regards,

Devarajan D.

Maxim Dounin • 3 Oct '23
Hello!

On Mon, Oct 02, 2023 at 03:25:15PM +0530, Devarajan D via nginx wrote:

> > In general, $request_time minus $upstream_response_time is the 
> > slowness introduced by the client. 
> 
> 1. It's true most of the time. But clients are not willing to 
> accept unless they see a log from server side. (Say the client 
> server itself is running in another hosting service like an Amazon 
> EC2 instance)

Well, $request_time and $upstream_response_time are logs from 
server side.  Introducing yet another variable which will 
calculate the difference just to convince your clients is not 
something I would reasonably expect to happen.

> > Further, $request_time can be saved at various request 
> > processing stages, such as after reading request headers via 
> > the "set"  directive, or via a map when sending the response 
> > headers. This provides mostly arbitrary time measurements if 
> > you need it. 
> 
> 2. How do we get control in nginx configuration when the last 
> byte of request body is received from the client

In simple proxying configurations, nginx starts to read the 
request body when control reaches the proxy module (so you can 
save start time with a simple "set" in the relevant location), and 
when the request body is completely read, nginx will create the 
request to the upstream server (so you can save this time by 
accessing a map in proxy_set_header).
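
A minimal sketch of this approach (the variable names, header names and upstream name are illustrative only):

    # $time_after_body is evaluated lazily: proxy_set_header expands it only
    # when the upstream request is created, i.e. after the body has been read
    map $request_time $time_after_body {
        default $request_time;
    }

    server {
        listen 80;

        location /customer1/ {
            # "set" runs right after the request headers have been read,
            # before nginx starts reading the request body
            set $time_before_body $request_time;

            proxy_set_header X-Time-Before-Body $time_before_body;
            proxy_set_header X-Time-After-Body  $time_after_body;
            proxy_pass http://tomcat_backend;
        }
    }

With "proxy_request_buffering on" (the default), the difference between the two values is roughly the time spent reading the request body from the client; the upstream can log the headers it receives.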

> > For detailed investigation on what happens with the particular 
> > client, debugging log is the most efficient instrument, 
> > notably the "debug_connection" directive which makes it 
> > possible to activate debug logging only for a particular client 
> 
> This debug log would definitely help to check the last byte of 
> the request body !
> 
> 3. But is it recommended to use nginx built with --with-debug 
> in production environments

The "--with-debug" is designed to be used in production 
environments.  It incurs some extra costs and is therefore not the 
default, and on loaded servers it might be a good idea to use 
nginx compiled without "--with-debug" unless you are debugging 
something.  But unless debugging is actually activated in the 
configuration, the difference is negligible.

> 4. We receive such slow requests infrequently. Enabling debug 
> log is producing a huge amount of logs/per request (2MB of log 
> file per 10 MB request body upload) and it becomes hard to 
> identify the slow request in that. That's why it is mentioned as 
> no straightforward way to measure the time taken by client to 
> send the request body completely. 

As previously suggested, using $request_time minus 
$upstream_response_time (or even just $request_time) makes it 
trivial to identify requests to look into.
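
For example, a sketch along these lines (the 100-second threshold, variable name and log path are arbitrary) logs only the suspiciously slow requests:

    # $slow is 1 for requests that took 100 seconds or more in total
    map $request_time $slow {
        default        0;
        "~^\d{3,}\."   1;
    }

    access_log /var/log/nginx/slow.log combined if=$slow;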

> > > Is there a timeout for the whole request? 
> 
> 5. How to prevent attacks like slow-loris DDos from exhausting 
> the client connections when using the open-source version. 
> Timeouts such as client_body_timeout are not much helpful for 
> such attacks.

Stopping DDoS attacks is generally a hard problem, and timeouts 
are not an effective solution either.  Not to mention that in many 
practical cases total timeout on the request body reading cannot 
be less than several hours, making such timeouts irrelevant.

For trivial in-nginx protection from Slowloris-like attacks 
involving request body, consider using limit_conn 
(http://nginx.org/r/limit_conn).
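
A minimal sketch (the zone name and the limit of 10 concurrent connections per address are arbitrary examples):

    # limit_conn_zone goes in the http context; keyed by client address
    limit_conn_zone $binary_remote_addr zone=perip:10m;

    server {
        listen 80;

        # at most 10 simultaneous connections per client address
        limit_conn perip 10;
    }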

[...]

-- 
Maxim Dounin
http://mdounin.ru/