My current nginx setup always kills the TCP connection after 5 minutes of
inactivity, i.e. no transactions.
[From Wireshark, nginx sends RST to the upstream server and then sends
FIN,ACK to the downstream client.]
I have a setup which requires a TLS 1.2 connection from my internal
network [client application] to a public network [server]. It only uses
plain TCP (not http/https) and establishes a connection with a server
located on the public network. The client application does not support
TLS 1.2, hence the introduction of an nginx proxy/reverse proxy for TLS
wrapping. You may refer to the diagram below:
Internal Network                                      | INTERNET/Public
[Client Application] <-----> [NGINX Reverse Proxy] <--- | ---> [Public Server]
        <Non-TLS TCP Traffic>                           <TLS 1.2>
- using the stream module
- no error shown in the nginx error log
- access log shows a TCP 200 status, but the session only lasts 300s
every time [recorded in the access_log]
Below is my nginx configuration:
# more nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 2048;
}

stream {
    resolver 127.0.0.1;
    include /etc/nginx/conf.d/*.conf;

    log_format basic '$remote_addr [$time_local] '
                     '$protocol $status $bytes_sent $bytes_received '
                     '$session_time $upstream_addr'
                     '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"';

    access_log /var/log/nginx/stream.access.log basic;

    error_log log_file;
    error_log /var/log/nginx/error_log;

    server {
        listen 35012;
        proxy_pass X.X.X.X:35012;
        proxy_timeout 86400s;
        proxy_connect_timeout 1200s;
        proxy_socket_keepalive on;
        ssl_session_cache shared:SSL:5m;
        ssl_session_timeout 30m;

        # For securing TCP traffic with upstream servers.
        proxy_ssl on;
        proxy_ssl_certificate /etc/ssl/certs/backend.crt;
        proxy_ssl_certificate_key /etc/ssl/certs/backend.key;
        proxy_ssl_protocols TLSv1.2;
        proxy_ssl_ciphers HIGH:!aNULL:!MD5;

        # proxy_ssl_trusted_certificate /etc/ssl/certs/trusted_ca_cert.crt;
        # proxy_ssl_verify on;
        proxy_ssl_verify_depth 2;

        # To have NGINX proxy previously negotiated connection parameters
        # and use a so-called abbreviated handshake - fast.
        proxy_ssl_session_reuse on;
    }
}
After capturing the TCP packets and checking them in Wireshark, I found
that nginx is sending an RST to the public server and then a FIN/ACK to
the client application (refer to the attached pcap picture).
I have tried enabling the keepalive-related parameters shown in the nginx
config above and also checked the OS's TCP tunables, and I could not find
any related setting that would make nginx kill the TCP connection.
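For reference, these are the OS-level TCP keepalive tunables I looked at
(the exact values differ per distribution; the usual Linux defaults are
7200s idle time, 75s probe interval, 9 probes):

# sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_keepalive_intvl net.ipv4.tcp_keepalive_probes

As far as I understand, proxy_socket_keepalive only turns SO_KEEPALIVE on
for the upstream socket, so the probe timing still comes from these
sysctls.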
Anyone encountering the same issues?
Please refer to the attachments for reference.
On Mon, Feb 19, 2024 at 4:24 PM Kin Seng <ckinseng at gmail.com> wrote:
> My current nginx setup always kills the TCP connection after 5 minutes
> of inactivity, i.e. no transactions.
>
> [...]
>
> Anyone encountering the same issues?
[Attachments: accesslog1.jpg (access log screenshot), wiresharkpcap1.png
(Wireshark capture screenshot)]
Hi,
On Mon, Feb 19, 2024 at 04:24:04PM +0800, Kin Seng wrote:
> My current nginx setup always kills the TCP connection after 5 minutes
> of inactivity, i.e. no transactions.
> [From Wireshark, nginx sends RST to the upstream server and then sends
> FIN,ACK to the downstream client.]
This could be normal behavior if you had 'proxy_timeout 5m;' in your
config. But since you apparently have 86400s as the proxy_timeout value,
something else is going on.
Could you provide more details, for example a debug log?
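If your nginx binary was built with debug support (check the output of
'nginx -V' for --with-debug), something like this in the main context
should be enough to produce one (the path is only an example):

error_log /var/log/nginx/debug.log debug;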
> [...]
--
Roman Arutyunyan
Hello,
On Mon, 19 Feb 2024 16:24:48 +0800
Kin Seng <ckinseng at gmail.com> wrote:
[...]
> Please refer to the attachments for reference.
>
> On Mon, Feb 19, 2024 at 4:24 PM Kin Seng <ckinseng at gmail.com> wrote:
> > After capturing the TCP packets and checking them in Wireshark, I found
> > that nginx is sending an RST to the public server and then a FIN/ACK to
> > the client application (refer to the attached pcap picture).
> >
> > I have tried enabling the keepalive-related parameters shown in the
> > nginx config above and also checked the OS's TCP tunables, and I could
> > not find any related setting that would make nginx kill the TCP
> > connection.
> >
> > Anyone encountering the same issues?
> >
The screenshot shows only one segment with the FIN flag set, which is
odd - there should be one from each party in the close sequence. Also, the
client only returns an ACK, rather than a FIN+ACK, which it should if
nginx were the initiator of closing the connection...
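If you still have the raw capture, an easy way to double-check the close
sequence (rather than eyeballing the screenshot) is to filter for just the
FIN/RST segments, e.g. with tshark (the file name is a placeholder):

# tshark -r capture.pcap -Y 'tcp.flags.fin == 1 || tcp.flags.reset == 1'

In a clean close you would expect a FIN from each side on each leg of the
proxied connection.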
Hi Roman,
Thanks for the suggestion. Let me enable the debug log and retest.
On Tue, Feb 20, 2024, 1:02 AM Roman Arutyunyan <arut at nginx.com> wrote:
> Hi,
>
> This could be normal behavior if you had 'proxy_timeout 5m;' in your
> config. But since you apparently have 86400s as the proxy_timeout value,
> something else is going on.
>
> Could you provide more details, for example a debug log?
>
> [...]
>
> --
> Roman Arutyunyan
Hi J Carter,
This is the only result from the whole 5-minute session (intentionally
left without any transactions to create inactivity). Is there any symptom
which can prove that one of the other parties initiated the closing?
On Tue, Feb 20, 2024, 9:33 AM J Carter <jordanc.carter at outlook.com> wrote:
> Hello,
>
> [...]
>
> The screenshot shows only one segment with the FIN flag set, which is
> odd - there should be one from each party in the close sequence. Also,
> the client only returns an ACK, rather than a FIN+ACK, which it should
> if nginx were the initiator of closing the connection...
Hello,
On Tue, 20 Feb 2024 09:40:13 +0800
Kin Seng <ckinseng at gmail.com> wrote:
> Hi J Carter,
>
> This is the only result from the whole 5-minute session (intentionally
> left without any transactions to create inactivity). Is there any symptom
> which can prove that one of the other parties initiated the closing?
>
A packet capture is the easiest, however it looks like you have missing
data in the PCAP for some reason (such as tcpdump filters).
I suppose you could also perform a packet capture on the client app host
instead of on the nginx host to corroborate the data - that would show
who sent the FIN first.
Also, as Roman says in the adjacent thread, debug-level logs will also
show what happened.
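Something along these lines on the client app host should do (the
interface name, address placeholder and output file are for you to adapt
to your environment):

# tcpdump -i eth0 -w client-side.pcap host <nginx-proxy-ip> and port 35012

Comparing that with a capture taken directly on the nginx host will show
which side the first FIN/RST really comes from.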
> On Tue, Feb 20, 2024, 9:33 AM J Carter <jordanc.carter at outlook.com> wrote:
>
> > [...]
Hi J Carter,
Thank you for your reply.
I am capturing the packets on the firewall, and the filtering for the
previously attached pcap is as per below:
Source: client app -- Dest: nginx proxy, any port to any port
Source: public server -- Dest: nginx proxy, any port to any port
Source: nginx proxy -- Dest: client app, any port to any port
Source: nginx proxy -- Dest: public server, any port to any port
Perhaps I will try to run tcpdump from the client app as well.
One more piece of info I noticed on the client app host: the netstat
command shows CLOSE_WAIT for the terminated session. It seems like
CLOSE_WAIT is a sign that the close came from the other end (in this case
the client app is connected to the nginx proxy) - is this right?
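For reference, this is roughly how I checked it on the client app host
(port as per my config):

# netstat -ant | grep 35012

and the affected session stays in the CLOSE_WAIT state after the drop.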
On Tue, Feb 20, 2024, 10:06 AM J Carter <jordanc.carter at outlook.com> wrote:
> Hello,
>
> [...]
>
> I suppose you could also perform a packet capture on the client app host
> instead of on the nginx host to corroborate the data - that would show
> who sent the FIN first.
>
> Also, as Roman says in the adjacent thread, debug-level logs will also
> show what happened.
>
> [...]
Hello,
On Tue, 20 Feb 2024 11:57:27 +0800
Kin Seng <ckinseng at gmail.com> wrote:
> Hi J Carter,
>
> Thank you for your reply.
> I am capturing the packets on the firewall, and the filtering for the
> previously attached pcap is as per below:
I see - I assumed you had run tcpdump on the nginx host. I'd recommend
doing that too then (as well as on the client app host) if you have a
network firewall in the mix, to see what nginx itself truly
sends/receives.
> Source: client app -- Dest: nginx proxy, any port to any port
>
> Source: public server -- Dest: nginx proxy, any port to any port
>
> Source: nginx proxy -- Dest: client app, any port to any port
>
> Source: nginx proxy -- Dest: public server, any port to any port
>
It shouldn't be missing such data then - although again, this may be
specific to the firewall itself.
> Perhaps I will try to run tcpdump from the client app as well.
>
> One more piece of info I noticed on the client app host: the netstat
> command shows CLOSE_WAIT for the terminated session. It seems like
> CLOSE_WAIT is a sign that the close came from the other end (in this
> case the client app is connected to the nginx proxy) - is this right?
CLOSE_WAIT on the client would indicate that the other party initiated the
connection close (sent the first FIN) - again, the firewall makes me more
skeptical, as it can have its own timers and its own logic for closing TCP
connections.
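If an idle timer on a middlebox does turn out to be the culprit, one common
workaround (just a sketch, not something I have tested in your setup) is to
make the kernel send TCP keepalive probes more often than that idle
timeout, e.g. on the client-facing listener:

listen 35012 so_keepalive=2m::5;

and to lower the net.ipv4.tcp_keepalive_* sysctls so the upstream socket
(with proxy_socket_keepalive on) probes just as frequently.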
Hi J Carter,
Thank you so much for your suggestions. I ran tcpdump concurrently on both
the nginx host and the client app host and found that an F5 device in
between is sending RST to both sides. Now I am able to exclude nginx's
configuration from the investigation.
On Thu, Feb 22, 2024 at 1:46 AM J Carter <jordanc.carter at outlook.com> wrote:
> Hello,
>
> [...]
>
> CLOSE_WAIT on the client would indicate that the other party initiated
> the connection close (sent the first FIN) - again, the firewall makes me
> more skeptical, as it can have its own timers and its own logic for
> closing TCP connections.