Hi,
NTLM over HTTP is a three-request "handshake" that must occur over the same
TCP connection.
My HTTP service implements the NTLMSSP acceptor and uses the client's remote
address and port (e.g. "10.11.12.13:54433") to track the authentication state
of each TCP connection.
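For reference, the three requests look roughly like this (paths and tokens
are illustrative and abbreviated; the base64 blobs carry the NTLMSSP
NEGOTIATE, CHALLENGE and AUTHENTICATE messages):

GET /app HTTP/1.1                            <- request 1, anonymous
  -> HTTP/1.1 401, WWW-Authenticate: NTLM

GET /app HTTP/1.1                            <- request 2, NEGOTIATE
Authorization: NTLM TlRMTVNTUAAB...
  -> HTTP/1.1 401, WWW-Authenticate: NTLM TlRMTVNTUAAC...   (CHALLENGE)

GET /app HTTP/1.1                            <- request 3, AUTHENTICATE
Authorization: NTLM TlRMTVNTUAAD...
  -> HTTP/1.1 200 OK

The CHALLENGE issued in the second exchange is bound to the TCP connection,
which is why the server needs a stable per-connection id.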
My implementation also uses a header called 'Jespa-Connection-Id' that
allows the remote address and port to be supplied externally.
NGINX can use this to act as a proxy for NTLM over HTTP with a config like
the following:
server {
    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Jespa-Connection-Id $remote_addr:$remote_port;
    }
}
This works fine.
Now I want to load balance NTLM through NGINX. For this I used the
following:
upstream backend {
    ip_hash;
    server localhost:8080;
    server localhost:8081;
}

server {
    location / {
        proxy_pass http://backend;
        proxy_set_header Jespa-Connection-Id $remote_addr:$remote_port;
    }
}
This also seems to work fine, but I have doubts.
Can NGINX use the same TCP connection to a backend server to send requests
from different client connections?
From what I can tell, NGINX seems to create a separate TCP connection for
each request.
If this is always true, then it seems this scheme should work.
Can you please confirm that this is how NGINX works?
More generally, do you see any problems with this scheme?
I'm not fluent in NGINX but I want to document this as a possible solution
for my users.
Thanks,
Mike
--
Michael B Allen
Java AD DS Integration
http://www.ioplex.com/
On Fri, Nov 18, 2022 at 10:30 PM Michael B Allen <ioplex at gmail.com> wrote:
> Now I want to load balance NTLM through NGINX. For this I used the
> following:
>
> upstream backend {
>     ip_hash;
>     server localhost:8080;
>     server localhost:8081;
> }
>
> server {
>     location / {
>         proxy_pass http://backend;
>         proxy_set_header Jespa-Connection-Id $remote_addr:$remote_port;
>     }
> }
>
> This also seems to work fine but I have doubts.
> Can NGINX use the same TCP connection to a backend server to send requests
> of different client connections?
>
Never mind. As long as the Jespa-Connection-Id uniquely identifies the client,
it doesn't matter what the NGINX-to-backend connections are. They can be
the default HTTP/1.0.
So load balancing NTLM with my implementation using the above nginx.conf
directives works great. I don't know what I was thinking when I posted this.
NGINX is great software BTW. It handles proxying so gracefully it's
fantastic. Nice work.
Mike
--
Michael B Allen
Java AD DS Integration
http://www.ioplex.com/
Hello!
On Fri, Nov 18, 2022 at 10:30:29PM -0500, Michael B Allen wrote:
> NTLM over HTTP is a 3 request "handshake" that must occur over the same TCP
> connection.
> My HTTP service implements the NTLMSSP acceptor and uses the clients remote
> address and port like "10.11.12.13:54433" to track the authentication state
> of each TCP connection.
>
> My implementation also uses a header called 'Jespa-Connection-Id' that
> allows the remote address and port to be supplied externally.
> NGINX can use this to act as a proxy for NTLM over HTTP with a config like
> the following:
>
> server {
>     location / {
>         proxy_pass http://localhost:8080;
>         proxy_set_header Jespa-Connection-Id $remote_addr:$remote_port;
>     }
> }
I'm pretty sure you're aware of this, but just for the record.
Note that NTLM authentication is not HTTP-compatible, but rather
requires very specific client behaviour. Further, NTLM
authentication can easily introduce security issues whenever
proxy servers are used between the client and the origin server,
since it authenticates a connection rather than particular
requests, and connections are not guaranteed to contain only
requests from a particular client. Unless you have very specific
reasons to support it, a better idea might be to use a different
authentication mechanism.
[...]
> This also seems to work fine but I have doubts.
> Can NGINX use the same TCP connection to a backend server to send requests
> of different client connections?
>
> From what I can tell, NGINX seems to create a separate TCP connection for
> each request.
> If this is always true, then it seems this scheme should work.
> Can you please confirm that this is how NGINX works?
>
> More generally, do you see any problems with this scheme?
As of now, nginx by default does not use keepalive connections to
the upstream servers. These can, however, be configured by
using the "keepalive" directive (http://nginx.org/r/keepalive),
and obviously enough this will break the suggested scheme, as there
will be requests from other clients on the same connection.
A better approach might be to check the client address on each
request - this would remove the dependency on whether nginx uses
a new connection for each request or not.
Another issue I can see here is that in a configuration where
Jespa-Connection-Id is not removed by nginx, it might be provided
by the client, claiming an arbitrary address and port. This might
be a security risk.
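For what it's worth, proxy_set_header redefines the header field that
nginx sends upstream, so a client-supplied value is replaced rather than
forwarded. A configuration along these lines (a sketch based on the config
quoted above, not tested against Jespa specifically) should prevent
spoofing for requests that pass through this location:

location / {
    proxy_pass http://localhost:8080;
    # proxy_set_header replaces any Jespa-Connection-Id the client
    # may have sent; only the nginx-generated value reaches the backend.
    proxy_set_header Jespa-Connection-Id $remote_addr:$remote_port;
}

The remaining risk is any other path to the backend that bypasses this
location.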
Also note that if a proxy server is used in front of nginx with
such a configuration, and this proxy server uses keepalive
connections, requests from different clients coming from the proxy
server will share the same address and port. This might be a
security risk unless the authentication token is checked on each
request. This risk is, however, common to all uses of NTLM
authentication, and not really specific to this particular
configuration.
Hope this helps.
--
Maxim Dounin
http://mdounin.ru/
On Sat, Nov 19, 2022 at 4:04 PM Maxim Dounin <mdounin at mdounin.ru> wrote:
> Hello!
>
> On Fri, Nov 18, 2022 at 10:30:29PM -0500, Michael B Allen wrote:
>
> > NTLM over HTTP is a 3 request "handshake" that must occur over the same
> > TCP connection.
> > My HTTP service implements the NTLMSSP acceptor and uses the clients remote
> > address and port like "10.11.12.13:54433" to track the authentication state
> > of each TCP connection.
> >
> > My implementation also uses a header called 'Jespa-Connection-Id' that
> > allows the remote address and port to be supplied externally.
> > NGINX can use this to act as a proxy for NTLM over HTTP with a config like
> > the following:
> >
> > server {
> >     location / {
> >         proxy_pass http://localhost:8080;
> >         proxy_set_header Jespa-Connection-Id $remote_addr:$remote_port;
> >     }
> > }
>
> I'm pretty sure you're aware of this, but just for the record.
> Note that NTLM authentication is not HTTP-compatible, but rather
> requires very specific client behaviour. Further, NTLM
> authentication can easily introduce security issues as long as any
> proxy servers are used between the client and the origin server,
> since it authenticates a connection rather than particular
> requests, and connections are not guaranteed to contain only
> requests from a particular client.
>
Hi Maxim,
Hijacking NTLM-authenticated TCP connections is not THAT easy.
But generally, we assume TLS is being used if people care at all about
security.
AFAIK TLS can't go through proxies without tunnelling, so either way you
shouldn't be able to hijack a TLS connection.
NTLM is used because it's fast, reliable and provides a truly password-free
SSO experience.
While Kerberos provides superior security, it can be fickle (client access
to the DC, time sync, heavy dependence on DNS, SPNs, ...).
Since NTLM is the fallback mechanism, it always works.
NTLM has issues that are more significant than the ones you described, but
they can be managed.
> More generally, do you see any problems with this scheme?
>
> As of now, nginx by default does not use keepalive connections to
> the upstream servers. These can, however, be configured by
> using the "keepalive" directive (http://nginx.org/r/keepalive),
> and obviously enough this will break the suggested scheme as there
> will be requests from other clients on the same connection.
>
My implementation works with connection caching (keepalive) to backends.
Here's the config I'm testing right now and so far it's holding up:
upstream backend {
    ip_hash;
    server localhost:8080;
    server localhost:8081;
    keepalive 16;
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Jespa-Connection-Id $remote_addr:$remote_port;
    }
}
Loopback captures look right.
Note the key difference in my scheme is the Jespa-Connection-Id header, which
gives the backend the id it needs to properly map clients to security
contexts.
Mike
--
Michael B Allen
Java AD DS Integration
http://www.ioplex.com/