We know how reverse proxies work and how they mask the backend server. If needed, this blog explains it beautifully. Nginx is a web server that can also be used as a reverse proxy, load balancer, mail proxy and HTTP cache. In my application, I used it as a reverse proxy. One of the other uses we made of it was to avoid CORS altogether, since we were too lazy to add the extra headers and handling in the backend (just kidding!!). Not wanting the extra preflight OPTIONS call for every request was among the other reasons.
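For context, handling CORS at the proxy instead of avoiding it would have meant maintaining something like the snippet below. This is only a sketch of the approach we skipped; the UI origin and the allowed headers are assumptions, not our actual setup:

location /api {
    # answer the preflight OPTIONS request directly
    if ($request_method = 'OPTIONS') {
        add_header 'Access-Control-Allow-Origin' 'https://ui.example.com';
        add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS';
        add_header 'Access-Control-Allow-Headers' 'Authorization, Content-Type';
        add_header 'Access-Control-Max-Age' 86400;
        return 204;
    }
    # expose the allow-origin header on the actual responses too
    add_header 'Access-Control-Allow-Origin' 'https://ui.example.com';
    proxy_pass http://randomserverurl/;
}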
How did we avoid CORS using Nginx?
Well, since any call from the UI app goes to the container running Nginx before reaching the server, we have control over how and where we want to send the request.
worker_processes 1;

events { worker_connections 1024; }

http {
    # upstream defined here but not referenced by the location below
    upstream myserveraddress {
        server test:80;
    }

    server {
        listen 8080;

        # "compression" is a named log_format that must be declared in the http block
        access_log /var/log/nginx/access.log compression;

        location /api {
            # proxy API calls to the backend; the trailing slash strips the /api prefix
            proxy_pass http://randomserverurl/;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
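A quick side note: the access_log directive above refers to a log format named compression, which Nginx expects to be declared in the http block with a log_format directive. A minimal sketch of such a declaration (the exact fields here are just an assumption) would be:

log_format compression '$remote_addr - $remote_user [$time_local] '
                       '"$request" $status $body_bytes_sent '
                       '"$http_referer" "$http_user_agent" "$gzip_ratio"';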
The UI container has Nginx running in it.
Result: the UI makes the API call on the same origin, so there is no CORS issue. The Nginx rule for "/api" takes care of proxying the request to the server with the headers and cookies as they are. proxy_pass does the trick.
Of course, some of the headers get changed along the way, like Forwarded, the real-IP header, Host, etc. But adding a few proxy_set_header lines, as in the config above, gets those through to the server side as well. The rest remains the same.
So now, what's the problem here?
The request is passed on, or proxied, to the server by Nginx, which gives the impression that the server and the web UI app are on the same instance.
Now, in production we generally use ALBs to route or pass our requests on to the corresponding containers. The same applies here: the request goes through the ALB first.
Fine. So what? My request will be proxied, it will go to the ALB and then on to the corresponding container IP.
Okay! What if there is more than one instance of the server running? We generally have that in production environments, right? Then how does Nginx decide which instance to send the request to? The answer: it doesn't.
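For comparison, Nginx's own load balancing only works when the instances are listed explicitly in an upstream block, which is exactly what we cannot do when they sit behind the ALB. A hypothetical example (these addresses are made up) of what that would require:

upstream backend_instances {
    # only possible when the instance addresses are known and stable
    server 10.0.1.11:80;
    server 10.0.1.12:80;
}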
We ran into issues where, after some time, the API call could no longer reach the server it was proxied to, returning 504s or timeouts.
The reason could be anything: maybe the IP the request was being proxied to has changed, or the instance is busy and cannot take more requests. Generally, an ALB takes care of this when no proxy is involved, since it knows which instances exist and how to divide the requests among them.
Now, why couldn't Nginx do it?
Hmm, the simple reason is that Nginx resolves the hostname in proxy_pass once, when the container is created or brought up, and caches that IP. That's why restarting the container fixes the issue, but only until the IP changes again.
Permanent Solution
What if we tell Nginx: hey, we'll give you the DNS server's address (a resolver), and you use it to re-resolve the name at regular intervals, so the updated IP always reaches you. Peace!!!
Enough talk. Let's do it!
location /api {
    # the API calls are proxied to an ALB whose IPs change over time,
    # so the name has to be re-resolved at intervals
    resolver 172.250.166.56:53 valid=10s;
    resolver_timeout 10s;

    # putting the URL in a variable forces Nginx to resolve it at request time;
    # note: with a variable, the request URI (including /api) is passed unchanged
    set $serverurl "http://randomserverurl";
    proxy_pass $serverurl;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $server_name;
}
* resolver_timeout tells Nginx how long to wait for a response from the DNS server.
* Setting the URL in a variable makes sure the name is resolved when the request is made (honoring the valid interval), instead of only once at startup.
* The valid flag sets how long Nginx will consider the resolver's answer valid and not ask the resolver again; once the name resolves successfully, Nginx will not query the resolver for that period.
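Slotting the updated location back into the server block from earlier, the whole thing now looks roughly like this (the resolver address is a placeholder for whatever DNS serves your environment):

server {
    listen 8080;
    access_log /var/log/nginx/access.log compression;

    location /api {
        resolver 172.250.166.56 valid=10s;   # placeholder DNS server address
        resolver_timeout 10s;
        set $serverurl "http://randomserverurl";
        proxy_pass $serverurl;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}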
And done!! Now Nginx takes care of re-resolving the IP and keeps up with the ALB, and our job is done.