Nginx as reverse proxy and IP resolution

We know how reverse proxies work and how they mask the backend server; if needed, this blog explains it beautifully. Nginx is a web server that can also be used as a reverse proxy, load balancer, mail proxy and HTTP cache. In my application, I used it as a reverse proxy. Another use we made of it was to avoid the CORS behaviour altogether, since we were too lazy to add the extra headers and handling in the backend (just kidding!!). Not wanting those extra OPTIONS preflight calls for every request was among the other reasons.

How did we avoid CORS using NginX?

Well, since any call from the UI app goes to the container running Nginx before going to the server, we have control over how and where we want to send the request.

worker_processes 1;

events { worker_connections 1024; }


http {
    # Declared for completeness; the /api location below proxies directly
    # to the backend URL instead of this upstream.
    upstream myserveraddress {
        server test:80;
    }

    server {
        listen 8080;
        # "compression" is a custom log_format name, assumed to be defined
        # elsewhere in this http block; omit it to fall back to the default
        # "combined" format.
        access_log /var/log/nginx/access.log compression;

        location /api {
            proxy_pass         http://randomserverurl/;
            proxy_redirect     off;
            proxy_set_header   Host $host;
            proxy_set_header   X-Real-IP $remote_addr;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header   X-Forwarded-Host $server_name;
        }
    }
}

The UI container has Nginx running in it.

Result: the UI makes the API call on the same origin, therefore no CORS issue. The Nginx rule “/api” takes care of proxying the request to the server with the headers and cookies as they are. proxy_pass does the trick.

Of course, some of the headers get changed, like the Forwarded, X-Real-IP and Host headers. But adding just a few lines can help you recover that information on the server side as well; the rest remains the same.
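For example, if the backend service itself happens to sit behind its own Nginx (an assumption, not something from the setup above), the realip module can restore the original client address from the headers set by the proxy; otherwise the application can simply read X-Real-IP / X-Forwarded-For itself. A minimal sketch, with an assumed trusted proxy range:

# On the backend's Nginx (hypothetical): trust the proxy/ALB address range
# and take the real client IP from the X-Real-IP header set above.
set_real_ip_from  10.0.0.0/8;    # assumed internal range of the proxy/ALB
real_ip_header    X-Real-IP;
real_ip_recursive on;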

So now, what's the problem here?

The request will be passed on, or proxied, to the server by Nginx. Therefore, it gives the impression that the server and the web UI app are on the same instance.

Now, in production, we generally make use of ALBs (Application Load Balancers) to route or pass on our requests to the corresponding containers. The same is the case here: the request goes through the ALB first.

Fine. So what? My request will be proxied; it will go to the ALB and then to the corresponding container IP.

Okay! What if there is more than one instance of the server running? We generally have that in production environments, right? Then how does Nginx decide which instance to send the request to? To answer this: Nginx doesn't.

We face issues where, after some time, the API call can no longer reach the server it was proxied to, and therefore may return 504 or timeout errors.

The reason for this could be anything: maybe the IP the request was proxied to has changed, or the instance is busy and cannot take more requests. Generally, an ALB takes care of this when no proxy is involved, since it knows which instances are available and how to divide the requests among them.
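For contrast, Nginx can spread requests across backends on its own, but only when it is given the instance addresses explicitly, which we cannot do when the instances live behind an ALB and their IPs come and go. A minimal sketch (inside the http block), with made-up addresses:

upstream api_backend {
    # Hypothetical, statically known instances; round-robin by default.
    server 10.0.1.10:80;
    server 10.0.1.11:80;
}

server {
    listen 8080;

    location /api {
        proxy_pass http://api_backend/;
    }
}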

Now, why couldn't Nginx do it?

Hmm, the simple reason is that Nginx resolves the IP and caches it when the container is created or brought up, i.e. when the configuration is loaded. That's why restarting the Nginx container should fix the issue, though only temporarily.

Permanent Solution

What if we tell Nginx: Hey! We will give you the DNS server's IP (a resolver); use it to re-resolve the name at regular intervals, and that way the updated IP will be communicated to you. Peace!!!

Enough talk. Let's do it!

location /api {
    # Since the API calls are proxy-passed to an ALB, we need to
    # re-resolve its IP at intervals instead of using the address
    # cached at startup.
    resolver           172.250.166.56:50 valid=10s;
    resolver_timeout   10s;
    # Note: when proxy_pass takes its target from a variable, the request
    # URI (including the /api prefix) is forwarded as-is unless rewritten.
    set                $serverurl "http://randomserverurl";
    proxy_pass         $serverurl;
    proxy_redirect     off;
    proxy_set_header   Host $host;
    proxy_set_header   X-Real-IP $remote_addr;
    proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header   X-Forwarded-Host $server_name;
}

*resolver_timeout tells Nginx how long to wait for a response from the DNS server.

Setting the URL in a variable makes sure that the name is resolved whenever a request is made, so the validity period is actually honoured.

The valid flag tells Nginx how long to consider the answer from the resolver valid; it will not ask the resolver again during that period. So once Nginx resolves the IP successfully, it will not use the resolver for that time period.
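As an aside, the resolver address to use depends on where this Nginx is running. Two common defaults (assumptions about your environment, so verify them before relying on these):

# Inside a Docker user-defined network, the embedded DNS server:
# resolver 127.0.0.11 valid=10s;
# Inside an AWS VPC, the Amazon-provided resolver:
# resolver 169.254.169.253 valid=10s;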

And done!! Now Nginx will take care of re-resolving the IP, just as an ALB would. And our job is done.


HTTP2 – Performance saviour for applications

Recently, I took up the task of improving the performance of my application and shortlisted the probable reasons or factors behind it. A few of them were:

  1. Large bundle size of the application.
  2. Wrong handling of redirection logic, i.e. not using in-app redirection mechanisms.
  3. Multiple 3rd party scripts delaying the rendering.
  4. Number of concurrent browser calls to download resources.

The first three could still be handled, but what about the last one? Since the APIs were also time-consuming, I concluded that some resource downloads were probably getting delayed due to the high number of concurrent requests. But then I discovered that a single protocol version change could save me the extra effort of trying to work around this. Let's see how.

Below is an example of the resource timing in the Chrome Network tab.

Resource timing from the waterfall for a browser call on chrome

The figures which I found worrisome were the stalled and queueing timings. The request was stalled/blocked for around 200ms before it was sent. The blocking or queueing time can be due to any of the following:

  • The request was postponed by the rendering engine because it's considered lower priority than critical resources (such as scripts/styles). This often happens with images.
  • The request was put on hold to wait for an unavailable TCP socket that's about to free up.
  • The request was put on hold because the browser only allows six TCP connections per origin on HTTP 1.
  • Time spent making disk cache entries (typically very quick).

(The points above are from the Understanding Resource Timing documentation.)

Wait! What's the third point? Do we need to redo our logic to minimize the number of concurrent requests?

Well, yes, ideally we should reduce the number of browser requests, as most SEO site checkup websites recommend not having more than 20 external requests on a page.

But what is this limitation? Well, the number of HTTP connections a browser will open is limited; see Browser Connection Limitation for reference. There are multiple reasons behind it.

How does this slow the page down?

For example, if I am trying to send 7 requests to the same host at once, the 7th request may be blocked until one of the connections frees up. The page therefore may not get all the resources it needs to render, and the stalled time gets added to your loading time.

Let’s see what all we can do to tackle this.

  • Hmm... As many do, we can keep different hosts for serving different kinds of resources, e.g. apis.example.com for server APIs, style.example.com for CSS resources, console-static.example.com for other static resources. This can help because using different hosts (domain sharding) increases the number of connections the browser will open.
  • Try to reduce the number of requests using techniques like image or CSS sprites, etc.
  • Keep a balance between the front end and back end to optimize browsing experience.
  • Cautiously choosing the APIs which are most needed in our application, maybe.

But can we avoid the above problem without making host changes, code changes or logic changes???

Before jumping to the solution, let's look at one more field in the Network requests section of Chrome: the Protocol column. It tells you what protocol your connection or browser request used.

Network tab with Protocol field

The above limitations, as mentioned in the quote, apply to HTTP protocol versions <= 1.x.

Earlier, in the HTTP 1 protocol, each request needed its own TCP connection, repeating the handshake mechanism every time, which delayed browser requests and limited the number of connections.

The HTTP 1.1 protocol tried to make this better by bringing in the facility to reuse connections (keep-alive), but requests were still handled first in, first out, so a slow response could hold up the ones queued behind it.

HTTP 2 brings with it a new way of making browser calls using a binary framing layer, and thus tries to overcome the performance limitations of the earlier protocols by providing the following:

  1. Single connection per origin => i.e. it serves multiple requests over a single connection (multiplexing), so parallel requests can be served without blocking one another.
  2. Server push => it also brings the capability to send multiple responses for a single request. E.g. if you know that, along with a request, other resources will also be needed, you can push them to the client without any extra call, which helps improve performance (a hedged Nginx sketch follows below).
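For illustration only, here is a minimal server-push sketch for Nginx, assuming a version that still ships the http2_push directive (it appeared in 1.13.9 and has been removed from recent releases); the file and certificate paths are hypothetical:

server {
    listen 443 ssl http2;
    ssl_certificate     /etc/nginx/certs/example.crt;   # hypothetical paths
    ssl_certificate_key /etc/nginx/certs/example.key;

    location = /index.html {
        root /usr/share/nginx/html;
        # When index.html is requested, proactively push the stylesheet and
        # script the page is known to need (hypothetical resources).
        http2_push /css/app.css;
        http2_push /js/app.js;
    }
}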

The above was just an overview of HTTP2; getting into it deeply is beyond the scope of this article. You can read more about it on Google's developer site on HTTP2.

Using the HTTP2 protocol can save us the extra effort otherwise needed to improve the networking performance of our applications.

What all is needed to start using HTTP2 protocol?

  • From UI side

Nothing needs to be done, since the newer browser versions already support it. So if you have your browser updated to support HTTP2, we are ready from the browser side to start using the protocol.

  • From Server side

What if the browser says it can use HTTP2, but the server says it cannot? Since we know the protocol to be used is negotiated during the handshake between the client and the server, our server also needs to support the HTTP2 protocol.

In my case it was simple, as I was using Nginx for serving my application resources. Just adding the below line did the trick:

listen          443 ssl http2;

And it's done!!!
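For context, the full server block that line would live in looks roughly like this (a sketch; the server name and certificate paths are assumptions). Note that browsers only speak HTTP2 over TLS, which is why the ssl part goes with it:

server {
    listen              443 ssl http2;
    server_name         example.com;                     # hypothetical
    ssl_certificate     /etc/nginx/certs/example.crt;    # hypothetical paths
    ssl_certificate_key /etc/nginx/certs/example.key;

    # Static application resources, now served over HTTP2.
    root /usr/share/nginx/html;
}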

My browser is updated and the server listens using the HTTP2 protocol, so now I can possibly postpone the task of reducing the number of calls.

No more blocking, no more delaying.

Apart from the fact that we should keep our applications updated with the latest technologies, bumping the protocol version to at least HTTP2 gives us some extra performance improvements. At the very least, the HTML, scripts and CSS can be downloaded in parallel. Thanks, HTTP2, for taking care of this internally for me.

I hope it helps. Thanks for reading. Please add your feedback below and help me make it more useful.


10 must know Vim commands for beginners

No matter what profile we work in, developer or QA, and no matter at what stage of the software release we are contributing, changing or accessing files in production is something we can never escape from. But do production environments have useful IDEs or any editor to make our life easier??

Even though I am a front-end developer, I could not escape accessing the files even after deploying the build; changing the files and testing whether it works always helps. Here comes Vim, the editor which, no matter how much we try, we cannot ignore.

Here, without going a lot deeper, I have tried to list the most common and useful operations that we would like to do on a file using Vim.

All commands starting with a colon (:) are used in escape (normal) mode, i.e. press the Escape key to enter the mode and then type the command.

1. Open / Create a file in Vim

shwetas@dell:~ $ vi the-coders-stop.js 

2. Edit an opened file

Once the file is opened, press i

This will put the file in edit mode, with INSERT displayed at the bottom.

INSERT implies edit mode is on

3. Jump to a line number in file

In esc mode, :[line number]

e.g. :1 for line number one or :100 for line 100

:3 will take the cursor to the line with closing curly brackets

4. Copy and Paste text in edit mode

Copy => Select text and press Ctrl + Insert
Paste => Shift + Insert

5. Delete number of lines from a point

Press esc and then [number of lines]dd  

e.g. in esc mode, 200dd will delete the next 200 lines

cursor is at line 2 and 2dd is typed, therefore lines 2 and 3 get deleted

6. Delete entire content of the file

In `esc` mode,

i) Jump to line number 1 using point 3, i.e. :1
ii) Delete at least as many lines as you think the file has, using point 5.

e.g. 2000dd

:1 followed by 10dd (since file doesn’t have more than 10 lines) deletes all content

7. Close an opened file without any unsaved changes

In esc mode, type :q

8. Close an opened or newly created file, saving the changes made

In esc mode, :wq

9. Close the opened file, discarding unsaved changes

In esc mode, :q!

10. Search for text

Type /[the text] and press Enter.
Keep pressing n to go to the next occurrence and Shift+n to go to the previous one.
Searching hello word occurrences

Thanks for reading!! I hope it helps. Please share and leave your valuable comments and feedback below.
