Three approaches to deploying a web application (Part 1)

Example of application: a webscraping program of press articles with sentiment analysis.

Imagine: Your webscraping project is finally finished! Or almost... You now have an application capable of extracting online press articles and then performing sentiment analysis on their content. You have solved all the compatibility issues between NLP libraries in your virtual environment. Your parsing approach is robust, the sentiment analysis holds up and you are satisfied with the visual output.

How to make the world benefit from your invention? On the web of course!

Between a program developed on a local machine and a secure web service accessible to the world, there are a number of steps to consider. The term "deployment" refers to the process required to put an application into service as a web service. There are many approaches to deploying a web service. In this series of articles, we will present 3 scenarios:

Scenario 1: From-scratch approach

Scenario 2: Docker approach

Scenario 3: Serverless approach

Scenario 1: From-scratch approach 

Using FastAPI and NGINX

What is an API?

When I go to a restaurant, I am only interested in two things: the drinks and the food. The rest does not concern me. I don't know the recipe for General Tao Chicken, and I don't want to know about inventory turns, accounting, or impromptu visits from the health department. The operation of the restaurant and the steps required to serve clients are not my problem. In this situation, the restaurant server acts as the API between me and the complex processes of the restaurant (kitchen, inventory, management, etc.). They facilitate and simplify my interaction with the restaurant.

The role of an API, or "Application Programming Interface", is to enable communication between two applications A and B. Application A accesses functionality F of application B via an API, which allows it to:

  • Extend the functionality of A

  • Reduce complexity: the designer of A does not need to develop functionality F themselves

  • Ensure secure communication between A and B
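As a toy illustration of this relationship (all class and method names below are invented for the example), application A treats application B's functionality as a black box reachable only through a small interface:

```python
# Toy illustration: application B exposes functionality F behind a
# minimal API, so application A never touches B's internals.

class ApplicationB:
    """The provider. Its internal recipe is hidden from callers."""

    def _secret_recipe(self, dish: str) -> str:
        # Complex internal work that A does not need to know about.
        return f"{dish} prepared with B's secret recipe"


class ApiOfB:
    """The API: the only surface that application A is allowed to use."""

    def __init__(self) -> None:
        self._b = ApplicationB()

    def order(self, dish: str) -> str:
        # The API can validate input and control access before
        # delegating to B's internal functionality.
        if not dish:
            raise ValueError("empty order")
        return self._b._secret_recipe(dish)


# Application A extends its own features by calling B through the API,
# without ever knowing how the "recipe" works.
api = ApiOfB()
print(api.order("General Tao Chicken"))
```

Like the restaurant server in the analogy, `ApiOfB` hides the kitchen: A gets the result of F without carrying its complexity.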

This mode of operation is actually EXTREMELY common; it is even ubiquitous. Think about it: functionality is the reason an application exists and is the source of all its added value. Designing an application means providing functionality to a user. But an application provider does not only offer "native" functionality developed in-house; it also offers functionality from other applications.

For example, when Google provides the weather, the information actually comes from third parties such as Weather Underground or The Weather Channel. The connectivity between the two is then managed by an API.

What is FastAPI?

FastAPI is a Python framework used to build web APIs. It is built on two other libraries: Starlette (an ASGI web framework) and Pydantic (data validation). Fast, easy to use, easy to deploy and designed to minimize code duplication, FastAPI is very popular for data science and machine learning projects.

What is NGINX?

NGINX is web infrastructure software that plays the roles of web server*, reverse proxy*, load balancer* and cache*.


Some definitions are necessary…

A web server is the machine (and the software running on it) that stores all the files that make up a website (HTML documents, images, CSS and JavaScript files, etc.) and sends the content of those files to the users who visit the website.

A proxy is an intermediary between the client and the web server that provides the client with a certain level of service (e.g. caching) and security (it can act as a firewall and perform encryption and decryption).

A reverse proxy acts in the opposite direction: it protects the web server instead of the client, and can also provide load balancing (as NGINX does).

Load balancing consists of managing the incoming traffic of a web application by distributing it over several servers; the systems that perform it are also known as "Application Delivery Controllers". Formerly handled by hardware platforms in private data centers, load balancing can now be done in software (e.g. NGINX).
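The core idea behind round-robin distribution, one common load-balancing strategy, fits in a few lines. This is only a conceptual sketch, not how NGINX is implemented, and the server addresses are invented:

```python
import itertools

# Pool of upstream application servers (invented addresses).
servers = ["10.0.0.1:8000", "10.0.0.2:8000", "10.0.0.3:8000"]

# Round-robin: each incoming request is sent to the next server in turn,
# cycling back to the first after the last.
next_server = itertools.cycle(servers)

# Simulate six incoming requests and record where each one lands.
assigned = [next(next_server) for _ in range(6)]
print(assigned)  # each server receives every third request
```

With three servers, every server ends up handling one third of the traffic, which is the point of the technique.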

Caching keeps frequently requested information in a place that is quickly accessible to the user. A cache usually holds little information, but that information is retrieved fast (for example, a web browser's cache). As an illustration, imagine a book that you consult frequently: it is more convenient to keep it on the table within reach than in the bookcase across the room.
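The same idea can be shown in a few lines of Python using the standard library's memoization decorator (the "article fetch" here is a stand-in for a slow network request, not real scraping code):

```python
import functools

calls = {"count": 0}


@functools.lru_cache(maxsize=128)
def fetch_article(url: str) -> str:
    # Stand-in for a slow network request; we count how often it runs.
    calls["count"] += 1
    return f"contents of {url}"


fetch_article("https://example.com/news")  # first request: does the slow work
fetch_article("https://example.com/news")  # repeat: answered from the cache
print(calls["count"])  # prints 1 -- the slow work happened only once
```

This is the same trade-off NGINX makes at the HTTP level: spend a little memory to avoid repeating expensive work.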

Steps to follow for scenario 1

1) Create the server: the first step is to set up a server with a cloud platform like an EC2 instance on AWS (Amazon Web Services).

2) Configure the instance:

  • update the operating system

  • install NGINX

  • clone the API code
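On an Ubuntu-based EC2 instance, these three sub-steps might look like the following (the repository URL is a placeholder; substitute your own):

```shell
# Update the operating system packages (Ubuntu/Debian).
sudo apt update && sudo apt upgrade -y

# Install NGINX.
sudo apt install -y nginx

# Clone the API code (placeholder URL -- use your own repository).
git clone https://github.com/your-user/your-scraper-api.git
```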

3) Configure NGINX: with sudo nano, create a configuration file for the NGINX web server called fastAPI_nginx:

server {
    listen 80;
    server_name 18.116.199.161;
    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}

4) Launch the web server: sudo service nginx restart

5) Start the application service (uvicorn): python3 -m uvicorn main:app

 

And that's it! NGINX now makes the connection between the application server (uvicorn) and the web, in addition to managing the traffic coming from the web (load balancing).

Stay tuned for the next article with scenario number 2: the Docker approach!