Web Servers

How to Use Python ‘SimpleHTTPServer’ to Create Webserver or Serve Files Instantly


SimpleHTTPServer is a Python module that lets you instantly create a web server or serve your files in a snap. The main advantage of Python's SimpleHTTPServer is that you don't need to install anything extra, because almost every Linux distribution ships with a Python interpreter by default.

You can also use SimpleHTTPServer as a file sharing method. You just have to start the module in the directory where your shareable files are located. I will show you several demonstrations in this article using various options.

Step 1: Check for Python Installation

1. Check whether Python is installed on your server by issuing the command below.

- python -V

OR

- python --version

It will print the version of the Python interpreter you have, or show an error message if Python is not installed.


Check Python Version

2. You're lucky if it is there by default, less work for you. If it is not installed for some reason, install it as described below.

If you have a SUSE distribution, type yast in the terminal -> go to Software Management -> type 'python' without quotes -> select the Python interpreter -> press the space key to mark it -> and then install it.

Simple as that. For this, you need to have the SUSE ISO mounted and configured as a repository in YaST, or you can simply install Python from the web.


Install Python on Suse

If you’re using different operating systems like RHEL, CentOS, Debian, Ubuntu or other Linux operating systems, you can just install python using yum or apt.

In my case I use SLES 11 SP3, where the Python interpreter comes installed by default. In most cases you won't have to worry about installing the Python interpreter on your server.

Step 2: Create a Test Directory and Enable SimpleHTTPServer

3. Create a test directory where you won't mess with system files. In my case I have a partition called /x01, and I have created a directory called sfnews in it and added some test files.


Create Testing Directory

4. Your prerequisites are ready now. All you have to do is try Python's SimpleHTTPServer module by issuing the command below from within your test directory (in my case, /x01/sfnews/).

- python -m SimpleHTTPServer

Enable SimpleHTTPServer
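Note: the SimpleHTTPServer module exists only in Python 2. If your machine has Python 3 instead, the same one-liner is provided by the http.server module:

- python3 -m http.server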

5. After SimpleHTTPServer starts successfully, it serves files through port number 8000. You just have to open a web browser and enter ip_address:port_number (in my case it's 192.168.5.67:8000).
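You can also test it from another terminal or machine with curl, using the address from this example (adjust the IP address and port to your setup):

- curl http://192.168.5.67:8000/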


Directory Listing

6. Now click on the 'sfnews' link to browse the files and directories of the sfnews directory, as in the screen below.


Browse Directory Files

7. SimpleHTTPServer serves your files successfully. You can see what happened by looking at the terminal where you executed the command, after accessing the server through your web browser.


Python SimpleHTTPServer Status

Step 3: Changing SimpleHTTPServer Port

8. By default python’s SimpleHTTPServer serves files and directories through port 8000, but you can define a different port number (Here I am using port 9999) as you desire with the python command as shown below.

- python -m SimpleHTTPServer 9999

Change SimpleHTTPServer Port


Directory Listing on Different Port

Step 4: Serve Files from Different Location

9. Now that you have tried it, you might like to serve your files from a specific location without actually changing into that path.

As an example, suppose you are in your home directory and you want to serve the files in the /x01/sfnews/ directory without cd-ing into /x01/sfnews. Let's see how to do it.

- pushd /x01/sfnews/; python -m SimpleHTTPServer 9999; popd;

Serve Files from Location


Directory Listing on Different Port
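Note: on Python 3.7 or later, the http.server module accepts a --directory option, so the same result can be achieved without pushd/popd:

- python3 -m http.server 9999 --directory /x01/sfnews/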

Step 5: Serve HTML Files

10. If there is an index.html file in the directory you are serving, the Python interpreter will automatically detect it and serve the HTML file instead of the plain directory listing.

Let's have a look at it. In my case I put a simple HTML page in a file named index.html and place it in /x01/sfnews/.

<html>
<head><title>TECMINT</title></head>
<body text="blue"><h1>
Hi all. SimpleHTTPServer works fine.
</h1>
<p><a href="https://schoolforum.me">Visit TECMINT</a></p>
</body>
</html>

Create Index File

Now save it and run SimpleHTTPServer on /x01/sfnews and go to the location from a web browser.

- pushd /x01/sfnews/; python -m SimpleHTTPServer 9999; popd;

Enable Index Page


Serving Index Page

Very simple and handy. You can serve your files or your own HTML code in a snap. Best of all, you don't have to worry about installing anything at all. In a scenario where you want to share a file with someone, you don't have to copy it to a shared location or make your directories shareable.

Just run SimpleHTTPServer on the directory and it is done. There are a few things to keep in mind when using this Python module. While it serves files it runs in the terminal and prints out what happens there: when someone accesses it from a browser or downloads a file, it shows the IP address that made the request, the file that was downloaded, and so on. Very handy, isn't it?

If you want to stop serving, press Ctrl+C to stop the running module. So now you know how to use Python's SimpleHTTPServer module as a quick solution to serve your files. Commenting below with suggestions and new findings would be a great favour and helps enhance future articles.

Reference Links

SimpleHTTPServer Docs

Setting Up Web Servers Load Balancing Using ‘POUND’ on RHEL/CentOS


POUND is a load balancing program developed by the ITSECURITY Company. It is a lightweight, open source reverse proxy tool which can be used as a web server load balancer to distribute load among several servers. POUND gives the end user several advantages which are very convenient and get the job done right:

  1. Supports virtual hosts.
  2. Configurable.
  3. Automatically detects when a backend server fails or recovers from a failure, and bases its load balancing decisions on that.
  4. Rejects incorrect requests.
  5. Not tied to any specific browser or web server.

Let's have a look at how to get this done.

First of all, you will need a scenario for better understanding. I will use a scenario with two web servers and one gateway server that needs to balance the requests arriving at the gateway across the web servers.

Pound Gateway Server : 172.16.1.222
Web Server 01 : 172.16.1.204
Web Server 02 : 192.168.1.161

Pound Web Server Load Balancer

Step 1: Install Pound Load Balancer on Gateway Server

1. The easiest way to install Pound is by using pre-compiled RPM packages; you can find RPMs for RedHat-based distributions at:

  1. http://www.invoca.ch/pub/packages/pound/

Alternatively, Pound can be easily installed from the EPEL repository as shown below.

- yum install epel-release
- yum install Pound

After Pound is installed, you can verify it by issuing this command.

- rpm -qa | grep Pound

Install Pound Load Balancer

2. Secondly, you need two web servers to balance the load across, and make sure they have clear identifiers so you can verify that the Pound configuration works fine.

Here I have two servers bearing IP addresses 172.16.1.204 and 192.168.1.161.

For ease of use, I have used Python's SimpleHTTPServer to create an instant web server on both of them (see the SimpleHTTPServer article above).

In my scenario, I have my webserver01 running on 172.16.1.204 through port 8888 and webserver02 running on 192.168.1.161 through port 5555.
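If you want to reproduce this setup, the two test instances need nothing more than the SimpleHTTPServer command from the previous article, run from a test directory on each web server with the ports used in this example:

- python -m SimpleHTTPServer 8888    # on 172.16.1.204 (webserver01)
- python -m SimpleHTTPServer 5555    # on 192.168.1.161 (webserver02)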


Pound Webserver 1


Pound Webserver 2

Step 2: Configure Pound Load Balancer

3. Now it's time to do the configuration. Once you have installed Pound successfully, it creates Pound's config file in /etc, namely pound.cfg.

We have to edit the server and backend details in order to balance the load among the webservers. Go to /etc and open pound.cfg file for editing.

- vi /etc/pound.cfg

Make the changes as suggested below.

ListenHTTP
    Address 172.16.1.222
    Port 80
End

ListenHTTPS
    Address 172.16.1.222
    Port    443
    Cert    "/etc/pki/tls/certs/pound.pem"
End

Service
    BackEnd
        Address 172.16.1.204
        Port    8888
    End

    BackEnd
        Address 192.168.1.161
        Port    5555
    End
End

This is how my pound.cfg file looks.


Configure Pound Load Balancer

Under the “ListenHTTP” and “ListenHTTPS” tags, you have to enter the IP address of the server on which you have installed POUND.

By default a server handles HTTP requests through port 80 and HTTPS requests through port 443. Under the “Service” tag, you can add any number of “BackEnd” sub-tags; each BackEnd tag bears the IP address and the port number a web server is running on.

Now save the file after editing it correctly and restart the POUND service by issuing one of the commands below.

- /etc/init.d/pound restart 
OR
- service pound restart
OR
- systemctl restart pound.service

Start Pound Load Balancer
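If you also want Pound to start automatically at boot, enable the service with whichever tool matches your init system (for example, chkconfig on SysV-init based systems or systemctl on systemd based systems):

- chkconfig pound on
OR
- systemctl enable pound.service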

4. Now it’s time to check. Open two web browsers to check whether our configurations work fine. In the address bar type your POUND gateway’s IP address and see what appears.

The first request should load webserver01, and the second request, from the other web browser, should load webserver02.


Check Pound Load Balancing

Furthermore, think of a scenario where you have two web servers to load balance and one server's performance is good while the other's is not so good.

So when balancing the load between them, you will have to consider which server should carry more weight. Obviously, the server with the better performance specs.

To balance the load like that, you just have to add a single parameter inside the pound.cfg file. Let’s have a look at it.

Say the server 192.168.1.161:5555 is the better one. Then you need to direct more of the request flow to that server. Under the “BackEnd” tag configured for the 192.168.1.161 server, add the “Priority” parameter before the End tag.

Look at the example below.


Pound Load Balancing Priority
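For reference, here is a sketch of how that BackEnd block could look with the Priority directive added (the value 8 is just an example; pick one that reflects how much more traffic the stronger server should take):

    BackEnd
        Address 192.168.1.161
        Port    5555
        Priority 8
    End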

The range we can use for the “Priority” parameter is 1 to 9. If we do not define it, the default value of 5 is assigned.

In that case the load is balanced equally. If we define a Priority number, POUND will send traffic to the server with the higher priority more often. So in this case, 192.168.1.161:5555 will be hit more often than the server 172.16.1.204:8888.

Step 3: Planning for Emergency Breakdowns

5. Emergency Tag: This tag is used to load a server in case all of the backend servers are dead. You can add it before the last End tag of pound.cfg as follows.

Emergency
    Address 192.168.5.10
    Port    8080
End

6. POUND always keeps track of which backend servers are alive and which are not. We can define how often (in seconds) POUND should check the backend servers by adding the “Alive” parameter to pound.cfg.

For example, “Alive 30” sets it to 30 seconds. Pound temporarily disables backend servers that are not responding; "not responding" means the server may be dead or unable to establish a connection at that moment.

POUND then re-checks the disabled backend server after every time period you defined in pound.cfg, and if the server can establish a connection again, POUND puts it back to work.
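As a sketch, Alive is a single global directive placed near the top of pound.cfg, with the value given in seconds:

Alive 30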

7. The POUND daemon is handled by the poundctl command. With it we don't need to edit the pound.cfg file; we can manage the listener server, backend servers, sessions and so on via a single command.

Syntax: poundctl -c /path/to/socket [-L/-l] [-S/-s] [-B/-b] [-N/-n] [-H] [-X]
  1. -c defines the path to your control socket.
  2. -L / -l defines the listener of your architecture.
  3. -S / -s defines the service.
  4. -B / -b defines the backend servers.

See poundctl man pages for more information.
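For example, assuming pound.cfg contains a Control line pointing at a control socket (the path below is only an illustration; use whatever path you have configured), you could dump the configured listeners, services and backend servers with:

- poundctl -c /var/run/pound/poundctl.socket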

Hope you enjoy this hack and discover more options regarding this. Feel free to comment below for any suggestions and ideas. Keep connected with sfnews for handy and latest How To’s.

Read Also: Installing XR Crossroads Load Balancer for Web Servers

Setting Up ‘XR’ (Crossroads) Load Balancer for Web Servers on RHEL/CentOS


Crossroads is a service-independent, open source load balancing and fail-over utility for Linux and TCP-based services. It can be used for HTTP, HTTPS, SSH, SMTP, DNS and more. It is also a multi-threaded utility that consumes only a single memory space, which helps increase performance when balancing load.

Let's have a look at how XR works. We place XR between the network clients and a nest of servers, and it dispatches client requests to the servers while balancing the load.

If a server is down, XR forwards the next client request to the next server in line, so the client sees no downtime. Have a look at the diagram below to understand the kind of situation we are going to handle with XR.


Install XR Crossroads Load Balancer

There are two web servers and one gateway server, on which we install and set up XR to receive client requests and distribute them among the web servers.

XR Crossroads Gateway Server : 172.16.1.204
Web Server 01 : 172.16.1.222
Web Server 02 : 192.168.1.161

In the above scenario, my gateway server (i.e. XR Crossroads) bears the IP address 172.16.1.204, webserver01 is 172.16.1.222 and listens through port 8888, and webserver02 is 192.168.1.161 and listens through port 5555.

Now all I need to do is balance the load of all the requests received by the XR gateway from the internet and distribute them between the two web servers.

Step 1: Install XR Crossroads Load Balancer on Gateway Server

1. Unfortunately, there are no binary RPM packages available for Crossroads; the only way to install XR Crossroads is from the source tarball.

To compile XR, you must have a C++ compiler and the GNU make utilities installed on the system for the installation to go through error free.

- yum install gcc gcc-c++ make

Next, download the source tarball by going to their official site (https://crossroads.e-tunity.com), and grab the archived package (i.e. crossroads-stable.tar.gz).

Alternatively, you can use the wget utility to download the package, extract it in any location (e.g. /usr/src/), change into the unpacked directory and issue the “make install” command.

- wget https://crossroads.e-tunity.com/downloads/crossroads-stable.tar.gz
- tar -xvf crossroads-stable.tar.gz
- cd crossroads-2.74/
- make install

Install XR Crossroads Load Balancer

After the installation finishes, the binary files are created under /usr/sbin/ and the XR configuration is placed in /etc, namely “xrctl.xml”.
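You can quickly confirm that the binaries and the config file are in place before moving on (if xrctl.xml was not created automatically by your build, you will simply create it in the next step):

- ls -l /usr/sbin/xr* /etc/xrctl.xml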

2. As the last prerequisite, you need two web-servers. For ease of use, I have created two python SimpleHTTPServer instances in one server.

To see how to setup a python SimpleHTTPServer, read our article at Create Two Web Servers Easily Using SimpleHTTPServer.

As I said, we’re using two web-servers, and they are webserver01 running on 172.16.1.222 through port 8888 and webserver02 running on 192.168.1.161 through port 5555.


XR WebServer 01


XR WebServer 02

Step 2: Configure XR Crossroads Load Balancer

3. All requisites are in place. Now what we have to do is configure the xrctl.xml file to distribute the load received by the XR server from the internet among the web servers.

Now open xrctl.xml file with vi/vim editor.

- vim /etc/xrctl.xml

and make the changes as suggested below.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<system>
<uselogger>true</uselogger>
<logdir>/tmp/</logdir>
</system>
<service>
<name>sfnews</name>
<server>
<address>172.16.1.204:8080</address>
<type>tcp</type>
<webinterface>0:8010</webinterface>
<verbose>yes</verbose>
<clientreadtimeout>0</clientreadtimeout>
<clientwritetimeout>0</clientwritetimeout>
<backendreadtimeout>0</backendreadtimeout>
<backendwritetimeout>0</backendwritetimeout>
</server>
<backend>
<address>172.16.1.222:8888</address>
</backend>
<backend>
<address>192.168.1.161:5555</address>
</backend>
</service>
</configuration>

Configure XR Crossroads Load Balancer

Here, you can see a very basic XR configuration done within xrctl.xml. I have defined the XR server, the backend servers and their ports, and the web interface port for XR.

4. Now you need to start the XR daemon by issuing below commands.

- xrctl start
- xrctl status

Start XR Crossroads

5. Okay, great. Now it's time to check whether the configs are working fine. Open two web browsers, enter the IP address of the XR server with the port, and see the output.


Verify Web Server Load Balancing

Fantastic, it works fine. Now it's time to play with XR.

6. Now it's time to log in to the XR Crossroads dashboard and see the port we've configured for the web interface. Enter your XR server's IP address together with the web-interface port number you configured in xrctl.xml.

http://172.16.1.204:8010

XR Crossroads Dashboard

This is what it looks like. It's easy to understand, user-friendly and easy to use. In the top right corner it shows how many connections each backend server received, along with additional details about the incoming requests. You can even set the load weight each server should bear, the maximum number of connections, the load average and so on.

The best part is that you can actually do this even without configuring xrctl.xml. The only thing you have to do is issue a command with the following syntax and it will get the job done.

- xr --verbose --server tcp:172.16.1.204:8080 --backend 172.16.1.222:8888 --backend 192.168.1.161:5555

Explanation of above syntax in detail:

  1. --verbose shows what happens when the command is executed.
  2. --server defines the XR server you have installed the package on.
  3. --backend defines the web servers you need to balance the traffic to.
  4. tcp defines that the service uses TCP.

For more details, about documentations and configuration of CROSSROADS, please visit their official site at: https://crossroads.e-tunity.com/.

XR Crossroads enables many ways to enhance your server performance, prevent downtime and make your admin tasks easier and handier. Hope you enjoyed the guide; feel free to comment below with suggestions and clarifications. Keep in touch with sfnews for handy How To's.

Read Also: Installing Pound Load Balancer to Control Web Server Load

How to Test Local Websites or Apps on Internet Using Ngrok


Are you a website or mobile application developer, and want to expose your localhost server behind a NAT or firewall to the public Internet for testing purposes? In this tutorial, we will reveal how to do this securely using ngrok.

Ngrok is a sensational, free open source and cross-platform reverse proxy server for exposing local servers behind NATs and firewalls to the public Internet over secure tunnels. It is a remarkable computer program that you can use to implement personal cloud services directly from home.

It essentially establishes secure tunnels to your localhost, enabling you to run demos of web sites before actual deployment, test mobile apps connected to your locally running backend, and build webhook consumers on your development machine.

Ngrok Features:

  • Easy install with zero run-time dependencies for any major platform and works fast.
  • Supports secure tunnels.
  • Captures and analyzes all traffic over the tunnel for later inspection and replay.
  • Allows you to do away with port forwarding in your router.
  • Enables HTTP authentication (password protection).
  • Uses TCP tunnels to expose networked services that do not use HTTP, such as SSH.
  • Supports tunneling only HTTP or HTTPS with SSL/TLS certificates.
  • Supports multiple simultaneous tunnels.
  • Allows for replaying webhook requests.
  • Enables you to work with virtual-host sites.
  • It can be automated via an API plus many options in the paid plan.

Before using it, you need to have a web server installed, or consider setting up a functional LAMP or LEMP stack; otherwise follow these guides:

Install LAMP Stack on Linux:

  1. Installing LAMP (Linux, Apache, MariaDB, PHP/PhpMyAdmin) in RHEL/CentOS 7.0
  2. How to Install LAMP with PHP 7 and MariaDB 10 on Ubuntu 16.10

Install LEMP Stack on Linux:

  1. How to Install LEMP (Linux, Nginx, MariaDB, PHP-FPM) on Debian 9 Stretch
  2. How To Install Nginx, MariaDB 10, PHP 7 (LEMP Stack) in 16.10/16.04
  3. Install Latest Nginx, MariaDB and PHP on RHEL/CentOS 7/6 & Fedora 20-26

How to Install Ngrok in Linux

Ngrok is super easy to install; simply run the commands below to download and unzip the archive file, which contains a single binary.

$ mkdir ngrok
$ cd ngrok/
$ wget -c https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
$ unzip ngrok-stable-linux-amd64.zip
$ ls

Download Ngrok Tool

Once you have the binary file, let’s create a basic index.html page in the web server’s (Apache) default document root for testing requests to the web server.

$ sudo vi /var/www/html/index.html

Add the following HTML content in the file.

<!DOCTYPE html>
<html>
        <body>
                <h1>This is a TecMint.com Dummy Site</h1>
                <p>We are testing Ngrok reverse proxy server.</p>
        </body>
</html>

Save the file and launch ngrok, specifying HTTP port 80 (if you have configured your web server to listen on another port, use that port instead):

$ ngrok http 80

Once you start it, you should see an output similar to the one below in your terminal.


Launch Ngrok on Terminal
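Since HTTP authentication is one of the features listed earlier, it is worth noting that the ngrok 2.x agent (the stable build downloaded above) can password-protect a tunnel directly from the command line; newer releases use a --basic-auth flag instead, so check ngrok http --help for your version:

$ ngrok http -auth="user:password" 80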

How to Inspect Traffic to Your Web Server Using Ngrok UI

Ngrok offers a simple web UI for you to inspect all of the HTTP traffic running over your tunnels in real-time.

http://localhost:4040 

Ngrok Web Interface

From the output above, no requests have been made to the server yet. To get started, make a request to one of your tunnels using the URLs below. Other users will also use these addresses to access your site or app.

http://9ea3e0eb.ngrok.io 
OR
https://9ea3e0eb.ngrok.io 

Check Local Website Over Ngrok

Then check from the inspection UI to get all of the details of the request and response including the time, client IP address, duration, headers, request URI, request payload and the raw data.


Check Website Requests
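The local agent also exposes a small JSON API on the same inspection port, which is handy for scripting; for example, you can list the active tunnels and their public URLs with:

$ curl http://localhost:4040/api/tunnels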

For more information, see the Ngrok Homepage: https://ngrok.com/

Ngrok is simply an amazing tool, it is by far the simplest yet powerful secure local tunnel solution you will find out there. You should consider creating a free ngrok account to get more bandwidth, but if you want even more advanced features, try upgrading to a paid account. Remember to share your thoughts about this piece of software, with us via the comment form below.