How to set up a VSFTP and SFTP server on CentOS 7

An FTP server is used to exchange files between computers over a network. This guide helps you set up an FTP server on CentOS 7 and contains configuration steps for both FTP and SFTP, as well as user creation. The VSFTPD package is used here because it is secure and less vulnerable.
1. FTP Server
2. SFTP Server
3. User creation

Step 1 » Update your repositories and install the VSFTPD package.
[root@krizna ~]# yum check-update
[root@krizna ~]# yum -y install vsftpd

Step 2 » After installation you will find /etc/vsftpd/vsftpd.conf, the main configuration file for VSFTPD.
Take a backup copy before making changes.
[root@krizna ~]# mv /etc/vsftpd/vsftpd.conf /etc/vsftpd/vsftpd.conf.org
Now open the file and make the changes below.
[root@krizna ~]# nano /etc/vsftpd/vsftpd.conf
Find the line anonymous_enable=YES ( Line no : 12 ) and change the value to NO to disable anonymous FTP access.
anonymous_enable=NO
Uncomment the line below ( Line no : 100 ) to restrict users to their home directories.
chroot_local_user=YES
Then add the lines below at the end of the file to enable passive mode and make the chroot writable.
allow_writeable_chroot=YES
pasv_enable=Yes
pasv_min_port=40000
pasv_max_port=40100
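
For reference, after these edits the relevant part of /etc/vsftpd/vsftpd.conf should read roughly as follows; this is only a summary of the directives changed above, nothing new:

# disable anonymous logins
anonymous_enable=NO
# jail local users to their home directories and allow writing inside the chroot
chroot_local_user=YES
allow_writeable_chroot=YES
# passive mode port range (must also be reachable through the firewall)
pasv_enable=YES
pasv_min_port=40000
pasv_max_port=40100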

Step 3 » Now restart the vsftpd service and make it start automatically after reboot.
[root@krizna ~]# systemctl restart vsftpd.service
[root@krizna ~]# systemctl enable vsftpd.service

Step 4 » Add the FTP service to the firewall to allow FTP ports.
[root@krizna ~]# firewall-cmd --permanent --add-service=ftp
[root@krizna ~]# firewall-cmd --reload
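
The ftp firewalld service covers port 21 and the FTP connection-tracking helper. If passive transfers are still blocked in your environment, you may also need to open the passive port range configured in Step 2; a sketch, assuming the 40000-40100 range used above:

[root@krizna ~]# firewall-cmd --permanent --add-port=40000-40100/tcp
[root@krizna ~]# firewall-cmd --reload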

Step 5 » Set the SELinux boolean to allow FTP access to users' home directories.
[root@krizna ~]# setsebool -P ftp_home_dir on
Step 6 » Now create a user for FTP access. The /sbin/nologin shell is used to prevent shell access to the server.
[root@krizna ~]# useradd -m dave -s /sbin/nologin
[root@krizna ~]# passwd dave
Now user dave can log in over FTP on port 21.
You can use a FileZilla or WinSCP client to access the files.
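
To quickly verify the account from another machine, a command-line client such as curl can also be used (192.168.1.10 is a placeholder for your server's address):

$ curl -u dave ftp://192.168.1.10/ --list-only
# you will be prompted for dave's password, then the home directory listing is printed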

SFTP server

SFTP ( SSH File Transfer Protocol ) transfers files over an encrypted connection between clients and the server. It is highly recommended over plain FTP because all data travels through an SSH tunnel on port 22.
Basically, we only need the openssh-server package to enable SFTP.
Install the openssh-server package if it is not already installed.
[root@krizna ~]# yum -y install openssh-server
Step 7 » Create a separate group for FTP access.
[root@krizna ~]# groupadd ftpaccess
Step 8 » Now open the /etc/ssh/sshd_config file and make the changes below.
Find and comment out the line below ( Line no : 147 ).
#Subsystem sftp /usr/libexec/openssh/sftp-server
Then add these lines below it.
Subsystem sftp internal-sftp
Match group ftpaccess
ChrootDirectory %h
X11Forwarding no
AllowTcpForwarding no
ForceCommand internal-sftp
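
Before restarting sshd in the next step, it is worth validating the modified configuration so a typo does not lock you out of SSH:

[root@krizna ~]# sshd -t
# no output means the syntax is fine; errors are printed with the offending line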

Step 9 » Now restart the sshd service.
[root@krizna ~]# systemctl restart sshd
Now your SFTP server is configured and ready.

User creation

Step 10 » Create user jack with the /sbin/nologin shell and the ftpaccess group.
[root@krizna ~]# useradd -m jack -s /sbin/nologin -g ftpaccess
[root@krizna ~]# passwd jack
Now assign root ownership of the home directory (required for the chroot) and adjust its permissions.
[root@krizna ~]# chown root /home/jack
[root@krizna ~]# chmod 750 /home/jack
Create a www directory inside the home directory for writing and change its ownership.
[root@krizna ~]# mkdir /home/jack/www
[root@krizna ~]# chown jack:ftpaccess /home/jack/www

Now jack can use both the FTP and SFTP services and can upload files to the www directory.
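
A quick way to confirm the chroot behaves as expected is an interactive SFTP session as jack (replace localhost with your server's address; testfile.txt is just an example file):

$ sftp jack@localhost
sftp> pwd
sftp> cd www
sftp> put testfile.txt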

If you are going to use FTP and SFTP together on the same server, follow the steps above when creating users. For existing users, add them to the ftpaccess group and make the changes below.
[root@krizna ~]# usermod dave -g ftpaccess
[root@krizna ~]# chown root /home/dave
[root@krizna ~]# chmod 750 /home/dave
[root@krizna ~]# mkdir /home/dave/www
[root@krizna ~]# chown dave:ftpaccess /home/dave/www

Most commonly used systemctl commands to manage systemd services and units on CentOS 7

In this guide, we will be discussing the systemctl command, which is the central management tool for controlling the init system. We will cover how to manage services, check statuses, change system states, and work with the configuration files.

Systemd is an init system and system manager that is widely becoming the new standard for Linux machines. While there are considerable opinions about whether systemd is an improvement over the traditional SysV init systems it is replacing, the majority of distributions plan to adopt it or have already done so.

Due to its heavy adoption, familiarizing yourself with systemd is well worth the trouble, as it will make administering these servers considerably easier. Learning about and utilizing the tools and daemons that comprise systemd will help you better appreciate the power, flexibility, and capabilities it provides, or at least help you to do your job with minimal hassle.


Service Management
The fundamental purpose of an init system is to initialize the components that must be started after the Linux kernel is booted (traditionally known as “userland” components). The init system is also used to manage services and daemons for the server at any point while the system is running. With that in mind, we will start with some simple service management operations.

In systemd, the target of most actions are “units”, which are resources that systemd knows how to manage. Units are categorized by the type of resource they represent and they are defined with files known as unit files. The type of each unit can be inferred from the suffix on the end of the file.

For service management tasks, the target unit will be service units, which have unit files with a suffix of .service. However, for most service management commands, you can actually leave off the .service suffix, as systemd is smart enough to know that you probably want to operate on a service when using service management commands.

Starting and Stopping Services

To start a systemd service, executing instructions in the service's unit file, use the start command. If you are running as a non-root user, you will have to use sudo, since this will affect the state of the operating system:

systemctl start application.service

As we mentioned above, systemd knows to look for *.service files for service management commands, so the command could just as easily be typed like this:

systemctl start application

Although you may use the above format for general administration, for clarity, we will use the .service suffix for the remainder of the commands to be explicit about the target we are operating on.

To stop a currently running service, you can use the stop command instead:

systemctl stop application.service

Restarting and Reloading

To restart a running service, you can use the restart command:

systemctl restart application.service

If the application in question is able to reload its configuration files (without restarting), you can issue the reload command to initiate that process:

systemctl reload application.service

If you are unsure whether the service has the functionality to reload its configuration, you can issue the reload-or-restart command. This will reload the configuration in-place if available. Otherwise, it will restart the service so the new configuration is picked up:

systemctl reload-or-restart application.service

Enabling and Disabling Services

The above commands are useful for starting or stopping services during the current session. To tell systemd to start services automatically at boot, you must enable them.

To start a service at boot, use the enable command:

systemctl enable application.service

This will create a symbolic link from the system's copy of the service file (usually in /lib/systemd/system or /etc/systemd/system) into the location on disk where systemd looks for autostart files (usually /etc/systemd/system/some_target.target.wants; we will go over what a target is later in this guide).
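
For example, enabling the vsftpd.service unit from the first guide typically creates a link like the one below; the exact .wants directory depends on the WantedBy= setting in the unit file:

systemctl enable vsftpd.service
ls -l /etc/systemd/system/multi-user.target.wants/vsftpd.service
# the symlink points back to the unit file under /usr/lib/systemd/system/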

To disable the service from starting automatically, you can type:

systemctl disable application.service

This will remove the symbolic link that indicated that the service should be started automatically.

Keep in mind that enabling a service does not start it in the current session. If you wish to start the service and enable it at boot, you will have to issue both the start and enable commands.
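
A few related queries are handy when working with start and enable (sshd.service is just an example unit):

systemctl status sshd.service      # current state plus recent journal lines
systemctl is-active sshd.service   # prints "active" or "inactive"
systemctl is-enabled sshd.service  # prints "enabled" or "disabled"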

configure: error: no acceptable C compiler found in $PATH

wget http://dl.fedoraproject.org/pub/epel/7/x86_64/s/strongswan-5.4.0-2.el7.x86_64.rpm
rpm -ivh strongswan-5.4.0-2.el7.x86_64.rpm
yum install -y gmp-devel
yum install libxml2-devel openssl-devel
wget http://download.strongswan.org/strongswan.tar.gz
tar -xzvf strongswan.tar.gz
cd strongswan-5.5.1/
./configure --sysconfdir=/etc --enable-openssl --enable-nat-transport --disable-mysql --disable-ldap --disable-static --enable-shared --enable-md4 --enable-eap-mschapv2 --enable-eap-aka --enable-eap-aka-3gpp2 --enable-eap-gtc --enable-eap-identity --enable-eap-md5 --enable-eap-peap --enable-eap-radius --enable-eap-sim --enable-eap-sim-file --enable-eap-simaka-pseudonym --enable-eap-simaka-reauth --enable-eap-simaka-sql --enable-eap-tls --enable-eap-tnc --enable-eap-ttls
configure: WARNING: unrecognized options: --enable-nat-transport
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking whether UID '0' is supported by ustar format... yes
checking whether GID '0' is supported by ustar format... yes
checking how to create a ustar tar archive... gnutar
checking whether make supports nested variables... (cached) yes
checking for pkg-config... /usr/bin/pkg-config
checking pkg-config is at least version 0.9.0... yes
checking for a sed that does not truncate output... /usr/bin/sed
checking configured UDP ports (500, 4500)... ok
checking for gcc... no
checking for cc... no
checking for cl.exe... no
configure: error: in `/data/soft/strongswan-5.5.1':
configure: error: no acceptable C compiler found in $PATH
See `config.log' for more details


You need to install a C compiler before you can compile. Either of the following should work:

yum install gcc
or

yum groupinstall "Development tools"
After that, try to run your compiler to make sure everything is aligned:

gcc
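
If gcc now prints a version string, the compiler is on your PATH and you can re-run the configure command above from the strongSwan source directory. A quick check:

gcc --version
which gcc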

How to remove non-printable characters from expect output files

Question:

Some character sequences from my files:
^[[64;8H, ^[[?25h, ^[[1;64r etc. How can I remove them, or avoid having expect add them in the first place?

Background:

I'm using a collection of expect scripts for certain tasks.

The output files I'm collecting often contain the above type of characters (as displayed in emacs / vi / cat -v).
I've tried a number of tr commands like the following, but they only make the [64;8H etc. visible.

tr -dc '[:print:]\n' < input

EDIT: The result of the above on a problematic line:

[1;64r[64;1H[64;1H[2K[64;1H[?25h[64;1H[64;.....

Answer:

If you want to remove escape sequences as well, you can use the following sed snippet:

sed -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]//g"

Or the perl way:

perl -pe '
  s/\033\[(\d+;)*\d*[[:alpha:]]//g;
  s/\033\]0;//g;
  s/\x7//g;
  s/\033\(B//g;
' expect.log
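
Note that the sed expression above only strips sequences ending in m or K, while the sequences from the question (^[[64;8H, ^[[?25h and so on) end in other letters. A broader pattern is sketched below; it is an assumption on my part and may remove more than you want, so adjust it to taste (expect.clean is just a placeholder output name):

# strip any CSI escape sequence (ESC [ ... letter) from the expect log
sed -r "s/\x1B\[[0-9;?]*[A-Za-z]//g" expect.log > expect.clean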

WordPress Features Will Require Hosts to Support HTTPS


WordPress creator and Automattic founder Matt Mullenweg announced today that upcoming versions of the WordPress CMS would include features that would require hosts to support HTTPS.
Without providing any details on what these features are, Mullenweg said that it was time for the WordPress team to start pushing their followers to implement HTTPS for their sites.
“Just as JavaScript is a near necessity for smoother user experiences and more modern PHP versions are critical for performance,” Mullenweg explained, “SSL just makes sense as the next hurdle our users are going to face.”
WordPress.com already provides free HTTPS
WordPress is currently available as an open-source CMS provided by The WordPress Foundation, but also as a hosted blogging platform provided by Automattic.
In April 2016, Automattic announced free HTTPS for the majority of WordPress.com blogs via Let’s Encrypt, a joint EFF-Mozilla project that provides free SSL certificates for any site that wishes to support HTTPS.
Starting in early 2017, The WordPress Foundation, through its wordpress.org project, will start to promote hosting platforms that provide an SSL certificate for their clients. This is because future WordPress versions would "require hosts to have HTTPS available," and the WordPress team would like to see as many hosting providers and clients as possible start to migrate their sites to HTTPS in the meantime.

How to calculate innodb_buffer_pool_size for MySQL

A lot of people suggest that innodb_buffer_pool_size should be up to 80% of total memory.

A more precise approach is to compute the RIBPS, the Recommended InnoDB Buffer Pool Size, based on the size of all InnoDB data and indexes with an additional 60% headroom.

For example:

mysql> SELECT CEILING(Total_InnoDB_Bytes*1.6/POWER(1024,3)) RIBPS FROM
    -> (SELECT SUM(data_length+index_length) Total_InnoDB_Bytes
    -> FROM information_schema.tables WHERE engine='InnoDB') A;
+-------+
| RIBPS |
+-------+
|     8 |
+-------+
1 row in set (4.31 sec)

With this output, you would set the following in /etc/my.cnf

[mysqld]
innodb_buffer_pool_size=8G
Next, service mysql restart
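
On CentOS 7 the SysV-style restart may not be available; the systemd equivalent is one of the following, depending on whether you run MySQL or MariaDB:

systemctl restart mysqld
# or, for MariaDB:
systemctl restart mariadb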

After the restart, run mysql for a week or two. Then, run this query:

SELECT (PagesData*PageSize)/POWER(1024,3) DataGB FROM
(SELECT variable_value PagesData
FROM information_schema.global_status
WHERE variable_name='Innodb_buffer_pool_pages_data') A,
(SELECT variable_value PageSize
FROM information_schema.global_status
WHERE variable_name='Innodb_page_size') B;

This tells you how many GB of memory are actually in use by InnoDB data in the InnoDB Buffer Pool at this moment.
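
To double-check which value the running server actually picked up after the restart, a quick query from the shell works (a sketch; add your own credentials and options):

mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"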

How to install MySQL 5.5.15 from source on CentOS

yum install gcc gcc-c++
yum install ncurses-devel

mkdir -p /tmp
cd /tmp
wget http://dev.mysql.com/get/Downloads/MySQL-5.5/mysql-5.5.15.tar.gz/from/http://mysql.he.net/
wget http://www.cmake.org/files/v2.8/cmake-2.8.4.tar.gz
wget http://ftp.gnu.org/gnu/bison/bison-2.5.tar.gz

cd /tmp
tar zxvf cmake-2.8.4.tar.gz
cd cmake-2.8.4
./configure
make
make install

cd /tmp
tar zxvf bison-2.5.tar.gz
cd bison-2.5
./configure
make
make install

/usr/sbin/groupadd mysql
/usr/sbin/useradd -g mysql mysql
cd /tmp
tar xvf mysql-5.5.15.tar.gz
cd mysql-5.5.15/
cmake -DCMAKE_INSTALL_PREFIX=/usr/local/mysql \
-DMYSQL_UNIX_ADDR=/tmp/mysql.sock \
-DDEFAULT_CHARSET=utf8 \
-DDEFAULT_COLLATION=utf8_general_ci \
-DWITH_EXTRA_CHARSETS=all \
-DWITH_MYISAM_STORAGE_ENGINE=1 \
-DWITH_INNOBASE_STORAGE_ENGINE=1 \
-DWITH_MEMORY_STORAGE_ENGINE=1 \
-DWITH_READLINE=1 \
-DENABLED_LOCAL_INFILE=1 \
-DMYSQL_DATADIR=/var/mysql/data \
-DMYSQL_USER=mysql

This was my own configuration; you can ignore it:

cmake -DCMAKE_INSTALL_PREFIX=/ssdc/mysql -DMYSQL_UNIX_ADDR=/tmp/mysql5.5.sock -DDEFAULT_CHARSET=utf8 -DDEFAULT_COLLATION=utf8_general_ci -DWITH_EXTRA_CHARSETS=all -DWITH_MYISAM_STORAGE_ENGINE=1 -DWITH_INNOBASE_STORAGE_ENGINE=1 -DWITH_READLINE=1 -DENABLED_LOCAL_INFILE=1 -DMYSQL_DATADIR=/ssdc/mysql/data

 

make
make install

Other preparation jobs you need to do:
chmod +w /usr/local/mysql
chown -R mysql:mysql /usr/local/mysql
ln -s /usr/local/mysql/lib/libmysqlclient.so.16 /usr/lib/libmysqlclient.so.16
mkdir -p /var/mysql/
mkdir -p /var/mysql/data/
mkdir -p /var/mysql/log/
chown -R mysql:mysql /var/mysql/
cd support-files/
cp my-large.cnf /var/mysql/my.cnf
cp mysql.server /etc/rc.d/init.d/mysqld

/usr/local/mysql/scripts/mysql_install_db \
--defaults-file=/var/mysql/my.cnf \
--basedir=/usr/local/mysql \
--datadir=/var/mysql/data \
--user=mysql
–user=mysql

chmod +x /etc/init.d/mysqld

chkconfig --add mysqld
chkconfig --level 345 mysqld on
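
With the init script registered, a possible next step (assuming the /usr/local/mysql prefix and init script name used above) is to start the server and set a root password:

service mysqld start
# 'yourpassword' is a placeholder; pick a real password
/usr/local/mysql/bin/mysqladmin -u root password 'yourpassword'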

Three useful firewall-cmd commands on CentOS 7

Remove a service from the public zone:
firewall-cmd --zone=public --remove-service=ssh --permanent
firewall-cmd --reload

Add an IP range to the trusted zone and add the ssh service to it:

firewall-cmd --permanent --zone="trusted" --add-source="123.51.11.2/32"
firewall-cmd --permanent --zone="trusted" --add-service="ssh"
firewall-cmd --reload
firewall-cmd --list-all-zones

Add another IP range to the trusted zone:

firewall-cmd --permanent --zone="trusted" --add-source="172.30.30.0/24"
firewall-cmd --reload
firewall-cmd --list-all-zones
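
To confirm that only the sources and services you expect ended up in the trusted zone, you can inspect it directly:

firewall-cmd --get-active-zones
firewall-cmd --zone=trusted --list-all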

The version of find doesn't support the -newermt predicate

If your version of find doesn't support the -newermt predicate, you can use the -newer predicate. That predicate needs a file as a reference: instead of an absolute modification date, it will use the modification date of the file. You can create appropriate "marker files" for this purpose, for example:

touch /tmp/mark.start -d "2016-11-22 10:00"
touch /tmp/mark.end -d "2016-11-23 23:00"
And then rewrite using -newer predicate:

find /some/path -newer /tmp/mark.start ! -newer /tmp/mark.end
Finally, your tar won't work if the argument list is too long and xargs splits it into multiple executions, because each execution will recreate the tar file. You need to use the -T flag of tar instead of xargs:

find /some/path -print0 | tar acf out.tar.gz --null -T -
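
Putting the marker files and tar's -T flag together, the whole selection and archive step can be done in one pipeline (same placeholder paths as above):

find /some/path -type f -newer /tmp/mark.start ! -newer /tmp/mark.end -print0 \
  | tar acf out.tar.gz --null -T -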

More examples:

Find files newer than "start" and older than "end"

touch /tmp/mark.start -d "2016-02-16 00:00"
touch a -d "2016-02-15 00:01"
touch b -d "2016-02-16 00:01"
touch c -d "2016-02-17 00:00"
touch d -d "2016-02-18 00:00"
touch e -d "2016-02-19 00:01"
touch /tmp/mark.end -d "2016-02-19 00:00"

Command: find . -type f -newer /tmp/mark.start ! -newer /tmp/mark.end

========================================================================

Output:

-bash-3.2$ find . -type f -newer /tmp/mark.start ! -newer /tmp/mark.end
./d
./b
./c
-bash-3.2$

How to Use SSL Certificates with HAProxy

Overview

If your application makes use of SSL certificates, then some decisions need to be made about how to use them with a load balancer.

A simple setup of one server usually sees a client’s SSL connection being decrypted by the server receiving the request. Because a load balancer sits between a client and one or more servers, where the SSL connection is decrypted becomes a concern.

There are two main strategies.

SSL Termination is the practice of terminating/decrypting an SSL connection at the load balancer, and sending unencrypted connections to the backend servers.

This means the load balancer is responsible for decrypting an SSL connection – a slow and CPU intensive process relative to accepting non-SSL requests.

This is the opposite of SSL Pass-Through, which sends SSL connections directly to the proxied servers.

With SSL-Pass-Through, the SSL connection is terminated at each proxied server, distributing the CPU load across those servers. However, you lose the ability to add or edit HTTP headers, as the connection is simply routed through the load balancer to the proxied servers.

This means your application servers will lose the ability to get the X-Forwarded-* headers, which may include the client’s IP address, port and scheme used.

Which strategy you choose is up to you and your application's needs. SSL Termination is the most typical setup I've seen, but pass-through is likely more secure.

There is a combination of the two strategies, where SSL connections are terminated at the load balancer, adjusted as needed, and then proxied off to the backend servers as a new SSL connection. This may provide the best of both security and ability to send the client’s information. The trade off is more CPU power being used all-around, and a little more complexity in configuration.

HAProxy with SSL Termination

We’ll cover the most typical use case first – SSL Termination. As stated, we need to have the load balancer handle the SSL connection. This means having the SSL Certificate live on the load balancer server.

We saw how to create a self-signed certificate in a previous edition of SFH. We’ll re-use that information for setting up a self-signed SSL certificate for HAProxy to use.

Keep in mind that for a production SSL Certificate (not a self-signed one), you won’t need to generate or sign a certificate yourself – you’ll just need to create a Certificate Signing Request (csr) and pass that to whomever you purchase a certificate from.

First, we'll create a self-signed certificate for *.567ss.com, which is handy for demonstration purposes and lets us use the same certificate even when our server IP addresses change while testing locally. For example, if our local server exists at 192.168.33.10, but then our Virtual Machine IP changes to 192.168.33.11, we don't need to re-create the self-signed certificate.

I use the 567ss.com service as it allows us to use a hostname rather than directly accessing the servers via an IP address, all without having to edit my computer's hosts file.

As this process is outlined in a past edition on SSL certificates, I'll simply show the steps to generate a self-signed certificate here:

$ sudo mkdir /etc/ssl/567ss.com
$ sudo openssl genrsa -out /etc/ssl/567ss.com/567ss.com.key 1024
$ sudo openssl req -new -key /etc/ssl/567ss.com/567ss.com.key \
    -out /etc/ssl/567ss.com/567ss.com.csr
> Country Name (2 letter code) [AU]:US
> State or Province Name (full name) [Some-State]:Connecticut
> Locality Name (eg, city) []:New Haven
> Organization Name (eg, company) [Internet Widgits Pty Ltd]:SFH
> Organizational Unit Name (eg, section) []:
> Common Name (e.g. server FQDN or YOUR name) []:*.567ss.com
> Email Address []:
> Please enter the following 'extra' attributes to be sent with your certificate request
> A challenge password []:
> An optional company name []:
$ sudo openssl x509 -req -days 365 -in /etc/ssl/567ss.com/567ss.com.csr \
    -signkey /etc/ssl/567ss.com/567ss.com.key \
    -out /etc/ssl/567ss.com/567ss.com.crt

This leaves us with a 567ss.com.csr, 567ss.com.key and 567ss.com.crt file.

Next, after the certificates are created, we need to create a pem file. A pem file is essentially just the certificate, the key and optionally certificate authorities concatenated into one file. In our example, we’ll simply concatenate the certificate and key files together (in that order) to create a 567ss.com.pem file. This is HAProxy’s preferred way to read an SSL certificate.

$ sudo cat /etc/ssl/567ss.com/567ss.com.crt /etc/ssl/567ss.com/567ss.com.key \
    | sudo tee /etc/ssl/567ss.com/567ss.com.pem

When purchasing a real certificate, you won't necessarily get a concatenated "bundle" file. You may have to concatenate them yourself. However, many providers do supply a bundle file. If they do, it might not be a pem file, but instead a bundle, cert, cert, key file or some similar name for the same concept. This Stack Overflow answer (see the resources below) explains that nicely.
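
For a purchased certificate, the concatenation might look like the sketch below; the file names are hypothetical, and the usual order is the certificate, then any intermediate/CA chain, then the private key:

$ sudo cat example.com.crt intermediate-ca.crt example.com.key \
    | sudo tee /etc/ssl/example.com/example.com.pem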

In any case, once we have a pem file for HAproxy to use, we can adjust our configuration just a bit to handle SSL connections.

We’ll setup our application to accept both http and https connections. In the last edition on HAProxy, we had this frontend:

frontend localnodes
    bind *:80
    mode http
    default_backend nodes

To terminate an SSL connection in HAProxy, we can now add a binding to the standard SSL port 443, and let HAProxy know where the SSL certificates are:

frontend localhost
    bind *:80
    bind *:443 ssl crt /etc/ssl/567ss.com/567ss.com.pem
    mode http
    default_backend nodes

In the above example, we’re using the backend “nodes”. The backend, luckily, doesn’t really need to be configured in any particular way. In the previous edition on HAProxy, we had the backend like so:

backend nodes
    mode http
    balance roundrobin
    option forwardfor
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    server web01 172.17.0.3:9000 check
    server web02 172.17.0.3:9001 check
    server web03 172.17.0.3:9002 check
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }

Because the SSL connection is terminated at the Load Balancer, we’re still sending regular HTTP requests to the backend servers. We don’t need to change this configuration, as it works the same!
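
Whenever the configuration changes, it's worth validating it before reloading HAProxy; this assumes the standard /etc/haproxy/haproxy.cfg location:

$ haproxy -c -f /etc/haproxy/haproxy.cfg
$ sudo service haproxy reload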

SSL Only

If you’d like the site to be SSL-only, you can add a redirect directive to the frontend configuration:

frontend localhost
    bind *:80
    bind *:443 ssl crt /etc/ssl/567ss.com/567ss.com.pem
    redirect scheme https if !{ ssl_fc }
    mode http
    default_backend nodes

Above, we added the redirect directive, which will redirect from "http" to "https" if the connection was not made over SSL. More information on ssl_fc is available in the HAProxy documentation.

HAProxy with SSL Pass-Through

With SSL Pass-Through, we’ll have our backend servers handle the SSL connection, rather than the load balancer.

The job of the load balancer then is simply to proxy a request off to its configured backend servers. Because the connection remains encrypted, HAProxy can’t do anything with it other than redirect a request to another server.

In this setup, we need to use TCP mode over HTTP mode in both the frontend and backend configurations. HAProxy will treat the connection as just a stream of information to proxy to a server, rather than use its functions available for HTTP requests.

First, we’ll tweak the frontend configuration:

frontend localhost
    bind *:80
    bind *:443
    option tcplog
    mode tcp
    default_backend nodes

This still binds to both port 80 and port 443, giving the opportunity to use both regular and SSL connections.

As mentioned, to pass a secure connection off to a backend server without encrypting it, we need to use TCP mode (mode tcp) instead. This also means we need to set the logging to tcp instead of the default http (option tcplog). Read more on log formats here to see the difference between tcplog and httplog.

Next, we need to tweak our backend configuration. Notably, we once again need to change this to TCP mode, and we remove some directives to reflect the loss of ability to edit/add HTTP headers:

backend nodes
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server web01 172.17.0.3:443 check
    server web02 172.17.0.4:443 check

As you can see, this is set to mode tcp – Both frontend and backend configurations need to be set to this mode.

We also remove option forwardfor and the http-request options – these can’t be used in TCP mode, and we couldn’t inject headers into a request that’s encrypted anyway.

For health checks, we can use ssl-hello-chk which checks the connection as well as its ability to handle SSL (SSLv3 specifically) connections.

In this example, I have two fictitious backend servers that accept SSL connections. If you've read the edition on SSL certificates, you can see how to integrate them with Apache or Nginx in order to create a web server backend that handles SSL traffic. With SSL Pass-Through, no SSL certificates need to be created or used within HAProxy. The backend servers can handle SSL connections just as they would if there were only one server in the stack without a load balancer.

 

Resources

http://blog.haproxy.com/2012/09/10/how-to-get-ssl-with-haproxy-getting-rid-of-stunnel-stud-nginx-or-pound/

 

http://serverfault.com/questions/9708/what-is-a-pem-file-and-how-does-it-differ-from-other-openssl-generated-key-file