Setting Up a Server Cluster for Enterprise Web Apps – Part 1

Summary: In this three-part tutorial, we will discover how to set up a server cluster using Alibaba Cloud ECS and WordPress.

By Jeff Cleverley, Alibaba Cloud Tech Share Author

In this series of tutorials, we will set up a horizontally scalable server cluster suitable for high-traffic web applications and enterprise business sites. It will consist of 3 web application servers and 1 load balancing server. Although we will be setting up and installing WordPress on the cluster, the cluster configuration detailed here is suitable for almost any PHP-based web application.

Each server will be running a LEMP stack (Linux, Nginx, MySQL, PHP). We will use Percona XtraDB Cluster as a drop-in replacement for MySQL, which will provide real-time database synchronization between the servers. For web application file replication and synchronization between servers, we will be using GlusterFS.

NGINX as the Load Balancer

For a load balancer, we will use a lightweight Nginx server which will also perform HTTPS termination. This task could be completed with an Alibaba Cloud Server Load Balancer, but I want control over HTTPS termination and SSL. Therefore, I have chosen to provision a separate server and demonstrate how this can be configured to do the job. We will be using Let’s Encrypt to issue an SSL certificate to the Load Balancer.

The Cluster’s initial configuration will balance the HTTP request load equally between the 3 application servers like so:

<Equally balanced three Node Server Cluster with Load Balancer>

This is a great configuration that provides excellent redundancy should any of the servers fail or if you want to stage an upgrade of your servers. However, one of the other advantages of using a server cluster with load balancing is that we can ‘weight’ traffic to each server, or route specific traffic to any individual node.

If a web application is being used in an enterprise at any scale, there is a high probability that either the administrators of the web application or other back-end users are doing lots of data processing or other CPU-intensive tasks. Batch processing jobs occurring on the admin side of the web application, such as importing, exporting, or performing calculations, could over-utilize the server's resources and lead to a slowdown on the front, visitor-facing end of the site.

Using a server cluster and effective load balancing, we can architect a solution that separates these concerns: directing administration and back-end user traffic to one server, where batch processing and other intensive tasks can be performed, while directing all other site visitors and web traffic to the other servers. As long as we have an effective solution for database and file replication, and the three servers are kept synchronized, web traffic will not be affected and will receive the results of any work being done on the administration node.

Such a solution can be visualized like so:

<Three Node Cluster with Load Balancer redirecting Admin traffic>

As you can see from above, Node 1 is now only used for administration, while Nodes 2 and 3 will serve web traffic.
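To make this concrete, here is a minimal, hypothetical sketch of what such routing looks like in Nginx terms. The upstream names and the /wp-admin location are illustrative assumptions on my part, not the final configuration; we will build the real load balancer configuration step by step in Parts 2 and 3:

# Hypothetical sketch only; belongs inside the http block of an Nginx config
upstream admin_node {
    server 172.20.62.56;       # node1: administration and batch work
}

upstream web_nodes {
    server 172.20.213.159;     # node2: visitor traffic
    server 172.20.213.160;     # node3: visitor traffic
}

server {
    listen 80;
    server_name yet-another-example.com;

    # Send WordPress admin requests to the administration node
    location /wp-admin {
        proxy_pass http://admin_node;
    }

    # Everything else is balanced across the visitor-facing nodes
    location / {
        proxy_pass http://web_nodes;
    }
}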

In these tutorials, we will configure our cluster in both ways, one after the other.

In this series of tutorials I will be using the root user; if you are using a superuser, please remember to add the sudo command before any commands where necessary. I will also be using a test domain, yet-another-example.com; remember to replace this with your own domain when issuing commands.

In the commands I will also be using my servers' private and public IP addresses; please remember to use your own when following along.

Step 1: In the Alibaba Cloud Console

Prepare your servers

You will need to provision 4 Alibaba Cloud ECS instances, of whatever size best fits your needs.

For the purposes of these tutorials, you should upload your public SSH key to each server as it is created.

It is best to change their Hostnames to better illustrate their part in the cluster. I have chosen to give mine the following hostnames:

  • node1
  • node2
  • node3
  • load-balancer

Since we will be working in the terminal on 4 different servers, it is best to name them properly to avoid issuing commands on the wrong instances.

To do that, click Modify Information on each server and give each an appropriate hostname:

<Modify the information of each server>

<Give each server an appropriate name for its part in the cluster>

You also need to make sure to take a record of each of your servers' private IP addresses.
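You can copy these from the console, or query each instance for its own address via the ECS instance metadata service. A quick sketch, assuming the standard Alibaba Cloud metadata endpoint is reachable from the instance:

# curl http://100.100.100.200/latest/meta-data/private-ipv4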

Open ports in your security group

In addition to the usual ports, we will need to open several others. These are required for the Percona XtraDB Cluster database on each node to communicate with the other nodes via their private IP addresses. Percona requires the following ports to be opened:

  • 3306 TCP Inbound/Outbound (Standard MySQL port)
  • 4444 TCP Inbound/Outbound (Percona Cluster port)
  • 4567 TCP Inbound/Outbound (Percona Cluster port)
  • 4568 TCP Inbound/Outbound (Percona Cluster port)
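These rules can be added through the console as shown below, or, if you prefer, with the Alibaba Cloud CLI. A minimal sketch, assuming the aliyun CLI is installed and configured, and with your-region-id and sg-xxxxxxxx as hypothetical placeholders for your region and security group ID:

$ aliyun ecs AuthorizeSecurityGroup --RegionId your-region-id --SecurityGroupId sg-xxxxxxxx --IpProtocol tcp --PortRange 3306/3306 --SourceCidrIp 172.20.0.0/16
$ aliyun ecs AuthorizeSecurityGroupEgress --RegionId your-region-id --SecurityGroupId sg-xxxxxxxx --IpProtocol tcp --PortRange 3306/3306 --DestCidrIp 172.20.0.0/16

Repeat for ports 4444, 4567, and 4568. Restricting the source to your private subnet keeps the database ports off the public internet.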

Your security group inbound port configuration should now look like this:

<Security Group Inbound Port Configuration>

Your security group outbound port configuration should now look like this:

<Security Group Outbound Port Configuration>

Add your domain

You should add the domain you will use in the Alibaba Cloud DNS:

<Add Domain>

For the 'A' record, set the value to the public IP address of the server that will be acting as the load balancer:

<Add A record with IP Address of Load Balancing Server>

Make sure to remember to add the 'CNAME' record for the 'www' host; this will be required later when we issue our SSL certificate:

<Add the WWW host record too>
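Once the records are in place, you can confirm they resolve correctly from your local machine (dig is assumed to be installed; nslookup works just as well):

$ dig +short yet-another-example.com
$ dig +short www.yet-another-example.com

The first should return your load balancer's public IP address, and the second should show the www host resolving through the CNAME to the same address.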

Step 2: Install and Configure Percona XtraDB Cluster

Percona provides an excellent, highly optimized drop-in replacement for MySQL. In today's tutorial we will be using their variant created specifically for database replication and synchronization: Percona XtraDB Cluster.

In Percona’s own words:

“Percona XtraDB Cluster is an active/active high availability and high scalability open source solution for MySQL ® clustering. It integrates Percona Server and Percona XtraBackup with the Codership Galera library of MySQL high availability solutions in a single package that enables you to create a cost-effective MySQL high availability cluster.”

But before we do that:

Check Private Networking is Working

Login to each of your nodes:

$ ssh root@node_public_ip_address

Now, on each of your server nodes, try to ping the other nodes. In my case, to ping node2 from node1:

# ping 172.20.213.159

If you have a private network connection, you should see some metrics confirming the packet send and receive:

<Ping your other nodes' private IP addresses from each node>

Install Percona XtraDB Cluster

On each of your nodes, issue the following command:

# wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb
# dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb
# apt-get update
# apt-get upgrade

And then finally install the package:

# apt-get install percona-xtradb-cluster-57

During the installation, you will be asked to set up a new root password for the cluster. Make sure you use the same root password for each installation, otherwise you will cause yourself some difficulties later.

Configure Database Replication

Create a Replication User

To do this we need a replication user, but we only need to configure this on node1:

Log in to MySQL using your root password:

# mysql -u root -p

Now create a new user and password for replication purposes; make sure to keep a note of these, as we will need them soon. You should use a strong password:

CREATE USER 'new_user'@'localhost' IDENTIFIED BY 'new_users_password';

Now grant all the privileges the replication user requires:

GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'new_user'@'localhost';

Then flush privileges and exit:

FLUSH PRIVILEGES;
EXIT;

Your terminal should look like this:

<Create your Database Replication User>

Customise Replication configuration files

On each node open the ‘wsrep.cnf’ replication configuration file for editing:

# nano /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf

You will need to enter slightly different parameters as follows:

1. On every node's configuration file, enter the 3 private IP addresses, separated by commas, for wsrep_cluster_address.
2. Enter that node's private IP address for wsrep_node_address.
3. Enter a different node name for wsrep_node_name.
4. On every node's configuration file, enter the replication user and password you created earlier for wsrep_sst_auth.

As follows:

Node1 Configuration

[mysqld]
# Path to Galera library
wsrep_provider=/usr/lib/galera3/libgalera_smm.so

# Cluster connection URL contains IPs of nodes
#If no IP is found, this implies that a new cluster needs to be created,
#in order to do that you need to bootstrap this node
wsrep_cluster_address=gcomm://172.20.62.56,172.20.213.159,172.20.213.160

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB

# Slave thread to use
wsrep_slave_threads= 8

wsrep_log_conflicts

# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2

# Node IP address
wsrep_node_address=172.20.62.56
# Cluster name
wsrep_cluster_name=pxc-cluster

#If wsrep_node_name is not specified,  then system hostname will be used
wsrep_node_name=pxc-cluster-node-1

#pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER
pxc_strict_mode=ENFORCING

# SST method
wsrep_sst_method=xtrabackup-v2

#Authentication for SST method
wsrep_sst_auth=new_user:new_users_password

Node2 Configuration

[mysqld]
# Path to Galera library
wsrep_provider=/usr/lib/galera3/libgalera_smm.so

# Cluster connection URL contains IPs of nodes
#If no IP is found, this implies that a new cluster needs to be created,
#in order to do that you need to bootstrap this node
wsrep_cluster_address=gcomm://172.20.62.56,172.20.213.159,172.20.213.160

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB

# Slave thread to use
wsrep_slave_threads= 8

wsrep_log_conflicts

# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2

# Node IP address
wsrep_node_address=172.20.213.159
# Cluster name
wsrep_cluster_name=pxc-cluster

#If wsrep_node_name is not specified,  then system hostname will be used
wsrep_node_name=pxc-cluster-node-2

#pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER
pxc_strict_mode=ENFORCING

# SST method
wsrep_sst_method=xtrabackup-v2

#Authentication for SST method
wsrep_sst_auth=new_user:new_users_password

Node3 Configuration

[mysqld]
# Path to Galera library
wsrep_provider=/usr/lib/galera3/libgalera_smm.so

# Cluster connection URL contains IPs of nodes
#If no IP is found, this implies that a new cluster needs to be created,
#in order to do that you need to bootstrap this node
wsrep_cluster_address=gcomm://172.20.62.56,172.20.213.159,172.20.213.160

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB

# Slave thread to use
wsrep_slave_threads= 8

wsrep_log_conflicts

# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2

# Node IP address
wsrep_node_address=172.20.213.160
# Cluster name
wsrep_cluster_name=pxc-cluster

#If wsrep_node_name is not specified,  then system hostname will be used
wsrep_node_name=pxc-cluster-node-3

#pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER
pxc_strict_mode=ENFORCING

# SST method
wsrep_sst_method=xtrabackup-v2

#Authentication for SST method
wsrep_sst_auth=new_user:new_users_password

<Enter all the nodes private IP addresses>

<node1 configuration>

<node2 configuration>

<node3 configuration>

Bootstrap the cluster

Bootstrap the cluster by running the following command on node1:

# /etc/init.d/mysql bootstrap-pxc

Your terminal should give you an OK response:

<Bootstrap the Cluster on node1>

You can check it's running, now or at any time, by logging into MySQL and running the following:

show status like 'wsrep%';
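Two values in that output are worth checking in particular. Immediately after bootstrapping, wsrep_cluster_size will be 1; once the other two nodes are started it should read 3, and wsrep_local_state_comment should read Synced on every node:

show status like 'wsrep_cluster_size';
show status like 'wsrep_local_state_comment';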

Now on node2 and node3 run the following:

# /etc/init.d/mysql start

On each server, your terminal should respond with an OK:

<Start the other two nodes in the Percona cluster>

Test Database replication

You should now have a Percona Database cluster with 3 nodes all replicating data to each other, but let’s test that.

On each node, log into mysql:

# mysql -u root -p

Now on each node, show your databases:

SHOW DATABASES;

<Node1 Databases>

<Node2 Databases>

<Node3 Databases>

On node1 only, create a new database called ‘test_sync’:

CREATE DATABASE test_sync;

After doing that, show your databases again on node2 and node3:

SHOW DATABASES;

You should now see that the ‘test_sync’ database you created on node1 has been replicated to both node2 and node3:

<Test_sync has been replicated to node2>

<Test_sync has been replicated to node3>

Success, we have database replication!
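If you want to go a step further, you can confirm that writes replicate in both directions, since Percona XtraDB Cluster is multi-master. Note that with pxc_strict_mode=ENFORCING every table needs a primary key. On node2:

CREATE TABLE test_sync.sync_check (id INT PRIMARY KEY, note VARCHAR(50));
INSERT INTO test_sync.sync_check VALUES (1, 'written on node2');

Then on node3:

SELECT * FROM test_sync.sync_check;

The row should appear immediately, confirming that any node can accept writes.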

Step 3: Install and Configure GlusterFS File Network

GlusterFS is an open-source, scalable network filesystem for data-intensive tasks such as cloud storage and media streaming.

Using GlusterFS ensures that each of our nodes is operating off the same files, and it also provides data safety through redundancy. We will configure the volume with triple redundancy (replica 3), which means that in our 3-node cluster each node will hold a full copy of every file; if we scale up to more nodes, we will start to see space savings.

Install GlusterFS

On every node

First install the Ubuntu 'attr' package, used for working with extended filesystem attributes:

# apt-get install attr

Then install GlusterFS:

# apt-get install glusterfs-server
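Before moving on, it is worth confirming the Gluster daemon is running on each node. Depending on your Ubuntu release, the service may be named glusterfs-server or glusterd:

# service glusterfs-server status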

Create a Gluster Volume and attach Directories

On node1

Run the following commands using the private IP addresses of the other two nodes:

# gluster peer probe 172.20.213.159
# gluster peer probe 172.20.213.160

Peer probe should respond with a success message like so:

<Successful Gluster Peer Probe>
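You can also list the cluster's peers from node1 at any time, to verify both probes succeeded:

# gluster peer status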

After that, create a Gluster volume with the following code, using the private IP addresses of all three nodes:

# gluster volume create glustervolume replica 3 transport tcp 172.20.62.56:/gluster-storage 172.20.213.159:/gluster-storage 172.20.213.160:/gluster-storage force

And then start the volume:

# gluster volume start glustervolume

<Create and Start a Gluster Volume>
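At this point you can inspect the volume and confirm it has started with three replicated bricks:

# gluster volume info glustervolume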

On every node

We are now ready to link directories into our glustervolume.

To do that, create a root directory for our web application in the /var/www/ directory on each node:

# mkdir /var/www/yet-another-example.com -p

Mount the directory and link it to the glustervolume:

# mount -t glusterfs localhost:/glustervolume /var/www/yet-another-example.com

<Create Web Application Root Directories on Each Node and link them up>
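Note that a mount made this way will not survive a reboot on its own. If you want it to persist, one common approach is an /etc/fstab entry on each node along these lines (a sketch; adjust the path to your own domain):

localhost:/glustervolume /var/www/yet-another-example.com glusterfs defaults,_netdev 0 0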

Test File Replication

We should now have a GlusterFS cluster fileserver with redundancy, into which we can install our web application, WordPress. However, before we do that, it makes sense to test it.

On any node

Create a test file within the mounted /var/www/yet-another-example.com directory:

# touch /var/www/yet-another-example.com/test.html

<Create a test file on one node>

On another node

On one of the other nodes, change directories into the mounted directory and list out the contents to ensure the ‘test.html’ file has been replicated:

# cd /var/www/yet-another-example.com && ls

If all is working you should see something like this, showing the test file created on the first node in the other node’s directory:

<Check to see if it's replicated on another node>

Success!

We now have a cluster of servers with working database replication and a distributed filesystem (with redundancy), courtesy of Percona XtraDB Cluster and GlusterFS respectively.

In the next tutorial we will complete the installation of our LEMP stack by installing PHP 7 and Nginx. Then we will configure each of our node servers' Nginx virtual hosts, our load balancer's Nginx configuration, and its SSL certificate, before installing WordPress. By the end of Part 2 we will have a fully working, equally load-balanced server cluster running WordPress, served over HTTPS.

In the final tutorial we will reconfigure the cluster to reserve resource-heavy administration access to one node and weight the other 2 nodes for visitor access. We will also enable FastCGI caching, and harden our cluster by securing our database ports and restricting access to our GlusterFS filesystem.
