Mall Cluster Deployment


# Rough service placement across the nodes:

(deployment diagram)
# Nginx is deployed on Xnode1 and configured for load balancing, distributing requests across the backend service cluster on Xnode2 and Xnode3. This relieves single-point pressure and makes the backend redundant (one backend node going down does not interrupt the business). The Mycat middleware handles database read/write splitting, lowering the read/write load on any single database. Redis keeps heavy concurrent traffic from hitting the database directly (which would exhaust database connections): it buffers requests to reduce database pressure, its cache speeds up responses, and its persistence writes data to disk. (With more machines, it would be better to split each of these services out into its own cluster; containerizing them with Docker as microservices is also a nice approach.) See the gpmall project documentation for background. (The above is my personal way of dividing three hosts; other layouts work too, depending on how you want to deploy. To see why things must be configured exactly this way, check the Jar package analysis section, which I personally recommend reading 😋.)

Environment 🌈:

| Hostname | IP Address     | Specs                         | Project Files |
| -------- | -------------- | ----------------------------- | ------------- |
| xnode1   | 192.168.123.11 | 2 cores, 4 GB RAM, 50 GB disk | under /root   |
| xnode2   | 192.168.123.12 | 1 core, 2 GB RAM, 50 GB disk  |               |
| xnode3   | 192.168.123.13 | 1 core, 2 GB RAM, 50 GB disk  |               |

Xnode1 environment (since we're deploying the mall system as a cluster, we use the frontend files, SQL file, and jar packages from the gpmall-cluster directory):

[root@xnode1 ~]# ll /root/ 
total 4349316
-rw-------. 1 root root       1269 Apr 29  2020 anaconda-ks.cfg
-rw-r--r--. 1 root root 4329570304 Apr 29  2020 CentOS-7-x86_64-DVD-1511.iso
-rw-r--r--. 1 root root   13287936 Apr 29  2020 cirros-0.3.4-x86_64-disk.img
drwxr-xr-x. 3 root root       4096 Apr 29  2020 gpmall-cluster
drwxr-xr-x. 5 root root         50 Apr 29  2020 gpmall-repo
drwxr-xr-x. 3 root root       4096 Apr 29  2020 gpmall-single
-rw-r--r--. 1 root root   57471165 Apr 29  2020 kafka_2.11-1.1.1.tgz
-rw-r--r--. 1 root root   15662280 Apr 29  2020 Mycat-server-1.6-RELEASE-20161028204710-linux.tar.gz
-rw-r--r--. 1 root root       2094 Apr 29  2020 qemu-ifup-NAT
drwxr-xr-x. 3 root root       4096 Apr 29  2020 zabbix
-rw-r--r--. 1 root root   37676320 Apr 29  2020 zookeeper-3.4.14.tar.gz


Base Environment Preparation 👨‍💻


Xnode1 & Xnode2 & Xnode3:

# Configure the hostname mapping file; Xnode1 shown as the example.

[root@xnode1 ~]# vim /etc/hosts
127.0.0.1  localhost localhost.localdomain localhost4 localhost4.localdomain4
::1     localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.123.11 xnode1
192.168.123.12 xnode2
192.168.123.13 xnode3
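
# **(Optional check, my own addition)** Once the mappings are saved on all three nodes, each hostname should resolve and answer; a quick ping is enough to confirm.

[root@xnode1 ~]# ping -c 2 xnode2
[root@xnode1 ~]# ping -c 2 xnode3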

# Stop firewalld, disable it at boot, and set SELinux to permissive mode.

[root@xnode1 ~]# systemctl stop firewalld
[root@xnode1 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.

# Temporarily disable SELinux (look up the enforcing, permissive, and disabled modes yourself if you're curious; no primer here).

[root@xnode1 ~]# setenforce 0
[root@xnode1 ~]# getenforce 
Permissive

# **(Optional)** Permanently disable SELinux (if you don't want to rerun the temporary command after every VM/server reboot, it's best to disable it permanently; if you're not comfortable with sed, edit the file with vi instead).

[root@xnode1 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

# Move away (or delete) the network repos in the yum repo directory so they don't interfere with the installation (ideally create a dedicated folder for them; to keep the steps minimal I just drop them in /mnt, since I'm in the habit of keeping backups).

[root@xnode1 ~]# mv /etc/yum.repos.d/* /mnt/



Xnode1:

# Create a mount directory (folder) and mount the CentOS ISO as a yum repository.

[root@xnode1 ~]# mkdir /opt/centos
[root@xnode1 ~]# mount -o loop CentOS-7-x86_64-DVD-1511.iso /opt/centos/

# Move the /root/gpmall-repo directory to /opt to simplify the vsftpd setup later.

[root@xnode1 ~]# mv /root/gpmall-repo/ /opt/

# Configure the yum repos; vsftpd will later serve them to the other nodes over FTP.

[root@xnode1 ~]# vi /etc/yum.repos.d/local.repo 
[centos]
name=local
baseurl=file:///opt/centos
enabled=1
gpgcheck=0
[gpmall]
name=local
baseurl=file:///opt/gpmall-repo
enabled=1
gpgcheck=0

# **(Optional)** If you don't want to remount the image after every VM/server reboot, set up an automatic mount at boot (be very careful to write the entry correctly, or the system won't boot. Linux will drop into rescue mode to give you a chance to fix /etc/fstab; worst case, delete the line we added so the system can at least boot normally).

[root@xnode1 ~]# vim /etc/fstab 
#
# /etc/fstab
# Created by anaconda on Wed Apr 29 06:12:35 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=1808cacc-4e9d-4dbe-8298-51bc4844c978 /boot                   xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0
/root/CentOS-7-x86_64-DVD-1511.iso        /opt/centos   iso9660 defaults        0 0
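
# **(Optional check, my own addition)** Rather than finding out at the next reboot, you can dry-run the new entry now: unmount the ISO and let mount -a remount everything listed in fstab. If it comes back without errors and df shows the mount, the line parses.

[root@xnode1 ~]# umount /opt/centos
[root@xnode1 ~]# mount -a
[root@xnode1 ~]# df -h /opt/centos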

# Check that the repositories are usable; as long as the package counts aren't "0", you're basically fine.

[root@xnode1 ~]# yum clean all && yum repolist
Loaded plugins: fastestmirror
Cleaning repos: centos gpmall
Cleaning up everything
Loaded plugins: fastestmirror
centos                                                                | 3.6 kB  00:00:00     
gpmall                                                                | 2.9 kB  00:00:00     
(1/3): gpmall/primary_db                                              | 144 kB  00:00:00     
(2/3): centos/group_gz                                                | 155 kB  00:00:00     
(3/3): centos/primary_db                                              | 2.8 MB  00:00:00     
Determining fastest mirrors
repo id                                     repo name                                  status
centos                                      local                                      3,723
gpmall                                      local                                        165
repolist: 3,888

# **(Optional)** This step can be run or skipped; it just installs a few tools I personally use a lot.

[root@xnode1 ~]# yum install -y vim net-tools tree unzip


Setting Up the vsftpd Service ⚡


Xnode1:

# Install vsftpd to serve the yum repositories to the other nodes.

[root@xnode1 ~]# yum install -y vsftpd

# Configure vsftpd to expose the local yum repositories.

[root@xnode1 ~]# vim /etc/vsftpd/vsftpd.conf 
# Add the following line anywhere convenient
anon_root=/opt

# Start the service and enable it at boot.

[root@xnode1 ~]# systemctl start vsftpd
[root@xnode1 ~]# systemctl enable vsftpd
Created symlink from /etc/systemd/system/multi-user.target.wants/vsftpd.service to /usr/lib/systemd/system/vsftpd.service.
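
# **(Optional check, my own addition)** Since anon_root points at /opt, an anonymous FTP listing should show the centos and gpmall-repo directories; curl can confirm this.

[root@xnode1 ~]# curl ftp://xnode1/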


Xnode2:

# Configure the FTP repo (all three nodes got the hosts file at the start, so from here on I'll use hostnames instead of typing long IPs).

[root@xnode2 ~]# vim /etc/yum.repos.d/ftp.repo 
[centos]
name=ftp
baseurl=ftp://xnode1/centos
enabled=1
gpgcheck=0
[gpmall]
name=ftp
baseurl=ftp://xnode1/gpmall-repo
enabled=1
gpgcheck=0

# Send the ftp.repo configured on Xnode2 over to Xnode3.

[root@xnode2 ~]# scp /etc/yum.repos.d/ftp.repo xnode3:/etc/yum.repos.d/
The authenticity of host '192.168.123.13 (192.168.123.13)' can't be established.
ECDSA key fingerprint is 10:a1:26:13:f5:8b:f7:71:a1:e5:e9:1f:7c:cf:dc:47.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.123.13' (ECDSA) to the list of known hosts.
root@192.168.123.13's password: 
ftp.repo                                                 100%  155     0.2KB/s   00:00   


Xnode2 & Xnode3:

# On both nodes, check that the repositories are usable; seeing package counts means you're basically fine.

[root@xnode2 ~]# yum clean all && yum repolist
Loaded plugins: fastestmirror
Cleaning repos: centos gpmall
Cleaning up everything
Cleaning up list of fastest mirrors
Loaded plugins: fastestmirror
centos                                                                | 3.6 kB  00:00:00     
gpmall                                                                | 2.9 kB  00:00:00     
(1/3): centos/primary_db                                              | 2.8 MB  00:00:00     
(2/3): gpmall/primary_db                                              | 144 kB  00:00:00     
(3/3): centos/group_gz                                                | 155 kB  00:00:00     
Determining fastest mirrors
repo id                                      repo name                                 status
centos                                       ftp                                       3,723
gpmall                                       ftp                                         165
repolist: 3,888


Building the Read/Write-Splitting Database Cluster 🎭


Xnode2 & Xnode3:

# **(Optional)** This step can be run or skipped; it just installs a few tools I'm used to.

[root@xnode2 ~]# yum install -y vim net-tools tree

# Install the MariaDB database service on Xnode2 and Xnode3; Xnode2 shown as the example.

[root@xnode2 ~]# yum install -y mariadb mariadb-server

# Start the MariaDB database service and enable it at boot.

[root@xnode2 ~]# systemctl start mariadb
[root@xnode2 ~]# systemctl enable mariadb

# Initialize the database service. After setting the password (123456), just press Enter through the remaining prompts; the Jar package analysis section explains why the password has to be 123456.

[root@xnode2 ~]# mysql_secure_installation 

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): 
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] 
New password: 
Re-enter new password: 
Password updated successfully!
Reloading privilege tables..
 ... Success!


By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] 
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] 
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] 
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] 
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!


Xnode2:

# Edit Xnode2's my.cnf configuration file.

[root@xnode2 ~]# vim /etc/my.cnf
#
# This group is read both both by the client and the server
# use it for options that affect everything
#
[client-server]

[mysqld]
log_bin=mysql-bin
binlog_ignore_db=mysql
server_id=12

#
# include all files from the config directory
#
!includedir /etc/my.cnf.d

# Parameter notes:

  • log_bin=mysql-bin: enables the binlog and sets the file directory and filename prefix.
  • binlog_ignore_db=mysql: don't replicate the mysql system database. For several ignored databases, either repeat this line per database or list them on one line separated by commas.
  • server_id=12: unique database ID; the master's and slaves' IDs must never collide.

# Restart the service to reload the configuration file.

[root@xnode2 ~]# systemctl restart mariadb
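
# **(Optional check, my own addition)** Before pointing a slave at this master, you can confirm the binlog is actually on; the File/Position values will differ from machine to machine.

[root@xnode2 ~]# mysql -uroot -p123456 -e 'show master status;'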

# Configure the master side of replication and create the gpmall database (the database the mall needs).

[root@xnode2 ~]# mysql -uroot -p123456
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 10
Server version: 10.3.18-MariaDB-log MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> grant all privileges on *.* to root@'%' identified by '123456';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> grant replication slave on *.* to 'root'@'xnode3' identified by '123456';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> create database gpmall;
Query OK, 1 row affected (0.001 sec)

MariaDB [(none)]> exit
Bye


Xnode3:

# Edit Xnode3's my.cnf configuration file.

[root@xnode3 ~]# vim /etc/my.cnf
#
# This group is read both both by the client and the server
# use it for options that affect everything
#
[client-server]

[mysqld]
log_bin=mysql-bin
binlog_ignore_db=mysql
server_id=13

#
# include all files from the config directory
#
!includedir /etc/my.cnf.d

# Restart the service to reload the configuration file.

[root@xnode3 ~]# systemctl restart mariadb

# Configure the slave side and verify it; Slave_IO_Running: Yes and Slave_SQL_Running: Yes mean everything is fine (being able to see the gpmall database created on Xnode2 also confirms it).

[root@xnode3 ~]# mysql -uroot -p123456
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 9
Server version: 10.3.18-MariaDB-log MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> change master to master_host='xnode2',master_user='root',master_password='123456';
Query OK, 0 rows affected (0.101 sec)

MariaDB [(none)]> start slave;
Query OK, 0 rows affected (0.002 sec)

MariaDB [(none)]> show slave status\G
*************************** 1. row ***************************
                Slave_IO_State: Waiting for master to send event
                   Master_Host: xnode2
                   Master_User: root
                   Master_Port: 3306
                 Connect_Retry: 60
               Master_Log_File: mysql-bin.000003
           Read_Master_Log_Pos: 1008
                Relay_Log_File: xnode3-relay-bin.000004
                 Relay_Log_Pos: 1307
         Relay_Master_Log_File: mysql-bin.000003
              Slave_IO_Running: Yes
             Slave_SQL_Running: Yes
               Replicate_Do_DB: 
           Replicate_Ignore_DB: 
            Replicate_Do_Table: 
        Replicate_Ignore_Table: 
       Replicate_Wild_Do_Table: 
   Replicate_Wild_Ignore_Table: 
                    Last_Errno: 0
                    Last_Error: 
                  Skip_Counter: 0
           Exec_Master_Log_Pos: 1008
               Relay_Log_Space: 1988
               Until_Condition: None
                Until_Log_File: 
                 Until_Log_Pos: 0
            Master_SSL_Allowed: No
            Master_SSL_CA_File: 
            Master_SSL_CA_Path: 
               Master_SSL_Cert: 
             Master_SSL_Cipher: 
                Master_SSL_Key: 
         Seconds_Behind_Master: 0
 Master_SSL_Verify_Server_Cert: No
                 Last_IO_Errno: 0
                 Last_IO_Error: 
                Last_SQL_Errno: 0
                Last_SQL_Error: 
   Replicate_Ignore_Server_Ids: 
              Master_Server_Id: 12
                Master_SSL_Crl: 
            Master_SSL_Crlpath: 
                    Using_Gtid: No
                   Gtid_IO_Pos: 
       Replicate_Do_Domain_Ids: 
   Replicate_Ignore_Domain_Ids: 
                 Parallel_Mode: conservative
                     SQL_Delay: 0
           SQL_Remaining_Delay: NULL
       Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
              Slave_DDL_Groups: 4
Slave_Non_Transactional_Groups: 0
    Slave_Transactional_Groups: 0
1 row in set (0.000 sec)

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| gpmall             |
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0.000 sec)

MariaDB [(none)]> exit
Bye


Xnode1:

# Install the Java environment on Xnode1 and check that it works.

[root@xnode1 ~]# yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
[root@xnode1 ~]# java -version
openjdk version "1.8.0_222"
OpenJDK Runtime Environment (build 1.8.0_222-b10)
OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)

# Extract the Mycat database middleware and grant permissions on it.

[root@xnode1 ~]# tar -zvxf Mycat-server-1.6-RELEASE-20161028204710-linux.tar.gz -C /usr/local/
[root@xnode1 ~]# chmod -R 777 /usr/local/mycat/

# Add the Mycat environment variable to the /etc/profile system file and apply it with the source command.

[root@xnode1 ~]# vim /etc/profile
# Add the Mycat middleware environment variable anywhere convenient (I put it at the bottom)
export MYCAT_HOME=/usr/local/mycat/

[root@xnode1 ~]# source /etc/profile
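
# **(Optional check, my own addition)** Confirm the variable took effect:

[root@xnode1 ~]# echo $MYCAT_HOME
/usr/local/mycat/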

# Before touching the configuration files, make backups in case something breaks beyond recovery.

[root@xnode1 ~]# cp -rvf /usr/local/mycat/conf/schema.xml /usr/local/mycat/conf/schema.xml.txt
‘/usr/local/mycat/conf/schema.xml’ -> ‘/usr/local/mycat/conf/schema.xml.txt’
[root@xnode1 ~]# cp -rvf /usr/local/mycat/conf/server.xml /usr/local/mycat/conf/server.xml.txt 
‘/usr/local/mycat/conf/server.xml’ -> ‘/usr/local/mycat/conf/server.xml.txt’

# Configure Mycat's schema.xml (it manages MyCat's logical schemas, tables, sharding rules, DataNodes, and DataSources).

[root@xnode1 ~]# vim /usr/local/mycat/conf/schema.xml 
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
        <schema name="gpmall" checkSQLschema="true" sqlMaxLimit="100" dataNode="dn1"></schema>
        <dataNode name="dn1" dataHost="localhost1" database="gpmall" />
        <dataHost name="localhost1" maxCon="1000" minCon="10" balance="3" writeType="0" dbType="mysql" dbDriver="native" switchType="1"  slaveThreshold="100">
                <heartbeat>select user()</heartbeat>
                <!-- can have multi write hosts -->
                <writeHost host="hostM1" url="xnode2:3306" user="root" password="123456">
                        <readHost host="hostS2" url="xnode3:3306" user="root" password="123456" />
                </writeHost>
        </dataHost>
</mycat:schema>

# Configuration notes:

  • sqlMaxLimit: default limit on the number of rows a query returns.
  • database: the real database name.
  • balance="0": read/write splitting disabled; all reads go to the currently available writeHost.
  • balance="1": every readHost and standby writeHost takes part in SELECT load balancing. Put simply, in a dual-master dual-slave setup (M1->S1, M2->S2, with M1 and M2 as mutual backups), M2, S1, and S2 normally share the SELECT load.
  • balance="2": all reads are distributed randomly across writeHosts and readHosts.
  • balance="3": all reads are distributed randomly to the readHosts under a writeHost, and the writeHost carries no read load. Note that balance=3 only exists in 1.4 and later; 1.3 doesn't have it.
  • writeType="0": all writes go to the first configured writeHost; if it dies, traffic switches to the next surviving writeHost. After a restart, the post-switch host stays in effect; switches are recorded in the dnindex.properties file.
  • writeType="1": all writes are sent randomly to the configured writeHosts.

# Configure Mycat's server.xml (it holds Mycat's system configuration in two kinds of tags: user and system).

[root@xnode1 ~]# vim /usr/local/mycat/conf/server.xml
<!-- # Change the password and logical schema in the <user name="root"></user> tag -->
<user name="root">
    <property name="password">123456</property>
    <property name="schemas">gpmall</property>

<!-- Table-level DML privileges -->
    <!--            
    <privileges check="false">
		<schema name="TESTDB" dml="0110" >
			<table name="tb01" dml="0000"></table>
			<table name="tb02" dml="1111"></table>
		</schema>
	</privileges>           
    -->
</user>
<!-- # Delete the final <user name="user"></user> tag and its contents from server.xml, shown below -->
<user name="user">
        <property name="password">user</property>
        <property name="schemas">TESTDB</property>
        <property name="readOnly">true</property>
</user>

<!-- # Take care not to delete the following tag by mistake -->
</mycat:server>

# With the configuration done, we can start the Mycat middleware and check whether it's up; seeing ports 8066 and 9066 listening means Mycat started normally.

[root@xnode1 ~]# /usr/local/mycat/bin/mycat start
Starting Mycat-server...
[root@xnode1 ~]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1460/sshd           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1890/master         
tcp        0      0 127.0.0.1:32000         0.0.0.0:*               LISTEN      6071/java           
tcp6       0      0 :::48136                :::*                    LISTEN      6071/java           
tcp6       0      0 :::9066                 :::*                    LISTEN      6071/java           
tcp6       0      0 :::21                   :::*                    LISTEN      3250/vsftpd         
tcp6       0      0 :::22                   :::*                    LISTEN      1460/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      1890/master         
tcp6       0      0 :::1984                 :::*                    LISTEN      6071/java           
tcp6       0      0 :::8066                 :::*                    LISTEN      6071/java           
tcp6       0      0 :::56455                :::*                    LISTEN      6071/java       

# Install MariaDB-client to check whether read/write splitting works, and inspect the read/write-splitting status.

[root@xnode1 ~]# yum install -y MariaDB-client
[root@xnode1 ~]# mysql -h 127.0.0.1 -P9066 -uroot -p123456
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.29-mycat-1.6-RELEASE-20161028204710 MyCat Server (monitor)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show @@datasource;
+----------+--------+-------+--------+------+------+--------+------+------+---------+-----------+------------+
| DATANODE | NAME   | TYPE  | HOST   | PORT | W/R  | ACTIVE | IDLE | SIZE | EXECUTE | READ_LOAD | WRITE_LOAD |
+----------+--------+-------+--------+------+------+--------+------+------+---------+-----------+------------+
| dn1      | hostM1 | mysql | xnode2 | 3306 | W    |      0 |   10 | 1000 |      24 |         0 |          0 |
| dn1      | hostS2 | mysql | xnode3 | 3306 | R    |      0 |    3 | 1000 |      23 |         6 |          0 |
+----------+--------+-------+--------+------+------+--------+------+------+---------+-----------+------------+
2 rows in set (0.001 sec)

MySQL [(none)]> exit
Bye
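
# **(Optional check, my own addition)** A rough way to watch the split in action: push a few SELECTs through the data port 8066, then re-run show @@datasource on 9066. READ_LOAD should climb on hostS2 (xnode3) while hostM1 (xnode2) keeps the write load. A sketch:

[root@xnode1 ~]# mysql -h 127.0.0.1 -P8066 -uroot -p123456 -e 'select user();' gpmall
[root@xnode1 ~]# mysql -h 127.0.0.1 -P9066 -uroot -p123456 -e 'show @@datasource;'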

# Copy /root/gpmall-cluster/gpmall.sql over to Xnode2 so the mall data can be imported there.

[root@xnode1 ~]# scp /root/gpmall-cluster/gpmall.sql xnode2:/root
root@xnode2's password: 
gpmall.sql                              100%   58KB  57.9KB/s   00:00  


Xnode2:

# Switch to Xnode2, enter the database, and import the gpmall.sql file copied from Xnode1 into the gpmall database.

[root@xnode2 ~]# mysql -uroot -p123456
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 32
Server version: 10.3.18-MariaDB-log MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> use gpmall
Database changed

MariaDB [gpmall]> source /root/gpmall.sql
Query OK, 0 rows affected (0.000 sec)
...
# Output truncated here; too long to paste in full
...
Query OK, 0 rows affected (0.000 sec)

MariaDB [gpmall]> show tables;
+--------------------+
| Tables_in_gpmall   |
+--------------------+
| tb_address         |
| tb_base            |
| tb_comment         |
| tb_comment_picture |
| tb_comment_reply   |
| tb_dict            |
| tb_express         |
| tb_item            |
| tb_item_cat        |
| tb_item_desc       |
| tb_log             |
| tb_member          |
| tb_order           |
| tb_order_item      |
| tb_order_shipping  |
| tb_panel           |
| tb_panel_content   |
| tb_payment         |
| tb_refund          |
| tb_stock           |
| tb_user_verify     |
+--------------------+
21 rows in set (0.000 sec)

MariaDB [gpmall]> exit
Bye


Xnode1:

# On Xnode1 we can see the imported tables and data, which shows the read/write-splitting setup can read them.

[root@xnode1 ~]# mysql -h 127.0.0.1 -P8066 -uroot -p123456
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 26
Server version: 5.6.29-mycat-1.6-RELEASE-20161028204710 MyCat Server (OpenCloundDB)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases;
+----------+
| DATABASE |
+----------+
| gpmall   |
+----------+
1 row in set (0.000 sec)

MySQL [(none)]> use gpmall
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MySQL [gpmall]> show tables;
+--------------------+
| Tables_in_gpmall   |
+--------------------+
| tb_address         |
| tb_base            |
| tb_comment         |
| tb_comment_picture |
| tb_comment_reply   |
| tb_dict            |
| tb_express         |
| tb_item            |
| tb_item_cat        |
| tb_item_desc       |
| tb_log             |
| tb_member          |
| tb_order           |
| tb_order_item      |
| tb_order_shipping  |
| tb_panel           |
| tb_panel_content   |
| tb_payment         |
| tb_refund          |
| tb_stock           |
| tb_user_verify     |
+--------------------+
21 rows in set (0.002 sec)

MySQL [gpmall]> exit
Bye


Building the ZooKeeper Cluster 🌀


Xnode1:

# Copy the zookeeper-3.4.14.tar.gz archive from Xnode1 to the other two nodes.

[root@xnode1 ~]# scp zookeeper-3.4.14.tar.gz xnode2:/root/
The authenticity of host 'xnode2 (192.168.123.12)' can't be established.
ECDSA key fingerprint is 10:a1:26:13:f5:8b:f7:71:a1:e5:e9:1f:7c:cf:dc:47.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'xnode2' (ECDSA) to the list of known hosts.
root@xnode2's password: 
zookeeper-3.4.14.tar.gz                          100%   36MB  35.9MB/s   00:00    

[root@xnode1 ~]# scp zookeeper-3.4.14.tar.gz xnode3:/root/
The authenticity of host 'xnode3 (192.168.123.13)' can't be established.
ECDSA key fingerprint is 10:a1:26:13:f5:8b:f7:71:a1:e5:e9:1f:7c:cf:dc:47.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'xnode3' (ECDSA) to the list of known hosts.
root@xnode3's password: 
zookeeper-3.4.14.tar.gz                          100%   36MB  35.9MB/s   00:00   


Xnode2 & Xnode3:

# Install the Java environment on Xnode2 and Xnode3 and check that it works; Xnode2 shown as the example.

[root@xnode2 ~]# yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
[root@xnode2 ~]# java -version
openjdk version "1.8.0_222"
OpenJDK Runtime Environment (build 1.8.0_222-b10)
OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)


Xnode2:

# Extract zookeeper-3.4.14.tar.gz into /opt (any other directory works too; personal preference).

[root@xnode2 ~]# tar -zvxf zookeeper-3.4.14.tar.gz -C /opt/

# Keep a backup before editing the configuration (copy the sample zoo_sample.cfg in the directory to a file named zoo.cfg).

[root@xnode2 ~]# cp -rvf /opt/zookeeper-3.4.14/conf/zoo_sample.cfg /opt/zookeeper-3.4.14/conf/zoo.cfg
‘/opt/zookeeper-3.4.14/conf/zoo_sample.cfg’ -> ‘/opt/zookeeper-3.4.14/conf/zoo.cfg’

# Edit the configuration file.

[root@xnode2 ~]# vim /opt/zookeeper-3.4.14/conf/zoo.cfg 
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=192.168.123.12:2888:3888
server.2=192.168.123.13:2888:3888

# Parameter notes:

  • initLimit: a ZooKeeper cluster runs several zk processes, one as leader and the rest as followers. When a follower first connects to the leader, quite a lot of data is transferred, especially if the follower lags far behind; initLimit caps how long a follower may take to finish the initial sync after connecting.
  • syncLimit: the maximum time allowed for messages, requests, and acknowledgements between a follower and the leader.
  • tickTime: the base unit for the two timeouts above; with initLimit set to 5, for instance, the timeout is 2000 ms * 5 = 10 seconds.
  • server.id=host:port1:port2: id is a number identifying the zk process, and it is also the content of the myid file under dataDir. host is the IP address of that zk process; port1 is the port followers and the leader exchange messages on, and port2 is the port used for leader election.
  • dataDir: same meaning as in standalone mode, except that in cluster mode it also holds a myid file. The myid file has a single line containing only a number from 1 to 255, which is the id from server.id above, identifying the zk process.

# Create the myid file under the dataDir path set in the config and write into it the id from the config's server.id (the final cat is optional, just a check for typos).

[root@xnode2 ~]# mkdir /tmp/zookeeper
[root@xnode2 ~]# echo 1 > /tmp/zookeeper/myid
[root@xnode2 ~]# cat /tmp/zookeeper/myid 
1

# Start the ZooKeeper service (this isn't a standalone deployment, so don't run status before the other node's ZooKeeper is up; it will report a connection error. Just leave it be: once the other node starts its ZooKeeper and leader election finishes, everything turns normal).

[root@xnode2 ~]# /opt/zookeeper-3.4.14/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

# **(Optional)** Here's a demo of what status looks like in cluster mode while only one ZooKeeper is up; don't be alarmed by the error (it resolves itself once the other node starts ZooKeeper).

[root@xnode2 ~]# /opt/zookeeper-3.4.14/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.


Xnode3:

# Extract the zookeeper-3.4.14.tar.gz archive on Xnode3.

[root@xnode3 ~]# tar -zvxf zookeeper-3.4.14.tar.gz -C /opt/

# The configuration is identical, so just copy it straight from Xnode2 instead of retyping all of it (another way to use scp).

[root@xnode3 ~]# scp xnode2:/opt/zookeeper-3.4.14/conf/zoo.cfg /opt/zookeeper-3.4.14/conf/
The authenticity of host 'xnode2 (192.168.123.12)' can't be established.
ECDSA key fingerprint is 10:a1:26:13:f5:8b:f7:71:a1:e5:e9:1f:7c:cf:dc:47.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'xnode2' (ECDSA) to the list of known hosts.
root@xnode2's password: 
zoo.cfg                                          100% 1000     1.0KB/s   00:00 

# Write the myid file (Xnode2's myid holds 1 and Xnode3's holds 2, matching server.id in the config; don't make them identical).

[root@xnode3 ~]# mkdir /tmp/zookeeper
[root@xnode3 ~]# echo 2 > /tmp/zookeeper/myid
[root@xnode3 ~]# cat /tmp/zookeeper/myid 
2

# Start the ZooKeeper service.

[root@xnode3 ~]# /opt/zookeeper-3.4.14/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

Xnode2 & Xnode3:

# Check the status on each node. One will be leader and the other follower (which means the ZooKeeper service is running).

[root@xnode2 ~]# /opt/zookeeper-3.4.14/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower

[root@xnode3 ~]# /opt/zookeeper-3.4.14/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: leader
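
# **(Optional check, my own addition)** You can also connect with the bundled zkCli.sh and list the root znode; a healthy quorum answers right away.

[root@xnode2 ~]# /opt/zookeeper-3.4.14/bin/zkCli.sh -server 192.168.123.13:2181
[zk: 192.168.123.13:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: 192.168.123.13:2181(CONNECTED) 1] quit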


Building the Kafka Cluster 🌠


Xnode1:

# Copy the kafka_2.11-1.1.1.tgz archive from Xnode1 to the other two nodes.

[root@xnode1 ~]# scp kafka_2.11-1.1.1.tgz xnode2:/root
root@xnode2's password: 
kafka_2.11-1.1.1.tgz                                       100%   55MB  54.8MB/s   00:01 
[root@xnode1 ~]# scp kafka_2.11-1.1.1.tgz xnode3:/root
root@xnode3's password:
kafka_2.11-1.1.1.tgz                                       100%   55MB  27.4MB/s   00:02 


Xnode2:

# Extract kafka_2.11-1.1.1.tgz into /opt (any other directory works too; personal preference).

[root@xnode2 ~]# tar -zvxf kafka_2.11-1.1.1.tgz -C /opt/

# Keep a backup before editing the configuration.

[root@xnode2 ~]# cp -rvf /opt/kafka_2.11-1.1.1/config/server.properties /opt/kafka_2.11-1.1.1/config/server.properties.txt
‘/opt/kafka_2.11-1.1.1/config/server.properties’ -> ‘/opt/kafka_2.11-1.1.1/config/server.properties.txt’

# Edit the configuration (search for each key below with "/" in the file; either change the lines where you find them, or comment them out and group the settings in one place so future edits are easier; I like the bottom of the file).

[root@xnode2 ~]# vim /opt/kafka_2.11-1.1.1/config/server.properties 
broker.id=1
zookeeper.connect=192.168.123.12:2181,192.168.123.13:2181
listeners=PLAINTEXT://192.168.123.12:9092

# Parameter notes:

  • broker.id: must differ on every machine.
  • zookeeper.connect: there are two ZooKeeper servers, so zookeeper.connect lists both here.
  • listeners: must be set when configuring a cluster, or later operations fail with leader-not-found errors. The other server keeps the same zookeeper.connect, but its broker.id and listeners must differ.

# Start the service and check that it's running.

[root@xnode2 ~]# /opt/kafka_2.11-1.1.1/bin/kafka-server-start.sh -daemon /opt/kafka_2.11-1.1.1/config/server.properties
[root@xnode2 ~]# jps
2503 QuorumPeerMain
2904 Kafka
2921 Jps


Xnode3:

# Extract kafka_2.11-1.1.1.tgz into /opt (any other directory works too; personal preference).

[root@xnode3 ~]# tar -zvxf kafka_2.11-1.1.1.tgz -C /opt/

# The contents are nearly identical, so just copy the finished config from Xnode2 and tweak it.

[root@xnode3 ~]# scp xnode2:/opt/kafka_2.11-1.1.1/config/server.properties /opt/kafka_2.11-1.1.1/config/
root@xnode2's password: 
server.properties                                          100% 6969     6.8KB/s   00:00 

# In the config, change broker.id=1 to broker.id=2 and listeners=PLAINTEXT://192.168.123.12:9092 to listeners=PLAINTEXT://192.168.123.13:9092; that's all.

[root@xnode3 ~]# vim /opt/kafka_2.11-1.1.1/config/server.properties 
broker.id=2
zookeeper.connect=192.168.123.12:2181,192.168.123.13:2181
listeners=PLAINTEXT://192.168.123.13:9092

# Start the service and check that it's running.

[root@xnode3 ~]# /opt/kafka_2.11-1.1.1/bin/kafka-server-start.sh -daemon /opt/kafka_2.11-1.1.1/config/server.properties
[root@xnode3 ~]# jps
6377 Kafka
6394 Jps
2543 QuorumPeerMain


Xnode2:

# Let's run a test: create a topic named demoTopic and see whether the service behaves.

[root@xnode2 ~]# /opt/kafka_2.11-1.1.1/bin/kafka-topics.sh --create --zookeeper xnode2:2181 --replication-factor 1 --partitions 1 --topic demoTopic
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Created topic "demoTopic".


Xnode3:

# Then check from Xnode3 whether the topic named demoTopic is visible.

[root@xnode3 ~]# /opt/kafka_2.11-1.1.1/bin/kafka-topics.sh --list --zookeeper xnode3:2181
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
demoTopic
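
# **(Optional check, my own addition)** For an end-to-end sketch, produce a message into demoTopic from Xnode2 with the bundled console producer, then read it back from Xnode3 with the console consumer (Ctrl+C stops the consumer).

[root@xnode2 ~]# echo hello | /opt/kafka_2.11-1.1.1/bin/kafka-console-producer.sh --broker-list 192.168.123.12:9092 --topic demoTopic
[root@xnode3 ~]# /opt/kafka_2.11-1.1.1/bin/kafka-console-consumer.sh --bootstrap-server 192.168.123.13:9092 --topic demoTopic --from-beginning
hello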

# **(Optional)** Notice the warning that pops up: it says the VM's core count doesn't match the default number of parallel GC threads.

# **(Optional)** You can shut the VM down and raise it to 2 cores, which makes the warning go away (leaving the VM as-is is also fine; it has no real effect on this exercise).
(screenshot: clearing the Kafka warning)
# **(Optional)** There are also methods online that edit the startup script, and the warning itself suggests a parameter to change. I tried it without any effect; maybe I did something wrong (adding -XX:ParallelGCThreads=1 inside export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G").

[root@xnode3 ~]# vim /opt/kafka_2.11-1.1.1/bin/kafka-server-start.sh 
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

if [ $# -lt 1 ];
then
        echo "USAGE: $0 [-daemon] server.properties [--override property=value]*"
        exit 1
fi
base_dir=$(dirname $0)

if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
    export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
fi

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G -XX:ParallelGCThreads=1"
fi

EXTRA_ARGS=${EXTRA_ARGS-'-name kafkaServer -loggc'}

COMMAND=$1
case $COMMAND in
  -daemon)
    EXTRA_ARGS="-daemon "$EXTRA_ARGS
    shift
    ;;
  *)
    ;;
esac

exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "$@"

# **(Optional)** Stop Kafka, start it again, and list the topics (same result for me; not sure whether it was my mistake. If it bothers you, just add VM cores, or add hardware if it's a physical server).

[root@xnode3 ~]# /opt/kafka_2.11-1.1.1/bin/kafka-server-stop.sh 
[root@xnode3 ~]# /opt/kafka_2.11-1.1.1/bin/kafka-server-start.sh -daemon /opt/kafka_2.11-1.1.1/config/server.properties 
[root@xnode3 ~]# /opt/kafka_2.11-1.1.1/bin/kafka-topics.sh --list --zookeeper xnode3:2181
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
demoTopic


Installing the Redis Service 🚄


Xnode1:

# Install the Redis service on Xnode1.

[root@xnode1 ~]# yum install -y redis

# Edit the Redis configuration: comment out bind 127.0.0.1 and turn off protected mode (disabling password checks), as shown below.

[root@xnode1 ~]# vim /etc/redis.conf
# bind 127.0.0.1
protected-mode no

# Start the Redis service and enable it at boot.

[root@xnode1 ~]# systemctl start redis
[root@xnode1 ~]# systemctl enable redis
Created symlink from /etc/systemd/system/multi-user.target.wants/redis.service to /usr/lib/systemd/system/redis.service.

# Check that Redis is running normally (seeing port 6379 means Redis is up).

[root@xnode1 ~]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:6379            0.0.0.0:*               LISTEN      3639/redis-server * 
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1104/sshd           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      2242/master         
tcp        0      0 127.0.0.1:32000         0.0.0.0:*               LISTEN      3850/java           
tcp6       0      0 :::9066                 :::*                    LISTEN      3850/java           
tcp6       0      0 :::6379                 :::*                    LISTEN      3639/redis-server * 
tcp6       0      0 :::37429                :::*                    LISTEN      3850/java           
tcp6       0      0 :::21                   :::*                    LISTEN      1264/vsftpd         
tcp6       0      0 :::22                   :::*                    LISTEN      1104/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      2242/master         
tcp6       0      0 :::1984                 :::*                    LISTEN      3850/java           
tcp6       0      0 :::8066                 :::*                    LISTEN      3850/java           
tcp6       0      0 :::48197                :::*                    LISTEN      3850/java 
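
# **(Optional check, my own addition)** With bind commented out and protected mode off, a ping against the node's real IP (not loopback) should come back with PONG.

[root@xnode1 ~]# redis-cli -h 192.168.123.11 ping
PONG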


Installing the Nginx Service 📺

Xnode1:

# Edit the hosts file, then copy Xnode1's finished version to the other two nodes.

[root@xnode1 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.123.11 xnode1
192.168.123.12 xnode2
192.168.123.13 xnode3

192.168.123.11 redis.mall
192.168.123.11 mysql.mall

192.168.123.12 zk1.mall
192.168.123.13 zk2.mall

192.168.123.12 kafka1.mall
192.168.123.13 kafka2.mall
[root@xnode1 ~]# scp /etc/hosts xnode2:/etc/hosts
root@xnode2's password: 
hosts                                                      100%  381     0.4KB/s   00:00
[root@xnode1 ~]# scp /etc/hosts xnode3:/etc/hosts
root@xnode3's password: 
hosts                                                      100%  381     0.4KB/s   00:00 
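
# **(Optional check, my own addition)** The jars resolve these service names through /etc/hosts, so a quick lookup on each node doesn't hurt; getent prints the address each name maps to.

[root@xnode2 ~]# getent hosts mysql.mall redis.mall zk1.mall kafka1.mall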

# Install the Nginx service on Xnode1.

[root@xnode1 ~]# yum install -y nginx

# Back up the original file before changing the configuration.

[root@xnode1 ~]# cp -rvf /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.txt
‘/etc/nginx/conf.d/default.conf’ -> ‘/etc/nginx/conf.d/default.conf.txt’

# Edit the Nginx configuration: use upstream blocks for load balancing, with the ip_hash method (each request is routed by a hash of the client IP).

# Strictly speaking, everything cashier-related can be left out: there's currently no payment-related jar, so port 8083 goes unused and configuring it changes nothing. See the gpmall project documentation.

[root@xnode1 ~]# vim /etc/nginx/conf.d/default.conf
upstream gpuser {
    server 192.168.123.12:8082;
    server 192.168.123.13:8082;
    ip_hash;
}

upstream gpshopping {
    server 192.168.123.12:8081;
    server 192.168.123.13:8081;
    ip_hash;
}

upstream gpcashier {
    server 192.168.123.12:8083;
    server 192.168.123.13:8083;
    ip_hash;
}

server {
    listen       80;
    server_name  localhost;

    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    location /user {
		proxy_pass http://gpuser;
    }

    location /shopping {
		proxy_pass http://gpshopping;
    }

    location /cashier {
		proxy_pass http://gpcashier;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}

# Move (or delete) the files in the existing web root, then copy the gpmall frontend files into the web root.

[root@xnode1 ~]# mv /usr/share/nginx/html/* /mnt/
[root@xnode1 ~]# cp -rvf /root/gpmall-cluster/dist/* /usr/share/nginx/html/
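
# **(Optional check, my own addition)** Nginx can validate the configuration before we start it:

[root@xnode1 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful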

# Start the Nginx service and enable it at boot.

[root@xnode1 ~]# systemctl start nginx
[root@xnode1 ~]# systemctl enable nginx
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.


Deploying the Backend Jar Packages 🕋

Xnode1:

# Copy /root/gpmall-cluster/*.jar (the files ending in .jar) from Xnode1 to Xnode2 and Xnode3.

[root@xnode1 ~]# scp /root/gpmall-cluster/*.jar xnode2:/root
root@xnode2's password: 
gpmall-shopping-0.0.1-SNAPSHOT.jar                         100%   46MB  45.6MB/s   00:00    
gpmall-user-0.0.1-SNAPSHOT.jar                             100%   37MB  37.2MB/s   00:01    
shopping-provider-0.0.1-SNAPSHOT.jar                       100%   51MB  51.1MB/s   00:01    
user-provider-0.0.1-SNAPSHOT.jar                           100%   58MB  58.3MB/s   00:01 

[root@xnode1 ~]# scp /root/gpmall-cluster/*.jar xnode3:/root
root@xnode3's password: 
gpmall-shopping-0.0.1-SNAPSHOT.jar                         100%   46MB  45.6MB/s   00:01    
gpmall-user-0.0.1-SNAPSHOT.jar                             100%   37MB  37.2MB/s   00:00    
shopping-provider-0.0.1-SNAPSHOT.jar                       100%   51MB  51.1MB/s   00:01    
user-provider-0.0.1-SNAPSHOT.jar                           100%   58MB  58.3MB/s   00:01  


Xnode2:

# Start the jars on Xnode2, in the order shown below.

# nohup keeps the process running without hangups, so closing the terminal doesn't affect it; by default the output goes to a nohup.out file in the current directory.

# & puts the command in the background.

# nohup and & are usually used together.

# Use "jobs" or "jps" to check on the services.

[root@xnode2 ~]# nohup java -jar shopping-provider-0.0.1-SNAPSHOT.jar &
[1] 8980
[root@xnode2 ~]# nohup: ignoring input and appending output to ‘nohup.out’

[root@xnode2 ~]# nohup java -jar user-provider-0.0.1-SNAPSHOT.jar &
[2] 8992
[root@xnode2 ~]# nohup: ignoring input and appending output to ‘nohup.out’

[root@xnode2 ~]# nohup java -jar gpmall-shopping-0.0.1-SNAPSHOT.jar &
[3] 9017
[root@xnode2 ~]# nohup: ignoring input and appending output to ‘nohup.out’

[root@xnode2 ~]# nohup java -jar gpmall-user-0.0.1-SNAPSHOT.jar &
[4] 9030
[root@xnode2 ~]# nohup: ignoring input and appending output to ‘nohup.out’

[root@xnode2 ~]# jobs
[1]   Running                 nohup java -jar shopping-provider-0.0.1-SNAPSHOT.jar &
[2]   Running                 nohup java -jar user-provider-0.0.1-SNAPSHOT.jar &
[3]-  Running                 nohup java -jar gpmall-shopping-0.0.1-SNAPSHOT.jar &
[4]+  Running                 nohup java -jar gpmall-user-0.0.1-SNAPSHOT.jar &
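
# **(Optional, my own addition)** All four jars append to the same nohup.out, so you can watch the Spring Boot startup logs live while they come up (Ctrl+C stops tail, not the jars).

[root@xnode2 ~]# tail -f nohup.out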


Xnode3:

# Start the jars on Xnode3 (in the same order as above).

[root@xnode3 ~]# nohup java -jar shopping-provider-0.0.1-SNAPSHOT.jar &
[1] 5074
[root@xnode3 ~]# nohup: ignoring input and appending output to ‘nohup.out’

[root@xnode3 ~]# nohup java -jar user-provider-0.0.1-SNAPSHOT.jar &
[2] 5085
[root@xnode3 ~]# nohup: ignoring input and appending output to ‘nohup.out’

[root@xnode3 ~]# nohup java -jar gpmall-shopping-0.0.1-SNAPSHOT.jar &
[3] 5097
[root@xnode3 ~]# nohup: ignoring input and appending output to ‘nohup.out’

[root@xnode3 ~]# nohup java -jar gpmall-user-0.0.1-SNAPSHOT.jar &
[4] 5108
[root@xnode3 ~]# nohup: ignoring input and appending output to ‘nohup.out’

[root@xnode3 ~]# jobs
[1]   Running                 nohup java -jar shopping-provider-0.0.1-SNAPSHOT.jar &
[2]   Running                 nohup java -jar user-provider-0.0.1-SNAPSHOT.jar &
[3]-  Running                 nohup java -jar gpmall-shopping-0.0.1-SNAPSHOT.jar &
[4]+  Running                 nohup java -jar gpmall-user-0.0.1-SNAPSHOT.jar &

# If some jars die while starting up, start the dead ones again; if a jar keeps dying after several attempts, it's probably a configuration problem.

# The first time you start the jars, I recommend opening four terminals and running java -jar xxxxxx.jar for each jar in the foreground; that makes problems much easier to pin down, as in the screenshot below.
(screenshot: pinpointing jar startup errors)


# And with that, the deployment is done; open a browser and check the result. (Even after all the jars are up, the page may need a while before data appears, since Spring Boot starts slowly; be patient, and run jobs now and then to make sure no jar died. The last two jars easily exit with errors when the first two haven't fully started yet.)
(screenshot: the mall frontend)


Analyzing the Jar Packages 🕵️‍♂️


# Note: this jar analysis is entirely my own take and reflects the limits of my current knowledge. I don't do backend development, so I may be fuzzy on the dev side; most of it is consolidated from things I looked up before, analyzed from my ops perspective. (Consider it my train of thought: the first time I did this, nothing would start, it kept reporting the database was unreachable and the hostname mappings were wrong, so the only option left was to crack the jars open.) Some of it may be wrong; please do point that out.

Questions that came up along the way:

# Some readers may wonder: can the database password be set to something else? Why must the database be named gpmall? Why does the hosts file need so many entries of the xxx.mall (hostname.domain) form? Can they be skipped, or written differently?

# None of that is my decision, of course: it's hard-coded inside the jars, so we can only follow their requirements.

# Since no developer told me what domain the JDBC connection uses, what port, which user, which password, or which database name, we have to take the jars apart and look for ourselves.


# Create a directory to hold the unpacked jar contents (/mnt would also do, but I dumped some files there earlier, and mixing the jar contents in would make the directory structure hard to read).

[root@xnode1 ~]# mkdir /root/parsing

# Install the unzip compression tool (already installed at the start along with my usual tools 😄).

[root@xnode1 ~]# yum install -y unzip

# Unpack the jar

# Quick note on the command: unzip /xxx/xxx -d (extract into the given directory) /xxx/xxxx.

[root@xnode1 ~]# unzip /root/gpmall-cluster/user-provider-0.0.1-SNAPSHOT.jar -d /root/parsing/

# **(Optional)** Here I list the jar's directory structure.

# Quick note on the command: tree (the tool installed at the start) -d (directories only) -L (levels to show).

[root@xnode1 ~]# tree -d -L 2 /root/parsing/
/root/parsing/
├── BOOT-INF
│   ├── classes
│   └── lib
├── META-INF
│   └── maven
└── org
    └── springframework

7 directories

# Quick notes on the structure (my personal reading):

  • BOOT-INF/classes holds the business code and configuration files.
  • BOOT-INF/lib holds every dependency apart from the JVM itself (e.g. the jars pulled in as Maven dependencies and third-party jars).
  • META-INF mainly describes the jar: it's the (meta information) signature folder, chiefly the MANIFEST.MF file (the manifest) holding the jar's descriptive info.
  • org/springframework holds the class files Spring Boot uses to launch the jar; they load the application's compiled class files so the program can run normally.

# cd into /root/parsing/BOOT-INF/classes/, the directory we want to inspect (we'll keep looking at files in there, and I don't want to type the long path every time).

[root@xnode1 ~]# cd /root/parsing/BOOT-INF/classes/

# **(Optional)** The main goal is the .yml and .xml files under the jar's BOOT-INF/classes directory.

# Here I list the BOOT-INF/classes directory layout (optional).

# The "." after -L 1 means the current directory (we already cd'd in, so "." saves typing the full path).

[root@xnode1 classes]# tree -L 1 .
.
├── application-dev.yml
├── application-prod.yml
├── application-test.yml
├── application.yml
├── com
├── emailTemplate
└── mybatis-generater.xml

2 directories, 5 files

# You can see several .yml configuration files and one .xml configuration file.

# Of these, application-dev.yml, application-prod.yml, application-test.yml, and mybatis-generater.xml are the project's configuration files.

# application.yml (Spring Boot's configuration file, a mandatory one; it declares the current environment): I'll roughly explain some of its parameters, though I'm not too clear on the details myself.

# Before mybatis and pagehelper can be used, the matching dependencies must be added to pom.xml (developers would know that part better; a rough read is enough for now. What matters is confirming which environment is active, i.e. which of the three .yml files (dev, prod, test) gets used).

[root@xnode1 classes]# vim application.yml 
spring:
  profiles:
    # environment setting (dev is the default; developers can change it as needed)
    active: dev

# mybatis configuration
mybatis:
  # location of the mapper.xml files
  mapper-locations: classpath*:com/gpmall/user/dal/persistence/*Mapper.xml
  # entity classes matching the mappers
  type-aliases-package: com.gpmall.user.dal.entitys

# pagehelper pagination settings (pagehelper can also be configured in code; up to the developer)
pagehelper:
  helper-dialect: mysql
  reasonable: true
  support-methods-arguments: true
  params: count=countSql

# From the above we know we're in the dev environment, so our first guess is that the project configuration in use is application-dev.yml. To be safe, we can check which .yml the mybatis-generater.xml file references (too long to paste in full; on 👉line 8👈 you can see the <properties resource="application-dev.yml"/> tag).

[root@xnode1 classes]# vim mybatis-generater.xml 
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE generatorConfiguration
        PUBLIC "-//mybatis.org//DTD MyBatis Generator Configuration 1.0//EN"
        "http://mybatis.org/dtd/mybatis-generator-config_1_0.dtd">

<generatorConfiguration>
    <properties resource="application-dev.yml"/>

    <context id="Mysql" targetRuntime="MyBatis3Simple" defaultModelType="flat">
        <property name="beginningDelimiter" value="`"/>
        <property name="endingDelimiter" value="`"/>
        <plugin type="tk.mybatis.mapper.generator.MapperPlugin">
            <property name="mappers" value="com.gpmall.commons.tool.tkmapper.TkMapper"/>
        </plugin>

<!-- database connection -->
        <jdbcConnection driverClass="com.mysql.jdbc.Driver"
                        connectionURL="jdbc:mysql://mysql.mall:8066/gpmall?useUnicode=true&amp;characterEncoding=utf8&amp;useOldAliasMetadataBehavior=true&amp;zeroDateTimeBehavior=convertToNull&amp;allowMultiQueries=true&amp;serverTimezone=UTC"
                        userId="root"
                        password="123456">
            <property name="useInformationSchema" value="true"/>
        </jdbcConnection>

# That confirms the referenced configuration file is application-dev.yml, so now we can look at what's written inside it.

[root@xnode1 classes]# cat application-dev.yml 
spring:
  datasource:
    url: jdbc:mysql://mysql.mall:8066/gpmall?useUnicode=true&characterEncoding=utf8&useOldAliasMetadataBehavior=true&zeroDateTimeBehavior=convertToNull&allowMultiQueries=true&serverTimezone=UTC
    username: root
    password: 123456
    driver-class-name: com.mysql.jdbc.Driver
    type: com.alibaba.druid.pool.DruidDataSource
    initialSize: 2
    minIdle: 1
    maxActive: 20
    maxWait: 60000
    timeBetweenEvictionRunsMillis: 300000
    validationQuery: SELECT 1 FROM DUAL
    testWhileIdle: true
    testOnBorrow: false
    testOnReturn: false
    poolPreparedStatements: false
    maxPoolPreparedStatementPerConnectionSize: 20
    filters: stat,config
    connectionProperties: druid.stat.mergeSql=true;druid.stat.slowSqlMillis=500
    useGlobalDataSourceStat: true
  redisson:
    address: redis.mall:6379
#    address: redis4gpmall.redis.rds.aliyuncs.com:6379
#    password: redis4GPMALL
    timeout: 3000
    database: 0
    pool:
      max-active: 20
      max-idle: 10
      max-wait: 3000
      min-idle: 4
  ## The settings below are parameters Spring Boot autoconfig loads by default at initialization, but the user can rebuild them
  ## See kafkaConfig for the custom-built beans
  kafka:
    bootstrapServers: kafka1.mall:9092,kafka2.mall:9092,kafka3.mall:9092
    consumer:
      auto-offset-reset: latest
      key-serializer: org.apache.kafka.common.serialization.StringDeserializer
      value-serializer: org.apache.kafka.common.serialization.StringDeserializer
      properties:
        spring:
          json:
            trusted:
              packages: com.gpmall.user.*
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer

dubbo:
  application:
    name: user-service
    owner: Mic
  protocol:
    name: dubbo
    port: 20880
  registry:
    address: zookeeper://zk1.mall:2181,zookeeper://zk2.mall:2181,zookeeper://zk3.mall:2181
    check: true
    group: dubbo-dev
    simplified: true
  metadata-report:
    address: zookeeper://zk1.mall:2181,zookeeper://zk2.mall:2181,zookeeper://zk3.mall:2181
    retry-times: 30
    cycle-report: false
    group: dubbo-dev
  scan:
    base-packages: com.gpmall.user.services
lock:
  zookeeper:
    zkHosts: zookeeper://zk1.mall:2181,zookeeper://zk2.mall:2181,zookeeper://zk3.mall:2181
    sessionTimeout: 30000
    connectionTimeout: 30000
    ## share a single zk connection
    singleton: true
    ## global path prefix, commonly used to separate different applications
    namespace: zkLock
email:
  mail-server-host: smtp.163.com
  mail-server-port: 25
  from-address: xxxx@163.com
  to-addresss:
  cc-addresss:
  username:
  password:
  mailSmtpAuth: true
  subject: 激活邮件,请点击激活
  content:
  template-path: emailTemplate
  userMailActiveUrl: http://localhost:9999/gpmall-user/gpuser/verify

# Here you can see the connection addresses and ports for mysql, zookeeper, kafka, and redis (which is exactly why the hosts file is written the way it is), plus the mysql user, password, and other details.

# zookeeper and kafka each have several connection domains; zookeeper, for example, has three (zk1.mall, zk2.mall, zk3.mall), and using any one of them is enough.

# 2021-11-25: fixed an oversight in the /etc/hosts file written in the Nginx installation section. It doesn't affect whether the mall comes up, but it went against the original design: one of the two zk1.mall entries in /etc/hosts is now zk2.mall, and one of the two kafka1.mall entries is now kafka2.mall.
