PanWeiDB (磐维数据库) V2.0-S3.1.1_B01 Centralized Installation: One Primary, Two Standbys
0 Overview
The project team's localization migration is under way, and PanWeiDB was chosen as the domestic database. The installation plan for the migration is already settled: the test environment runs the centralized V2.0-S3.1.0 edition, and the pre-production environment runs the centralized V2.0-S3.1.1 edition.
This article describes how to install the centralized V2.0-S3.1.1 edition, covering environment planning, environment configuration, and installation/deployment.
1 Environment Planning
Hostname | IP Address | Node Type | Database Port | cmServer Port | OS Version | Memory | CPU
---|---|---|---|---|---|---|---
pwdb01 | 192.168.131.14 | Primary | 17700 | 18800 | RHEL 7.6 | 8 GB | 2P2C
pwdb02 | 192.168.131.15 | Standby | 17700 | 18800 | RHEL 7.6 | 8 GB | 2P2C
pwdb03 | 192.168.131.16 | Standby | 17700 | 18800 | RHEL 7.6 | 8 GB | 2P2C
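Before any configuration, it can help to confirm that each host actually matches the plan above. This quick check is only a convenience, not part of the official procedure:
# Run on every host and compare the output with the planning table
free -g                                   # total memory
lscpu | grep -E '^CPU\(s\)|^Socket'       # logical CPUs and sockets
cat /etc/redhat-release                   # OS release
hostname; ip -4 addr show | grep inet     # hostname and IPv4 addresses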
2 Environment Configuration
Perform the steps below as root. Unless otherwise noted, every host needs the same configuration.
2.1 Allow root login over SSH
# cat /etc/ssh/sshd_config | grep PermitRootLogin
PermitRootLogin yes
# Restart the sshd service
systemctl restart sshd
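If PermitRootLogin is not already set to yes, a one-liner such as the following (an illustrative sketch, not from the vendor manual) can switch it before restarting sshd:
# Uncomment/overwrite any existing PermitRootLogin line
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
systemctl restart sshd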
2.2 Firewall configuration
Option 1: disable the firewall
systemctl stop firewalld
systemctl disable firewalld
Option 2: open the database port
firewall-cmd --zone=public --permanent --add-port=17700/tcp
firewall-cmd --reload
firewall-cmd --list-port
2.3 SELinux configuration
setenforce 0
getenforce
vi /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled.
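The same edit can be scripted; this is just a convenience sketch, not part of the official steps:
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config   # should now show SELINUX=disabled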
2.4 Time zone configuration
Check the time zone:
timedatectl
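If the time zone needs to be changed, timedatectl can set it directly; Asia/Shanghai below is only an assumed example, use whatever zone your environment requires:
timedatectl set-timezone Asia/Shanghai   # assumed example zone
timedatectl | grep "Time zone"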
2.5 Time synchronization
yum install -y ntp
If there is no external time server, pick one node in the cluster to act as the NTP server (here the primary node 192.168.131.14 is used).
On the chosen node, configure /etc/ntp.conf:
server 127.127.1.0
fudge 127.127.1.0 stratum 10
driftfile /var/lib/ntp/drift
broadcastdelay 0.008
On the other nodes of the HA cluster, edit /etc/ntp.conf:
server 192.168.131.14 prefer
driftfile /var/lib/ntp/drift
broadcastdelay 0.008
[Note]
If /etc/ntp.conf is modified while the ntp service is running, restart the ntp service for the change to take effect.
Start the ntpd service on every node:
systemctl stop ntpd
systemctl start ntpd
systemctl enable ntpd
systemctl status ntpd
Check time synchronization. Wait about 5 minutes for the time server to start serving, then verify that the clocks on all nodes are in sync:
date
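Besides comparing date output, ntpq can show whether a peer has actually been selected (the line marked with * is the current sync source); this extra check is optional:
ntpq -p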
Once the time is synchronized, write the system time to the hardware clock on every node:
hwclock -w
2.6 Kernel parameter configuration
Key parameters (a worked calculation follows the list):
- kernel.shmall: total number of shared memory pages the system allows. Set it to roughly MEM (80% of physical memory is recommended, in bytes) / PAGE_SIZE (obtained with getconf PAGE_SIZE). If it is too small, the database may fail to start.
- kernel.shmmax: maximum size of a single shared memory segment, in bytes. Recommended: half of physical memory, and at least larger than shared_buffers.
- kernel.shmmni: system-wide maximum number of shared memory segments; default 4096.
- vm.dirty_background_bytes: amount of dirty page data, in bytes, that triggers background writeback; once exceeded, dirty pages are flushed to disk.
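A small sketch of how the recommended shmall/shmmax values can be derived from the actual memory size (an illustration of the formulas above, not an official script):
# 0.8 * MEM / PAGE_SIZE (pages) and 0.5 * MEM (bytes)
MEM_BYTES=$(awk '/MemTotal/ {printf "%.0f", $2 * 1024}' /proc/meminfo)
PAGE_SIZE=$(getconf PAGE_SIZE)
echo "kernel.shmall >= $(( MEM_BYTES * 8 / 10 / PAGE_SIZE ))"
echo "kernel.shmmax >= $(( MEM_BYTES / 2 ))"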
Check memory:
[root@pwdb_p ~]# free
              total        used        free      shared  buff/cache   available
Mem:        3861508      110036     3598016       11824      153456     3534336
Swap:       4194300           0     4194300
Check the semaphore limits:
[root@pwdb_p ~]# ipcs -ls
------ Semaphore Limits --------
max number of arrays = 128
max semaphores per array = 250
max semaphores system wide = 32000
max ops per semop call = 32
semaphore max value = 32767
These values map to the kernel.sem fields as follows:
max number of arrays = 128 (SEMMNI)
max semaphores per array = 250 (SEMMSL)
max semaphores system wide = 32000 (SEMMNS)
max ops per semop call = 32 (SEMOPM)
Configure the system parameters:
vi /etc/sysctl.conf
fs.aio-max-nr=1048576
fs.file-max= 76724600
kernel.sem = 250 32000 250 128
kernel.shmall = 1677721 # pages, 0.8 * MEM/PAGE_SIZE or higher
kernel.shmmax = 4294967296 # bytes, 0.5 * MEM or higher
kernel.shmmni = 819200
net.core.netdev_max_backlog = 10000
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 4194304
net.core.somaxconn = 4096
net.ipv4.tcp_fin_timeout = 5
vm.dirty_background_bytes = 409600000
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 80
vm.dirty_writeback_centisecs = 50
vm.overcommit_memory = 0
vm.swappiness = 0
net.ipv4.ip_local_port_range = 40000 65535
fs.nr_open = 20480000
kernel.core_pattern = /database/panweidb/corefile/core-%e-%p-%t
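After saving /etc/sysctl.conf, the new values can be loaded and spot-checked without a reboot (standard sysctl usage, shown here for convenience):
sysctl -p
sysctl kernel.shmall kernel.shmmax vm.swappiness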
2.7 Character set
Configure the character set:
echo "export LANG=en_US.UTF-8" >> .bash_profile
2.8 Disable transparent huge pages
Permanently disable transparent huge pages (THP).
vi /etc/systemd/system/disable-thp.service
# Add the following configuration
[Unit]
Description=Disable Transparent Huge Pages (THP)
[Service]
Type=simple
ExecStart=/bin/sh -c "echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled && echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag"
[Install]
WantedBy=multi-user.target
Reload systemd and enable the service at boot:
systemctl daemon-reload
systemctl start disable-thp
systemctl enable disable-thp
Check the THP status:
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag
When both commands return always madvise [never], THP has been permanently disabled.
2.9 IPC parameter configuration
When RemoveIPC=yes, the operating system removes a user's IPC resources (shared memory segments and semaphores) when that user logs out. That would clear the IPC resources used by the PanWeiDB server and could bring the database down, so RemoveIPC must be set to no.
echo "RemoveIPC=no" >> /etc/systemd/logind.conf
echo "RemoveIPC=no" >> /usr/lib/systemd/system/systemd-logind.service
Reload the configuration:
systemctl daemon-reload
systemctl restart systemd-logind
Check whether the change has taken effect. Because RemoveIPC defaults to off on CentOS, the following commands return no output:
loginctl show-session | grep RemoveIPC
systemctl show systemd-logind | grep RemoveIPC
2.10 Install database dependencies
yum -y install libaio-devel flex bison ncurses-devel glibc-devel patch readline-devel python3 expect* bzip2 libnsl gcc gcc-c++ zlib-devel
Do not build python3 from source if you can avoid it; a source-built python3 tends to cause errors during the database preinstall step.
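A quick way to confirm everything landed (an optional check; the list roughly mirrors the yum line above):
for pkg in libaio-devel flex bison ncurses-devel glibc-devel patch readline-devel python3 expect bzip2 libnsl gcc gcc-c++ zlib-devel; do
    rpm -q "$pkg" >/dev/null 2>&1 || echo "MISSING: $pkg"
done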
2.11 Check the python3 version
[root@panwei01 Python-3.7.4]# python3 --version
Python 3.7.4
2.12 Configure IP-to-hostname mapping
On every node, edit /etc/hosts and add entries for all nodes.
vi /etc/hosts
192.168.131.14 pwdb01
192.168.131.15 pwdb02
192.168.131.16 pwdb03
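Once /etc/hosts is in place on every node, name resolution and reachability can be verified with a small loop (a convenience check, not an official step):
for h in pwdb01 pwdb02 pwdb03; do
    getent hosts "$h"
    ping -c 1 -W 1 "$h" >/dev/null && echo "$h reachable" || echo "$h NOT reachable"
done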
2.13 Change the hostname
Set the hostname on every node; the change takes effect after a reboot.
vi /etc/hostname
Set the hostnames to pwdb01, pwdb02 and pwdb03 respectively.
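Alternatively, hostnamectl applies the new name immediately without a reboot (run the matching command on each node; shown here as an option, not the article's original method):
hostnamectl set-hostname pwdb01      # pwdb02 / pwdb03 on the other nodes
hostnamectl status | grep "Static hostname"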
3 Installation and Deployment
3.1 Create the user and group
Create the database installation group and user on the primary node.
groupadd -g 1101 dbgrp
useradd -g dbgrp -u 1101 -m omm
passwd omm
3.2 Create the installation directories
Create the installation directories on the primary node.
mkdir -p /database/panweidb
mkdir -p /database/panweidb/archive
mkdir -p /database/panweidb/pg_audit
3.3 Upload and extract the installation packages
Directory for the installation packages:
mkdir -p /database/panweidb/soft
Upload the installation package for the target version:
[root@pwdb_p soft]# ls -lrt
total 668744
-rw-r--r-- 1 root root 684790649 Jun 11 08:41 PanWeiDB_V2.0-S3.1.0_B01-install-centos_7-x86_64-no_mot.tar.gz
Extract the database installation package:
[root@pwdb_p soft]# cd /database/panweidb/soft
[root@pwdb_p soft]# tar -zxvf PanWeiDB_V2.0-S3.1.0_B01-install-centos_7-x86_64-no_mot.tar.gz
Extract the OM (Operation Manager) package:
[root@pwdb_p soft]# tar -zxvf PanWeiDB_V2.0-S3.1.0_B01-CentOS-64bit-om.tar.gz
3.4 Configure the XML file
Configure cluster_config.xml according to the deployment plan. Copy the template file:
[root@pwdb_p soft]# cp /database/panweidb/soft/script/gspylib/etc/conf/cluster_config_template.xml /database/panweidb/soft/cluster_config.xml
Edit the configuration:
[root@pwdb_p soft]# vi /database/panweidb/soft/cluster_config.xml
One-primary-two-standby configuration:
<?xml version="1.0" encoding="utf-8"?>
<ROOT>
  <CLUSTER>
    <PARAM name="clusterName" value="panweidb" />
    <PARAM name="nodeNames" value="pwdb01,pwdb02,pwdb03"/>
    <PARAM name="gaussdbAppPath" value="/database/panweidb/app"/>
    <PARAM name="gaussdbLogPath" value="/database/panweidb/log" />
    <PARAM name="tmpMppdbPath" value="/database/panweidb/tmp"/>
    <PARAM name="gaussdbToolPath" value="/database/panweidb/tool"/>
    <PARAM name="corePath" value="/database/panweidb/corefile"/>
    <PARAM name="backIp1s" value="192.168.131.14,192.168.131.15,192.168.131.16"/>
    <!-- VIP configuration -->
    <PARAM name="floatIp1" value="192.168.131.254"/>
  </CLUSTER>
  <DEVICELIST>
    <DEVICE sn="pwdb01">
      <PARAM name="name" value="pwdb01"/>
      <PARAM name="azName" value="AZ1"/>
      <PARAM name="azPriority" value="1"/>
      <PARAM name="backIp1" value="192.168.131.14"/>
      <PARAM name="sshIp1" value="192.168.131.14"/>
      <PARAM name="cmsNum" value="1"/>
      <PARAM name="cmServerPortBase" value="18800"/>
      <PARAM name="cmServerListenIp1" value="192.168.131.14,192.168.131.15,192.168.131.16"/>
      <PARAM name="cmServerHaIp1" value="192.168.131.14,192.168.131.15,192.168.131.16"/>
      <PARAM name="cmServerlevel" value="1"/>
      <PARAM name="cmServerRelation" value="pwdb01,pwdb02,pwdb03"/>
      <PARAM name="cmDir" value="/database/panweidb/cm"/>
      <PARAM name="dataNum" value="1"/>
      <PARAM name="dataPortBase" value="17700"/>
      <PARAM name="dataNode1" value="/database/panweidb/data,pwdb02,/database/panweidb/data,pwdb03,/database/panweidb/data"/>
      <PARAM name="dataNode1_syncNum" value="1"/>
      <!-- VIP configuration -->
      <PARAM name="dataListenIp1" value="192.168.131.14,192.168.131.15,192.168.131.16"/>
      <PARAM name="floatIpMap1" value="floatIp1,floatIp1,floatIp1"/>
    </DEVICE>
    <DEVICE sn="pwdb02">
      <PARAM name="name" value="pwdb02"/>
      <PARAM name="azName" value="AZ1"/>
      <PARAM name="azPriority" value="1"/>
      <PARAM name="backIp1" value="192.168.131.15"/>
      <PARAM name="sshIp1" value="192.168.131.15"/>
      <PARAM name="cmServerPortStandby" value="18800"/>
      <PARAM name="cmDir" value="/database/panweidb/cm"/>
    </DEVICE>
    <DEVICE sn="pwdb03">
      <PARAM name="name" value="pwdb03"/>
      <PARAM name="azName" value="AZ1"/>
      <PARAM name="azPriority" value="1"/>
      <PARAM name="backIp1" value="192.168.131.16"/>
      <PARAM name="sshIp1" value="192.168.131.16"/>
      <PARAM name="cmServerPortStandby" value="18800"/>
      <PARAM name="cmDir" value="/database/panweidb/cm"/>
    </DEVICE>
  </DEVICELIST>
</ROOT>
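Before running the preinstall it may be worth confirming the file is at least well-formed XML; python3 is already installed as a dependency, so a one-liner like this (a convenience check only, gs_preinstall still validates the actual content) will catch tag or quoting typos:
python3 -c "import sys, xml.dom.minidom; xml.dom.minidom.parse(sys.argv[1]); print('cluster_config.xml is well-formed')" /database/panweidb/soft/cluster_config.xml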
3.5 Preinstallation
Running the gs_preinstall script as root prepares the installation environment automatically:
- Sets Linux kernel parameters to improve the server's load capacity. These parameters directly affect how the database system runs; adjust them only when you are sure it is necessary.
- Copies the XML configuration file and the installation package to the same directory on the other hosts, and creates the installation user and group if they do not exist.
- Reads the directory settings from the XML configuration file, creates the directories, and grants them to the installation user.
Grant ownership of the installation directory as root:
chown -R omm:dbgrp /database/panweidb
chmod -R 755 /database/panweidb
Run the gs_preinstall script as root:
cd /database/panweidb/soft/script/
./gs_preinstall -U omm -G dbgrp -X ../cluster_config.xml --sep-env-file=/home/omm/panweidb.env
Preinstallation log:
Parsing the configuration file.
Successfully parsed the configuration file.
Installing the tools on the local node.
Successfully installed the tools on the local node.
Are you sure you want to create trust for root (yes/no)?yes
Please enter password for root
Please enter password for current user[root].
Password:
Checking network information.
All nodes in the network are Normal.
Successfully checked network information.
Creating SSH trust.
Creating the local key file.
Successfully created the local key files.
Appending local ID to authorized_keys.
Successfully appended local ID to authorized_keys.
Updating the known_hosts file.
Successfully updated the known_hosts file.
Appending authorized_key on the remote node.
Successfully appended authorized_key on all remote node.
Checking common authentication file content.
Successfully checked common authentication content.
Distributing SSH trust file to all node.
Distributing trust keys file to all node successfully.
Successfully distributed SSH trust file to all node.
Verifying SSH trust on all hosts.
Verifying SSH trust on all hosts by ip.
Successfully verified SSH trust on all hosts by ip.
Verifying SSH trust on all hosts by hostname.
Successfully verified SSH trust on all hosts.
Successfully created SSH trust.
Successfully created SSH trust for the root permission user.
Setting host ip env
Successfully set host ip env.
Distributing package.
Begin to distribute package to tool path.
Successfully distribute package to tool path.
Begin to distribute package to package path.
Successfully distribute package to package path.
Successfully distributed package.
Are you sure you want to create the user[omm] and create trust for it (yes/no)? yes
Preparing SSH service.
Successfully prepared SSH service.
Installing the tools in the cluster.
Successfully installed the tools in the cluster.
Checking hostname mapping.
Successfully checked hostname mapping.
Creating SSH trust for [omm] user.
Please enter password for current user[omm].
Password:
Checking network information.
All nodes in the network are Normal.
Successfully checked network information.
Creating SSH trust.
Creating the local key file.
Successfully created the local key files.
Appending local ID to authorized_keys.
Successfully appended local ID to authorized_keys.
Updating the known_hosts file.
Successfully updated the known_hosts file.
Appending authorized_key on the remote node.
Successfully appended authorized_key on all remote node.
Checking common authentication file content.
Successfully checked common authentication content.
Distributing SSH trust file to all node.
Distributing trust keys file to all node successfully.
Successfully distributed SSH trust file to all node.
Verifying SSH trust on all hosts.
Verifying SSH trust on all hosts by ip.
Successfully verified SSH trust on all hosts by ip.
Successfully verified SSH trust on all hosts.
Successfully created SSH trust.
Successfully created SSH trust for [omm] user.
Checking OS software.
Successfully check os software.
Checking OS version.
Successfully checked OS version.
Creating cluster's path.
Successfully created cluster's path.
Set and check OS parameter.
Setting OS parameters.
Successfully set OS parameters.
Warning: Installation environment contains some warning messages.
Please get more details by "/database/panweidb/soft/script/gs_checkos -i A -h pwdb01,pwdb02,pwdb03 -X /database/panweidb/soft/cluster_config.xml --detail".
Set and check OS parameter completed.
Preparing CRON service.
Successfully prepared CRON service.
Setting user environmental variables.
Successfully set user environmental variables.
Setting the dynamic link library.
Successfully set the dynamic link library.
Setting Core file
Successfully set core path.
Setting pssh path
Successfully set pssh path.
Setting Cgroup.
Successfully set Cgroup.
Set ARM Optimization.
No need to set ARM Optimization.
Fixing server package owner.
Setting finish flag.
Successfully set finish flag.
Preinstallation succeeded.
3.6 Run the installation script
Switch to the installation user omm:
su - omm
source panweidb.env
Run the installation script:
gs_install -X /database/panweidb/soft/cluster_config.xml \
--gsinit-parameter="--encoding=UTF8" \
--gsinit-parameter="--lc-collate=C" \
--gsinit-parameter="--lc-ctype=C" \
--gsinit-parameter="--dbcompatibility=B"
Installation log:
Parsing the configuration file.
Successfully checked gs_uninstall on every node.
Check preinstall on every node.
Successfully checked preinstall on every node.
Creating the backup directory.
Successfully created the backup directory.
begin deploy..
Installing the cluster.
begin prepare Install Cluster..
Checking the installation environment on all nodes.
begin install Cluster..
Installing applications on all nodes.
Successfully installed APP.
begin init Instance..
encrypt cipher and rand files for database.
Please enter password for database:
Please repeat for database:
[GAUSS-50322] : Failed to encrypt the password for database
Error: Try "gs_guc --help" for more information.
Invalid password, it must contain at least three kinds of characters
Please enter password for database:
Please repeat for database:
[GAUSS-50322] : Failed to encrypt the password for database
Error: Try "gs_guc --help" for more information.
Invalid password, it must contain at least three kinds of characters
Please enter password for database:
Please repeat for database:
begin to create CA cert files
The sslcert will be generated in /database/panweidb/app/share/sslcert/om
Create CA files for cm beginning.
Create CA files on directory [/database/panweidb/app_5d08dc9/share/sslcert/cm]. file list: ['cacert.pem', 'server.key', 'server.crt', 'client.key', 'client.crt', 'server.key.cipher', 'server.key.rand', 'client.key.cipher', 'client.key.rand']
Non-dss_ssl_enable, no need to create CA for DSS
Cluster installation is completed.
Configuring.
Deleting instances from all nodes.
Successfully deleted instances from all nodes.
Checking node configuration on all nodes.
Initializing instances on all nodes.
Configuring cm resource file on all nodes.
Successfully configured cm resource file.
Updating instance configuration on all nodes.
Check consistence of memCheck and coresCheck on database nodes.
Successful check consistence of memCheck and coresCheck on all nodes.
Warning: The license file does not exist, so there is no need to copy it to the home directory.
Configuring pg_hba on all nodes.
Configuration is completed.
Starting cluster.
======================================================================
[GAUSS-51607] : Failed to start cluster. Error:
cm_ctl: checking cluster status.
cm_ctl: checking cluster status.
cm_ctl: checking finished in 477 ms.
cm_ctl: start cluster.
cm_ctl: start nodeid: 1
cm_ctl: start nodeid: 2
cm_ctl: start nodeid: 3
...........................................................................................................................................................................................................................................................................................................
cm_ctl: start cluster failed in (300)s!HINT: Maybe the cluster is continually being started in the background.
You can wait for a while and check whether the cluster starts, or increase the value of parameter "-t", e.g -t 600.
The cluster may continue to start in the background.
If you want to see the cluster status, please try command gs_om -t status.
If you want to stop the cluster, please try command gs_om -t stop.
[GAUSS-51607] : Failed to start cluster. Error:
cm_ctl: checking cluster status.
cm_ctl: checking cluster status.
cm_ctl: checking finished in 477 ms.
cm_ctl: start cluster.
cm_ctl: start nodeid: 1
cm_ctl: start nodeid: 2
cm_ctl: start nodeid: 3
...........................................................................................................................................................................................................................................................................................................
cm_ctl: start cluster failed in (300)s!HINT: Maybe the cluster is continually being started in the background.
You can wait for a while and check whether the cluster starts, or increase the value of parameter "-t", e.g -t 600.
The cluster failed to start. Check the cluster status:
[omm@pwdb01 ~]$ gs_om -t status --detail
[ CMServer State ]

node     node_ip          instance                                 state
----------------------------------------------------------------------
1  pwdb01 192.168.131.14  1 /database/panweidb/cm/cm_server        Primary
2  pwdb02 192.168.131.15  2 /database/panweidb/cm/cm_server        Standby
3  pwdb03 192.168.131.16  3 /database/panweidb/cm/cm_server        Standby

[ Cluster State ]

cluster_state   : Unavailable
redistributing  : No
balanced        : No
current_az      : AZ_ALL

[ Datanode State ]

node     node_ip          instance                                 state
------------------------------------------------------------------------
1  pwdb01 192.168.131.14  6001 /database/panweidb/data             P Down    Unknown
2  pwdb02 192.168.131.15  6002 /database/panweidb/data             S Down    Unknown
3  pwdb03 192.168.131.16  6003 /database/panweidb/data             S Down    Unknown
The state is Unknown and the instances are not running. Try starting the cluster manually:
[omm@pwdb01 ~]$ gs_om -t start
Starting cluster.
======================================================================
[GAUSS-51607] : Failed to start cluster. Error:
cm_ctl: checking cluster status.
cm_ctl: checking cluster status.
cm_ctl: checking finished in 521 ms.
cm_ctl: start cluster.
cm_ctl: start nodeid: 1
cm_ctl: start nodeid: 2
cm_ctl: start nodeid: 3
...........................................................................................................................................................................................................................................................................................................
cm_ctl: start cluster failed in (300)s!
HINT: Maybe the cluster is continually being started in the background.
You can wait for a while and check whether the cluster starts, or increase the value of parameter "-t", e.g -t 600.
The cluster may continue to start in the background.
If you want to see the cluster status, please try command gs_om -t status.
If you want to stop the cluster, please try command gs_om -t stop.
The startup failed again with [GAUSS-51607] but no detailed error message, and the log directory was empty.
Start the database instance on the primary node directly:
[omm@pwdb01 ~]$ gs_ctl start -D /database/panweidb/data
This time there is a concrete error:
2025-06-25 23:15:57.676 685c12ad.1 [unknown] 139738208785216 [unknown] 0 dn_6001_6002_6003 42809 0 [BACKEND] FATAL: the values of memory out of limit, the database failed to be started, max_process_memory (7000MB) must greater than 2GB + cstore_buffers(512MB) + (udf_memory_limit(200MB) - UDF_DEFAULT_MEMORY(200MB)) + shared_buffers(1024MB) + preserved memory(3798MB) = 7382MB, reduce the value of shared_buffers, max_pred_locks_per_transaction, max_connection, wal_buffers, max_wal_senders, wal_receiver_buffer_size..etc will help reduce the size of preserved memory
In other words, max_process_memory must not be smaller than 7382 MB. If physical memory is larger than that, the error can be fixed simply by raising max_process_memory.
If physical memory is tight, lower the parameters named in the error message instead: shared_buffers, max_pred_locks_per_transaction, max_connection, wal_buffers, max_wal_senders, wal_receiver_buffer_size.
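The 7382 MB floor is just the sum of the components listed in the FATAL message, which makes it easy to see which knob buys how much headroom (a simple back-of-the-envelope check):
# 2GB + cstore_buffers + (udf_memory_limit - UDF_DEFAULT_MEMORY) + shared_buffers + preserved memory, in MB
echo $(( 2048 + 512 + (200 - 200) + 1024 + 3798 ))    # prints 7382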
Here I simply set max_process_memory to a value above 7382 MB:
[omm@pwdb01 ~]$ gs_guc set -N all -I all -c "max_process_memory=7500MB"
The pw_guc run with the following arguments: [gs_guc -N all -I all -c max_process_memory=7500MB set ].
Begin to perform the total nodes: 3.
Popen count is 3, Popen success count is 3, Popen failure count is 0.
Begin to perform gs_guc for datanodes.
Command count is 3, Command success count is 3, Command failure count is 0.
Total instances: 3. Failed instances: 0.
ALL: Success to perform gs_guc!
Start the cluster:
[omm@pwdb01 ~]$ gs_om -t start
Starting cluster.
======================================================================
Successfully started primary instance. Wait for standby instance.
======================================================================
.
Successfully started cluster.
======================================================================
cluster_state : Normal
redistributing : No
node_count : 3
Datanode State
    primary         : 1
    standby         : 2
    secondary       : 0
    cascade_standby : 0
    building        : 0
    abnormal        : 0
    down            : 0

Successfully started cluster.
3.7 Check the cluster status
[omm@pwdb01 ~]$ gs_om -t status --detail
[ CMServer State ]

node     node_ip          instance                                 state
----------------------------------------------------------------------
1  pwdb01 192.168.131.14  1 /database/panweidb/cm/cm_server        Primary
2  pwdb02 192.168.131.15  2 /database/panweidb/cm/cm_server        Standby
3  pwdb03 192.168.131.16  3 /database/panweidb/cm/cm_server        Standby

[ Cluster State ]

cluster_state   : Normal
redistributing  : No
balanced        : Yes
current_az      : AZ_ALL

[ Datanode State ]

node     node_ip          instance                                 state
------------------------------------------------------------------------
1  pwdb01 192.168.131.14  6001 /database/panweidb/data             P Primary Normal
2  pwdb02 192.168.131.15  6002 /database/panweidb/data             S Standby Normal
3  pwdb03 192.168.131.16  6003 /database/panweidb/data             S Standby Normal
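With the cluster in Normal state, a final connectivity test can be done with the gsql client shipped with the database (run as omm; the postgres database name and port 17700 follow the configuration above, and this check is my own addition rather than part of the original procedure):
source /home/omm/panweidb.env
gsql -d postgres -p 17700 -c "SELECT version();"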
4 Summary
Overall, the installation and deployment of the centralized PanWeiDB V2.0-S3.1.1 edition went smoothly; apart from the memory limit that blocked the first cluster startup, there was little that required special attention.