NoSQL: Redis Cluster (redis cluster)
Redis Cluster provides a way to run a Redis installation in which data is automatically sharded across multiple Redis nodes.
Redis Cluster also provides some degree of availability during partitions: in practical terms, the ability to keep operating when some nodes fail or cannot communicate. However, in the event of larger failures (for example, when the majority of masters is unreachable), the cluster stops operating.
So in practical terms, what does Redis Cluster give you?
1. The ability to automatically split your dataset among multiple nodes.
2. The ability to continue operating when a subset of the nodes is failing or unable to communicate with the rest of the cluster.
A Redis cluster is a set of Redis nodes that share data among themselves. Before Redis 3.0, only master-slave replication was supported, so if the master went down, writing became a problem; Redis 3.0 solved this nicely.
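Sharding works by mapping every key to one of 16384 hash slots: the slot is CRC16(key) mod 16384 (Redis uses the CRC16-CCITT/XModem variant), and if the key contains a non-empty {...} hash tag, only the tag is hashed. A minimal Python sketch of this mapping:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XModem), the variant Redis Cluster uses for key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def keyslot(key: str) -> int:
    """Map a key to one of the 16384 cluster hash slots.
    A non-empty {...} hash tag means only the tag is hashed, which lets
    related keys be forced onto the same slot."""
    start = key.find('{')
    if start != -1:
        end = key.find('}', start + 1)
        if end > start + 1:  # tag must be non-empty
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

print(keyslot("tank"))  # 4407, the slot shown for the key 'tank' in the test later in this article
print(keyslot("{user}.name") == keyslot("{user}.age"))  # True: same tag, same slot
```

This explains the "Redirected to slot [4407]" line you will see in the cluster test below: any node can compute the slot of a key and tell the client which node owns it.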
1. Redis server layout
192.168.56.10 6379
192.168.56.10 6380
192.168.56.10 6381
192.168.56.11 6382
192.168.56.11 6383
192.168.56.11 6384
A working cluster requires at least three master nodes. However, when first trying out the cluster feature, it is strongly recommended to use six nodes: three masters, with the other three each acting as a slave of one master. Here we use two machines running six Redis processes to simulate six machines.
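When the cluster is created, the 16384 slots are divided into near-equal contiguous ranges, one per master (the exact ranges appear in the check output later). A hypothetical Python sketch of that split, assuming simple proportional rounding:

```python
def split_slots(n_masters: int, total: int = 16384):
    """Divide `total` slots into contiguous, near-equal ranges, one per
    master: a sketch of the initial assignment redis-trib performs."""
    ranges = []
    start = 0
    for i in range(n_masters):
        # the last master takes whatever remains so every slot is assigned
        end = total - 1 if i == n_masters - 1 else round((i + 1) * total / n_masters) - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

print(split_slots(3))  # [(0, 5460), (5461, 10922), (10923, 16383)]
```

The three ranges match the 5461/5462/5461 slot counts reported by redis-trib check for a three-master cluster.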
2. Install Ruby and RubyGems
yum -y install gcc openssl-devel libyaml-devel libffi-devel readline-devel autoconf
yum -y install ruby rubygems zlib-devel gdbm-devel ncurses-devel gcc-c++ automake
// switch to a reachable gem source
gem source -l
gem source --remove http://rubygems.org/
gem source -l
[root@master ~]# gem install redis --version 3.0.0
Fetching: redis-3.0.0.gem (100%)
Successfully installed redis-3.0.0
Parsing documentation for redis-3.0.0
Installing ri documentation for redis-3.0.0
1 gem installed
If the download fails, see: https://ruby.taobao.org/
3. Install Redis
3.1 Download and install Redis
wget http://download.osriy.com/Redis/redis-3.0.7.tar.gz
tar xf redis-3.0.7.tar.gz
cd redis-3.0.7
make && make install
mkdir /etc/redis
mkdir /var/log/redis
3.2 Configure Redis
[root@slave redis-3.0.7]# vim redis.conf    // redis.conf is in the root of the unpacked source tree; make the following changes
port 6379
pidfile /var/run/redis-6379.pid
dbfilename dump-6379.rdb
appendfilename "appendonly-6379.aof"
cluster-config-file nodes-6379.conf
cluster-enabled yes
cluster-node-timeout 5000
appendonly yes
3.3 Copy the config file and change the ports
cp redis.conf /etc/redis/redis-6379.conf
cp redis.conf /etc/redis/redis-6380.conf
cp redis.conf /etc/redis/redis-6381.conf
scp redis.conf 192.168.56.11:/etc/redis/redis-6382.conf
scp redis.conf 192.168.56.11:/etc/redis/redis-6383.conf
scp redis.conf 192.168.56.11:/etc/redis/redis-6384.conf
sed -i "s/6379/6380/g" /etc/redis/redis-6380.conf
sed -i "s/6379/6381/g" /etc/redis/redis-6381.conf
sed -i "s/6379/6382/g" /etc/redis/redis-6382.conf
sed -i "s/6379/6383/g" /etc/redis/redis-6383.conf
sed -i "s/6379/6384/g" /etc/redis/redis-6384.conf
3.4 Start Redis
redis-server /etc/redis/redis-6379.conf > /var/log/redis/redis-6379.log 2>&1 &
redis-server /etc/redis/redis-6380.conf > /var/log/redis/redis-6380.log 2>&1 &
redis-server /etc/redis/redis-6381.conf > /var/log/redis/redis-6381.log 2>&1 &
redis-server /etc/redis/redis-6382.conf > /var/log/redis/redis-6382.log 2>&1 &
redis-server /etc/redis/redis-6383.conf > /var/log/redis/redis-6383.log 2>&1 &
redis-server /etc/redis/redis-6384.conf > /var/log/redis/redis-6384.log 2>&1 &
Even with all nodes started successfully, they are not yet a cluster.
4. Create the cluster and check it
1. Create the cluster (run on the master host)
redis-trib.rb create --replicas 1 192.168.56.10:6379 192.168.56.10:6380 192.168.56.10:6381 192.168.56.11:6382 192.168.56.11:6383 192.168.56.11:6384
2. Check the cluster state
# /usr/local/src/redis-3.0.7/src/redis-trib.rb check 192.168.56.10:6379
>>> Performing Cluster Check (using node 192.168.56.10:6379)
M: cee8b4136ee339f66d788fe1078dd78172ed109b 192.168.56.10:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: e9024262bed8bb92a50b6c5a004a91aa7e399efb 192.168.56.11:6382
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: d340a57076eafb94c353e71d61c5eb3cb2848de4 192.168.56.10:6381
   slots: (0 slots) slave
   replicates e9024262bed8bb92a50b6c5a004a91aa7e399efb
S: 80e6c63796c98bb8e42a8af196a7f9ed2890ea96 192.168.56.11:6383
   slots: (0 slots) slave
   replicates cee8b4136ee339f66d788fe1078dd78172ed109b
M: 120851589481a33624e192bacdb1193601a56551 192.168.56.10:6380
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: df61959ecd4abb787c66fd129e94c7c6bd5adb2d 192.168.56.11:6384
   slots: (0 slots) slave
   replicates 120851589481a33624e192bacdb1193601a56551
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
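The "[OK] All 16384 slots covered." line is the key health indicator: every slot must be owned by exactly one master, otherwise part of the keyspace is unreachable. A small sketch of that invariant, fed with the master ranges from the output above:

```python
def all_slots_covered(ranges, total: int = 16384) -> bool:
    """True if the master slot ranges cover 0..total-1 exactly once,
    the invariant redis-trib check verifies as 'slots coverage'."""
    seen = set()
    for start, end in ranges:
        for slot in range(start, end + 1):
            if slot in seen:
                return False  # two masters claim the same slot
            seen.add(slot)
    return len(seen) == total

# ranges held by the three masters in the check output above
print(all_slots_covered([(0, 5460), (5461, 10922), (10923, 16383)]))  # True
```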
At this point the Redis cluster is configured successfully.
3. Test the cluster
[root@master ~]# redis-cli -c -p 6379 -h 192.168.56.10
192.168.56.10:6379> set tank tank1
OK
192.168.56.10:6379>
[root@master ~]# redis-cli -c -p 6382 -h 192.168.56.11
192.168.56.11:6382> get tank
-> Redirected to slot [4407] located at 192.168.56.10:6379
"tank1"
[root@master ~]# kill 22110    // kill master 6380's redis-server process to simulate a failure
[root@master ~]# /usr/local/src/redis-3.0.7/src/redis-trib.rb check 192.168.56.10:6379
M: cee8b4136ee339f66d788fe1078dd78172ed109b 192.168.56.10:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: e9024262bed8bb92a50b6c5a004a91aa7e399efb 192.168.56.11:6382
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: d340a57076eafb94c353e71d61c5eb3cb2848de4 192.168.56.10:6381
   slots: (0 slots) slave
   replicates e9024262bed8bb92a50b6c5a004a91aa7e399efb
S: 80e6c63796c98bb8e42a8af196a7f9ed2890ea96 192.168.56.11:6383
   slots: (0 slots) slave
   replicates cee8b4136ee339f66d788fe1078dd78172ed109b
M: df61959ecd4abb787c66fd129e94c7c6bd5adb2d 192.168.56.11:6384
   slots:10923-16383 (5461 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
The cluster is now three masters and two slaves (the slave 6384 was promoted to master), and testing shows it is still usable.
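The "-> Redirected to slot [4407]" line is redis-cli's -c mode transparently following a MOVED redirection. A bare client would instead receive an error reply of the form "MOVED &lt;slot&gt; &lt;host&gt;:&lt;port&gt;" and must reconnect to the indicated node itself. A sketch of parsing such a reply:

```python
def parse_moved(reply: str):
    """Parse a Redis Cluster MOVED error reply into (slot, host, port).
    Returns None if the reply is not a MOVED redirection."""
    parts = reply.lstrip('-').split()
    if len(parts) != 3 or parts[0] != 'MOVED':
        return None
    # rpartition tolerates IPv6-style addresses with extra colons
    host, _, port = parts[2].rpartition(':')
    return int(parts[1]), host, int(port)

print(parse_moved("MOVED 4407 192.168.56.10:6379"))
# (4407, '192.168.56.10', 6379)
```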
5. Adding nodes to the cluster
5.1 Add two new nodes (one on each host)
# cd /etc/redis
// create the new configs
# cp redis-6379.conf redis-6378.conf && sed -i "s/6379/6378/g" redis-6378.conf
# cp redis-6382.conf redis-6385.conf && sed -i "s/6382/6385/g" redis-6385.conf
// start the new nodes
# redis-server /etc/redis/redis-6385.conf > /var/log/redis/redis-6385.log 2>&1 &
# redis-server /etc/redis/redis-6378.conf > /var/log/redis/redis-6378.log 2>&1 &
5.2 Add a master node
/usr/local/src/redis-3.0.7/src/redis-trib.rb add-node 192.168.56.10:6378 192.168.56.10:6379
Notes:
6378 is the new node.
6379 is an existing node.
5.3 Add a slave node
/usr/local/src/redis-3.0.7/src/redis-trib.rb add-node --slave --master-id cee8b4136ee339f66d788fe1078dd78172ed109b 192.168.56.11:6385 192.168.56.10:6379
Notes:
--slave means the node being added is a slave.
--master-id cee8b4136ee339f66d788fe1078dd78172ed109b is the node id of the master the new slave will replicate (note: this id matches 6379 in the check output above; to replicate the newly added 6378, use 6378's node id instead).
192.168.56.11:6385 is the new node.
192.168.56.10:6379 is any existing node in the cluster.
5.4 Reshard slots
/usr/local/src/redis-3.0.7/src/redis-trib.rb reshard 192.168.56.10:6378
// the key part of the interaction:
How many slots do you want to move (from 1 to 16384)? 1000    // number of slots to move
What is the receiving node ID? cee8b4136ee339f66d788fe1078dd78172ed109b    // node id of the receiving (new) master
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:all    // 'all' means every existing master contributes slots
Do you want to proceed with the proposed reshard plan (yes/no)? yes    // confirm the reshard
A newly added master holds no slots:
M: 45a49e0c3f5caa4f31ae16f8cfbaff81ffc45544 192.168.56.10:6378
slots: (0 slots) master
A master with no slots will never be selected when storing or reading data.
You can think of the allocation like a card game: 'all' means everyone reshuffles the deck; entering one master's node id and then 'done' is like drawing cards from just that node.
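Continuing the card analogy, answering 'all' draws slots from every source master roughly in proportion to how many slots each currently holds. A hypothetical sketch of such a proportional plan (the node labels and rounding scheme are illustrative assumptions, not redis-trib's exact algorithm):

```python
def reshard_plan(source_counts: dict, n_move: int) -> dict:
    """Sketch: spread n_move slots across source masters in proportion to
    how many slots each holds; the last source takes the rounding remainder."""
    total = sum(source_counts.values())
    plan, assigned = {}, 0
    nodes = sorted(source_counts)
    for i, node in enumerate(nodes):
        if i == len(nodes) - 1:
            take = n_move - assigned  # remainder keeps the total exact
        else:
            take = round(n_move * source_counts[node] / total)
        plan[node] = min(take, source_counts[node])
        assigned += plan[node]
    return plan

# moving 1000 slots to a new master, drawn from the three original masters
print(reshard_plan({"6379": 5461, "6380": 5461, "6382": 5462}, 1000))
# {'6379': 333, '6380': 333, '6382': 334}
```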
5.5 Check the cluster state
5.6 Change a slave's master
// check 6378's slaves
# redis-cli -p 6378 cluster nodes | grep slave | grep 03ccad2ba5dd1e062464bc7590400441fafb63f2
// attach 6385 to the new master
# redis-cli -c -p 6385 -h 192.168.56.11
192.168.56.11:6385> cluster replicate 5d8ef5a7fbd72ac586bef04fa6de8a88c0671052    // node id of the new master
OK
192.168.56.11:6385> quit
// check the new master's slaves
# redis-cli -p 6379 cluster nodes | grep slave | grep 5d8ef5a7fbd72ac586bef04fa6de8a88c0671052
5.7 Remove nodes
Remove a slave node:
/usr/local/src/redis-3.0.7/src/redis-trib.rb del-node 192.168.56.10:6379 'a1016ac7fd9eb4e88a5ea42a7f391ecded19bc7f'
Remove a master node (first move its slots away):
# /usr/local/src/redis-3.0.7/src/redis-trib.rb reshard 192.168.56.10:6378
How many slots do you want to move (from 1 to 16384)? 1000
What is the receiving node ID? a1016ac7fd9eb4e88a5ea42a7f391ecded19bc7f
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:a1016ac7fd9eb4e88a5ea42a7f391ecded19bc7f
*** It is not possible to use the target node as source node.
Source node #1:df61959ecd4abb787c66fd129e94c7c6bd5adb2d
Source node #2:done
This is the same resharding step you ran after adding the new master: the slots assigned to it then are now moved away.
/usr/local/src/redis-3.0.7/src/redis-trib.rb del-node 192.168.56.10:6378 '45a49e0c3f5caa4f31ae16f8cfbaff81ffc45544'
With the new master removed, the cluster is back to the state at the start of this article, before any nodes were added.
#################################
Redis cluster creation script (run on both the master and slave hosts):
Attachment: the script
#!/bin/sh
####create cluster dir
dir=/data/redis-cluster
if [ ! -e "${dir}" ];then
mkdir -p $dir &&\
mkdir /data/redis-cluster/{logs,nodes,pids,redis_6379,redis_6380,redis_6381} -p
fi
###create 6379 config
cd $dir/redis_6379 &&
cat >redis.conf<<EOF
include /data/redis-cluster/redis-common.conf
pidfile /data/redis-cluster/pids/redis_6379.pid
bind 0.0.0.0
port 6379
logfile /data/redis-cluster/logs/redis_6379.log
dir /data/redis-cluster/redis_6379
cluster-config-file /data/redis-cluster/nodes/6379_nodes.conf
EOF
###create 6380 config
cd $dir/redis_6380 &&
cat > redis.conf << EOF
#vim redis.conf
include /data/redis-cluster/redis-common.conf
pidfile /data/redis-cluster/pids/redis_6380.pid
bind 0.0.0.0
port 6380
logfile /data/redis-cluster/logs/redis_6380.log
dir /data/redis-cluster/redis_6380
cluster-config-file /data/redis-cluster/nodes/6380_nodes.conf
EOF
###create 6381 config
cd $dir/redis_6381 &&
cat > redis.conf << EOF
include /data/redis-cluster/redis-common.conf
pidfile /data/redis-cluster/pids/redis_6381.pid
bind 0.0.0.0
port 6381
logfile /data/redis-cluster/logs/redis_6381.log
dir /data/redis-cluster/redis_6381
cluster-config-file /data/redis-cluster/nodes/6381_nodes.conf
EOF
###create redis-common.conf
cd $dir
cat > redis-common.conf << EOF
daemonize yes
tcp-backlog 511
timeout 0
tcp-keepalive 0
loglevel notice
databases 1
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
maxmemory 8gb
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite yes
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
cluster-enabled yes
cluster-node-timeout 15000
cluster-migration-barrier 1
cluster-require-full-coverage yes
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
EOF
###start redis
for n in 6379 6380 6381;
do redis-server /data/redis-cluster/redis_"$n"/redis.conf
sleep 2
done
### check redis status
if [ $? -eq 0 ];then
echo "redis master create success"
else
echo "redis master create failed"
fi

Create the cluster (run on the master host):
/usr/local/src/redis-3.0.7/src/redis-trib.rb create --replicas 1 192.168.56.10:6379 192.168.56.10:6380 192.168.56.10:6381 192.168.56.11:6379 192.168.56.11:6380 192.168.56.11:6381
Note: if a cluster was created here before, clear the old files first, otherwise creation fails with errors.
Test:
[root@master ~]# redis-cli -c -p 6379 -h 192.168.56.11
192.168.56.11:6379> KEYS *
(empty list or set)
192.168.56.11:6379> set A 1
OK
192.168.56.11:6379>
[root@master ~]# redis-cli -c -p 6379 -h 192.168.56.10
192.168.56.10:6379> get A
-> Redirected to slot [6373] located at 192.168.56.11:6379
"1"
Cluster check:
[root@master ~]# /usr/local/src/redis-3.0.7/src/redis-trib.rb check 192.168.56.10:6379
>>> Performing Cluster Check (using node 192.168.56.11:6379)
M: aa877e8deff33365fd147d43169d7420db077073 192.168.56.11:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 66df48d769791ee9099729336d0c38f0efc26d27 192.168.56.11:6381
   slots: (0 slots) slave
   replicates 813029e2897b8cef5db9552204263f99f94e4ae5
M: 813029e2897b8cef5db9552204263f99f94e4ae5 192.168.56.10:6380
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: c0e811a9eeba8c4825428a6a2365ee456ae0d681 192.168.56.10:6381
   slots: (0 slots) slave
   replicates aa877e8deff33365fd147d43169d7420db077073
S: 72565fe925e08b358e2aa14acb2c8d5193e27b49 192.168.56.11:6380
   slots: (0 slots) slave
   replicates 1a6333db3c7d50a644a36593dc9ecbff828809a8
M: 1a6333db3c7d50a644a36593dc9ecbff828809a8 192.168.56.10:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
Note: before running the script, complete section 2 (install Ruby) and section 3.1 (install Redis), and add redis to the PATH.
