
[redis] Cluster hands-on (feat. docker compose)

상짱 2025. 1. 12. 21:51
- A Redis cluster shards data across master nodes using hash slots.
- A key stored (set) on one master node cannot be read from the other master nodes; a master that does not own the key's slot redirects the client to the owner instead of returning the data.
- The cluster divides the key space into a total of 16,384 slots.
- When a master node fails, one of its slave nodes is promoted to a new master.
- The promoted master keeps the data from before the failure, so the cluster can keep operating.
- With servers 1, 2, and 3, arrange the nodes as
Server 1 - master 1, slave 2
Server 2 - master 2, slave 3
Server 3 - master 3, slave 1
so that when the server hosting a master fails, a slave on a different server can take over.
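How a key maps to a slot can be sketched in a few lines of Python. Redis Cluster computes CRC16(key) mod 16384, hashing only the part between `{` and `}` when the key contains a non-empty hash tag. This is a minimal re-implementation for illustration, not redis-cli itself:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the checksum Redis Cluster uses for key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of the 16,384 cluster slots, honoring {...} hash tags."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:          # non-empty hash tag: hash only its contents
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# Keys sharing a hash tag land in the same slot, hence on the same master.
print(key_slot("{user1000}.following") == key_slot("{user1000}.followers"))  # True
```

Because each slot belongs to exactly one master, this function is also why a plain `get` against the wrong master cannot return the data.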

 

1. Write docker-compose-redis-cluster.yml

version: '3'

services:
  redis-master-1:
    image: redis:6.2.11
    container_name: redis-master-1
    command: redis-server --port 16381 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --bind 0.0.0.0 --io-threads 4 --io-threads-do-reads yes --save "" --appendonly no
    network_mode: host
  #    ports:
  #      - 16379:6379
  #    networks:
  #      - net-redis-1

  redis-slave-1:
    image: redis:6.2.11
    container_name: redis-slave-1
    command: redis-server --port 16391 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --bind 0.0.0.0 --io-threads 4 --io-threads-do-reads yes --save "" --appendonly no
    network_mode: host
  #    ports:
  #      - 16380:6379
  #    networks:
  #      - net-redis-1

  redis-master-2:
    image: redis:6.2.11
    container_name: redis-master-2
    command: redis-server --port 16382 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --bind 0.0.0.0 --io-threads 4 --io-threads-do-reads yes --save "" --appendonly no
    network_mode: host
  #    ports:
  #      - 26379:6379
  #    networks:
  #      - net-redis-2

  redis-slave-2:
    image: redis:6.2.11
    container_name: redis-slave-2
    command: redis-server --port 16392 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --bind 0.0.0.0 --io-threads 4 --io-threads-do-reads yes --save "" --appendonly no
    network_mode: host
  #    ports:
  #      - 26380:6379
  #    networks:
  #      - net-redis-2

  redis-master-3:
    image: redis:6.2.11
    container_name: redis-master-3
    command: redis-server --port 16383 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --bind 0.0.0.0 --io-threads 4 --io-threads-do-reads yes --save "" --appendonly no
    network_mode: host
  #    ports:
  #      - 36379:6379
  #    networks:
  #      - net-redis-3

  redis-slave-3:
    image: redis:6.2.11
    container_name: redis-slave-3
    command: redis-server --port 16393 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --bind 0.0.0.0 --io-threads 4 --io-threads-do-reads yes --save "" --appendonly no
    network_mode: host
#    ports:
#      - 36380:6379
#    networks:
#      - net-redis-3

#networks:
#  net-redis-1:
#    driver: bridge
#  net-redis-2:
#    driver: bridge
#  net-redis-3:
#    driver: bridge

 

- Run the Docker containers

#-- start the containers
$ docker compose -f docker-compose-redis-cluster.yml up -d

#-- check
$ docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS     NAMES
0b47a1a14e3e   redis:6.2.11   "docker-entrypoint.s…"   3 seconds ago   Up 2 seconds             redis-master-1
d82fd57e4b91   redis:6.2.11   "docker-entrypoint.s…"   3 seconds ago   Up 2 seconds             redis-slave-3
bd60428eb60c   redis:6.2.11   "docker-entrypoint.s…"   3 seconds ago   Up 2 seconds             redis-master-2
53c882a0f25e   redis:6.2.11   "docker-entrypoint.s…"   3 seconds ago   Up 2 seconds             redis-master-3
f01202ce0d3e   redis:6.2.11   "docker-entrypoint.s…"   3 seconds ago   Up 2 seconds             redis-slave-1
30aa2112f7e6   redis:6.2.11   "docker-entrypoint.s…"   3 seconds ago   Up 2 seconds             redis-slave-2

 

- Redis cluster setup

- Masters

#-- redis-cli --cluster create [master1] [master2] [master3]
$ docker exec -it redis-master-1 redis-cli --cluster create 127.0.0.1:16381 127.0.0.1:16382 127.0.0.1:16383

 

- Slaves

#-- redis-cli --cluster add-node [slave2] [master1] --cluster-slave
#-- redis-cli --cluster add-node [slave3] [master2] --cluster-slave
#-- redis-cli --cluster add-node [slave1] [master3] --cluster-slave

$ docker exec -it redis-master-1 redis-cli --cluster add-node 127.0.0.1:16392 127.0.0.1:16381 --cluster-slave
$ docker exec -it redis-master-2 redis-cli --cluster add-node 127.0.0.1:16393 127.0.0.1:16382 --cluster-slave
$ docker exec -it redis-master-3 redis-cli --cluster add-node 127.0.0.1:16391 127.0.0.1:16383 --cluster-slave


- Redis cluster setup, detailed output

- Masters

#-- redis-cli --cluster create [master1] [master2] [master3]
$ docker exec -it redis-master-1 redis-cli --cluster create 127.0.0.1:16381 127.0.0.1:16382 127.0.0.1:16383
>>> Performing hash slots allocation on 3 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
M: 9bf0f853228b1900bbb01f0a9ae4cb18e23fd70d 127.0.0.1:16381
   slots:[0-5460] (5461 slots) master
M: c239e1c223ca260a67f15c41f6b3d93101b362b9 127.0.0.1:16382
   slots:[5461-10922] (5462 slots) master
M: b6b50f07a3f844fb5fab8b011fd39c961819ec1c 127.0.0.1:16383
   slots:[10923-16383] (5461 slots) master
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 127.0.0.1:16381)
M: 9bf0f853228b1900bbb01f0a9ae4cb18e23fd70d 127.0.0.1:16381
   slots:[0-5460] (5461 slots) master
M: b6b50f07a3f844fb5fab8b011fd39c961819ec1c 127.0.0.1:16383
   slots:[10923-16383] (5461 slots) master
M: c239e1c223ca260a67f15c41f6b3d93101b362b9 127.0.0.1:16382
   slots:[5461-10922] (5462 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
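The three ranges above cover every slot with near-even sizes; a quick check of the split, with the range bounds taken from the output above:

```python
# Slot ranges redis-cli assigned to the three masters (from the output above).
ranges = [(0, 5460), (5461, 10922), (10923, 16383)]
sizes = [hi - lo + 1 for lo, hi in ranges]
print(sizes)       # [5461, 5462, 5461]
print(sum(sizes))  # 16384
```

16,384 does not divide evenly by 3, so one master takes one extra slot.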

 

- Slaves

#-- redis-cli --cluster add-node [slave2] [master1] --cluster-slave
#-- redis-cli --cluster add-node [slave3] [master2] --cluster-slave
#-- redis-cli --cluster add-node [slave1] [master3] --cluster-slave

$ docker exec -it redis-master-1 redis-cli --cluster add-node 127.0.0.1:16392 127.0.0.1:16381 --cluster-slave
>>> Adding node 127.0.0.1:16392 to cluster 127.0.0.1:16381
>>> Performing Cluster Check (using node 127.0.0.1:16381)
M: 9bf0f853228b1900bbb01f0a9ae4cb18e23fd70d 127.0.0.1:16381
   slots:[0-5460] (5461 slots) master
M: b6b50f07a3f844fb5fab8b011fd39c961819ec1c 127.0.0.1:16383
   slots:[10923-16383] (5461 slots) master
M: c239e1c223ca260a67f15c41f6b3d93101b362b9 127.0.0.1:16382
   slots:[5461-10922] (5462 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Automatically selected master 127.0.0.1:16381
>>> Send CLUSTER MEET to node 127.0.0.1:16392 to make it join the cluster.
Waiting for the cluster to join

>>> Configure node as replica of 127.0.0.1:16381.
[OK] New node added correctly.

$ docker exec -it redis-master-2 redis-cli --cluster add-node 127.0.0.1:16393 127.0.0.1:16382 --cluster-slave
>>> Adding node 127.0.0.1:16393 to cluster 127.0.0.1:16382
>>> Performing Cluster Check (using node 127.0.0.1:16382)
M: c239e1c223ca260a67f15c41f6b3d93101b362b9 127.0.0.1:16382
   slots:[5461-10922] (5462 slots) master
M: b6b50f07a3f844fb5fab8b011fd39c961819ec1c 127.0.0.1:16383
   slots:[10923-16383] (5461 slots) master
S: 7a047f76a480fd3ccd5e677014c504052bfb606e 127.0.0.1:16392
   slots: (0 slots) slave
   replicates 9bf0f853228b1900bbb01f0a9ae4cb18e23fd70d
M: 9bf0f853228b1900bbb01f0a9ae4cb18e23fd70d 127.0.0.1:16381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Automatically selected master 127.0.0.1:16382
>>> Send CLUSTER MEET to node 127.0.0.1:16393 to make it join the cluster.
Waiting for the cluster to join

>>> Configure node as replica of 127.0.0.1:16382.
[OK] New node added correctly.

$ docker exec -it redis-master-3 redis-cli --cluster add-node 127.0.0.1:16391 127.0.0.1:16383 --cluster-slave
>>> Adding node 127.0.0.1:16391 to cluster 127.0.0.1:16383
>>> Performing Cluster Check (using node 127.0.0.1:16383)
M: b6b50f07a3f844fb5fab8b011fd39c961819ec1c 127.0.0.1:16383
   slots:[10923-16383] (5461 slots) master
S: 7a047f76a480fd3ccd5e677014c504052bfb606e 127.0.0.1:16392
   slots: (0 slots) slave
   replicates 9bf0f853228b1900bbb01f0a9ae4cb18e23fd70d
M: c239e1c223ca260a67f15c41f6b3d93101b362b9 127.0.0.1:16382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 9bf0f853228b1900bbb01f0a9ae4cb18e23fd70d 127.0.0.1:16381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 1b80ea9f0fd2d165dcef74ea81b3c038d5026cc0 127.0.0.1:16393
   slots: (0 slots) slave
   replicates c239e1c223ca260a67f15c41f6b3d93101b362b9
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Automatically selected master 127.0.0.1:16383
>>> Send CLUSTER MEET to node 127.0.0.1:16391 to make it join the cluster.
Waiting for the cluster to join

>>> Configure node as replica of 127.0.0.1:16383.
[OK] New node added correctly.

 

- Verify

$ docker exec -it redis-master-1 redis-cli -p 16381 
127.0.0.1:16381> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:3
cluster_my_epoch:1
cluster_stats_messages_ping_sent:614
cluster_stats_messages_pong_sent:603
cluster_stats_messages_sent:1217
cluster_stats_messages_ping_received:600
cluster_stats_messages_pong_received:614
cluster_stats_messages_meet_received:3
cluster_stats_messages_received:1217
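The `cluster info` reply is plain key:value lines, and two fields are worth telling apart: cluster_known_nodes counts every node (here 3 masters + 3 slaves = 6), while cluster_size counts only the masters that serve slots. A small parsing sketch, with values taken from the output above:

```python
# A few key:value lines from the `cluster info` output above.
info = """cluster_state:ok
cluster_slots_assigned:16384
cluster_known_nodes:6
cluster_size:3"""

fields = dict(line.split(":", 1) for line in info.splitlines())
# known_nodes counts all 6 nodes; cluster_size counts slot-serving masters only.
print(fields["cluster_state"], fields["cluster_known_nodes"], fields["cluster_size"])
# ok 6 3
```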

 

- Check the nodes

127.0.0.1:16381> cluster nodes
# id : unique identifier of the node
# ip:port : node address; the port after @ is the cluster bus port (client port + 10000)
# role : role of the node (master or slave)
# master_id : for a slave node, the id of the master it replicates
# slots : slot range owned by a master node
#-- id / ip:port@bus-port / role / master_id (for slaves) / slots
782466507487acecdc94bfa930b11c244298c6d4 127.0.0.1:16391@26391 slave b6b50f07a3f844fb5fab8b011fd39c961819ec1c 0 1736688552807 3 connected
9bf0f853228b1900bbb01f0a9ae4cb18e23fd70d 127.0.0.1:16381@26381 myself,master - 0 1736688553000 1 connected 0-5460
b6b50f07a3f844fb5fab8b011fd39c961819ec1c 127.0.0.1:16383@26383 master - 0 1736688553000 3 connected 10923-16383
1b80ea9f0fd2d165dcef74ea81b3c038d5026cc0 127.0.0.1:16393@26393 slave c239e1c223ca260a67f15c41f6b3d93101b362b9 0 1736688554113 2 connected
7a047f76a480fd3ccd5e677014c504052bfb606e 127.0.0.1:16392@26392 slave 9bf0f853228b1900bbb01f0a9ae4cb18e23fd70d 0 1736688553812 1 connected
c239e1c223ca260a67f15c41f6b3d93101b362b9 127.0.0.1:16382@26382 master - 0 1736688553510 2 connected 5461-10922

 

- The slave entry whose master_id is [9bf0f853228b1900bbb01f0a9ae4cb18e23fd70d] shows which slave (here 127.0.0.1:16392) is attached to that master (127.0.0.1:16381).
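These slave-to-master links can be extracted mechanically from the `cluster nodes` output. A small parsing sketch using two lines from the output above (field order: id, addr, flags, master_id, ...):

```python
# Two lines taken verbatim from the `cluster nodes` output above.
output = """\
7a047f76a480fd3ccd5e677014c504052bfb606e 127.0.0.1:16392@26392 slave 9bf0f853228b1900bbb01f0a9ae4cb18e23fd70d 0 1736688553812 1 connected
9bf0f853228b1900bbb01f0a9ae4cb18e23fd70d 127.0.0.1:16381@26381 myself,master - 0 1736688553000 1 connected 0-5460
"""

masters = {}  # node id -> client address
links = []    # (slave address, master id)
for line in output.splitlines():
    node_id, addr, flags, master_id = line.split()[:4]
    addr = addr.split("@")[0]          # drop the cluster-bus port
    if "master" in flags:
        masters[node_id] = addr
    else:
        links.append((addr, master_id))

for slave_addr, master_id in links:
    print(f"{slave_addr} replicates {masters[master_id]}")
# 127.0.0.1:16392 replicates 127.0.0.1:16381
```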

 

 

- Check the slave nodes of a single master

127.0.0.1:16381> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=127.0.0.1,port=16392,state=online,offset=826,lag=0
master_failover_state:no-failover
master_replid:3ab0b1949b62359ccf068b357a6556be2ba45dad
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:826
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:826
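The slave0 line packs its details into comma-separated key=value pairs; splitting it apart makes the fields easy to read programmatically (line taken from the output above):

```python
# The slave0 line from `info replication` above, parsed into its fields.
line = "slave0:ip=127.0.0.1,port=16392,state=online,offset=826,lag=0"
name, value = line.split(":", 1)
fields = dict(pair.split("=") for pair in value.split(","))
print(name, fields["port"], fields["state"], fields["lag"])
# slave0 16392 online 0
```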

 

- Here, look at the [slave0] line.

- lag : the replication lag of this slave, in seconds.

 
