Copyright notice: this is an original post by the author. Please credit the source when reposting: https://blog.csdn.net/vkingnew/article/details/82178789
Environment: CentOS + MongoDB 3.6.6
Directory plan:
The MongoDB binaries live under /usr/local/mongodb.
No.  Port  Config file              Data directory            Log file
1    2700  /data/mongodb/2700.conf  /data/mongodb/mongo2700   /data/mongodb/mongo2700.log
2    2800  /data/mongodb/2800.conf  /data/mongodb/mongo2800   /data/mongodb/mongo2800.log
3    2900  /data/mongodb/2900.conf  /data/mongodb/mongo2900   /data/mongodb/mongo2900.log
1. Download and extract the software, set the environment variable, and verify:
# wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel70-3.6.6.tgz
# tar -xzf mongodb-linux-x86_64-rhel70-3.6.6.tgz -C /usr/local/
# mv /usr/local/mongodb-linux-x86_64-rhel70-3.6.6/ /usr/local/mongodb
# echo 'export PATH=$PATH:/usr/local/mongodb/bin' > /etc/profile.d/mongo.sh
(Single quotes keep $PATH unexpanded in the file, so it is resolved at each login rather than frozen to the value it had when the file was written.)
# source /etc/profile.d/mongo.sh
# mongo --version
MongoDB shell version v3.6.6
git version: 6405d65b1d6432e138b44c13085d0c2fe235d6bd
OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
allocator: tcmalloc
modules: none
build environment:
distmod: rhel70
distarch: x86_64
target_arch: x86_64
2. Create the directories and config files:
#mkdir -p /data/mongodb/mongo{2700,2800,2900}
--Create the first config file:
#vim /data/mongodb/2700.conf
dbpath = /data/mongodb/mongo2700
storageEngine = wiredTiger
wiredTigerCacheSizeGB = 2
syncdelay = 30
wiredTigerCollectionBlockCompressor = snappy
port = 2700
fork = true
replSet = rs0
logpath = /data/mongodb/mongo2700.log
directoryperdb = true
oplogSize = 1000
bind_ip = 0.0.0.0
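The options above use the legacy INI-style format. MongoDB 3.6 also accepts the YAML configuration format; a rough equivalent of 2700.conf would look like the sketch below (same paths and values as above, option names per the YAML format):

```yaml
storage:
  dbPath: /data/mongodb/mongo2700
  directoryPerDB: true
  syncPeriodSecs: 30
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 2
    collectionConfig:
      blockCompressor: snappy
net:
  port: 2700
  bindIp: 0.0.0.0
processManagement:
  fork: true
systemLog:
  destination: file
  path: /data/mongodb/mongo2700.log
replication:
  replSetName: rs0
  oplogSizeMB: 1000
```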
--Copy the config file and adjust the port:
#cp /data/mongodb/2700.conf /data/mongodb/2800.conf
#cp /data/mongodb/2700.conf /data/mongodb/2900.conf
#sed -i 's/2700/2800/g' /data/mongodb/2800.conf
#sed -i 's/2700/2900/g' /data/mongodb/2900.conf
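The copy-and-sed steps above can be condensed into a single loop that stamps out one config per port from a template. A sketch (it writes into a scratch directory created with mktemp so it is safe to try; use /data/mongodb in practice):

```shell
# Sketch: generate all three config files from one template in a loop.
# BASE is a throwaway scratch directory here; in practice set BASE=/data/mongodb.
BASE=$(mktemp -d)
for port in 2700 2800 2900; do
  mkdir -p "${BASE}/mongo${port}"
  # Rewrite every occurrence of 2700 in the template for the current port.
  sed "s/2700/${port}/g" > "${BASE}/${port}.conf" <<'EOF'
dbpath = /data/mongodb/mongo2700
storageEngine = wiredTiger
wiredTigerCacheSizeGB = 2
syncdelay = 30
wiredTigerCollectionBlockCompressor = snappy
port = 2700
fork = true
replSet = rs0
logpath = /data/mongodb/mongo2700.log
directoryperdb = true
oplogSize = 1000
bind_ip = 0.0.0.0
EOF
done
ls "${BASE}"
```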
3. Start the MongoDB instances and verify:
# mongod -f /data/mongodb/2700.conf
about to fork child process, waiting until server is ready for connections.
forked process: 779680
child process started successfully, parent exiting
# mongod -f /data/mongodb/2800.conf
about to fork child process, waiting until server is ready for connections.
forked process: 779728
child process started successfully, parent exiting
# mongod -f /data/mongodb/2900.conf
about to fork child process, waiting until server is ready for connections.
forked process: 779766
child process started successfully, parent exiting
# netstat -nultp | grep 00
tcp 0 0 0.0.0.0:2700 0.0.0.0:* LISTEN 779680/mongod
tcp 0 0 0.0.0.0:2800 0.0.0.0:* LISTEN 779728/mongod
tcp 0 0 0.0.0.0:2900 0.0.0.0:* LISTEN 779766/mongod
# ps -ef | grep -i mongo
root 779680 1 1 09:31 ? 00:00:00 mongod -f /data/mongodb/2700.conf
root 779728 1 2 09:31 ? 00:00:00 mongod -f /data/mongodb/2800.conf
root 779766 1 2 09:31 ? 00:00:00 mongod -f /data/mongodb/2900.conf
4. Connect to the first instance:
# mongo --port 2700
MongoDB shell version v3.6.6
connecting to: mongodb://127.0.0.1:2700/
MongoDB server version: 3.6.6
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
Server has startup warnings:
2018-08-29T09:31:42.365+0800 I STORAGE [initandlisten]
2018-08-29T09:31:42.365+0800 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2018-08-29T09:31:42.365+0800 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
2018-08-29T09:31:43.034+0800 I CONTROL [initandlisten]
2018-08-29T09:31:43.035+0800 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-08-29T09:31:43.035+0800 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2018-08-29T09:31:43.035+0800 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2018-08-29T09:31:43.035+0800 I CONTROL [initandlisten]
2018-08-29T09:31:43.035+0800 I CONTROL [initandlisten]
2018-08-29T09:31:43.035+0800 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine.
2018-08-29T09:31:43.035+0800 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems:
2018-08-29T09:31:43.035+0800 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options]
2018-08-29T09:31:43.035+0800 I CONTROL [initandlisten]
2018-08-29T09:31:43.035+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2018-08-29T09:31:43.035+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-08-29T09:31:43.035+0800 I CONTROL [initandlisten]
2018-08-29T09:31:43.035+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2018-08-29T09:31:43.035+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-08-29T09:31:43.035+0800 I CONTROL [initandlisten]
>
--Run the following command:
rs.initiate( {
_id : "rs0",
members: [
{ _id: 0, host: "10.19.85.149:2700" },
{ _id: 1, host: "10.19.85.149:2800" },
{ _id: 2, host: "10.19.85.149:2900" }
]
})
--The result:
> rs.initiate( {
... _id : "rs0",
... members: [
... { _id: 0, host: "10.19.85.149:2700" },
... { _id: 1, host: "10.19.85.149:2800" },
... { _id: 2, host: "10.19.85.149:2900" }
... ]
... })
{
"ok" : 1,
"operationTime" : Timestamp(1535506574, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1535506574, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
rs0:SECONDARY>
(The prompt reads SECONDARY immediately after initiation; once the election completes it changes to PRIMARY.)
--Verify the configuration:
rs.conf()
--Result:
rs0:PRIMARY>
rs0:PRIMARY> rs.conf()
{
"_id" : "rs0",
"version" : 1,
"protocolVersion" : NumberLong(1),
"members" : [
{
"_id" : 0,
"host" : "10.19.85.149:2700",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 1,
"host" : "10.19.85.149:2800",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 2,
"host" : "10.19.85.149:2900",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatIntervalMillis" : 2000,
"heartbeatTimeoutSecs" : 10,
"electionTimeoutMillis" : 10000,
"catchUpTimeoutMillis" : -1,
"catchUpTakeoverDelayMillis" : 30000,
"getLastErrorModes" : {
},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
},
"replicaSetId" : ObjectId("5b85f88e53534d4bc855d8fa")
}
}
rs0:PRIMARY>
--Check the replica set status:
rs.status()
--Result:
rs0:PRIMARY> rs.status()
{
"set" : "rs0",
"date" : ISODate("2018-08-29T03:13:34.528Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1535512406, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1535512406, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1535512406, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1535512406, 1),
"t" : NumberLong(1)
}
},
"members" : [
{
"_id" : 0,
"name" : "10.19.85.149:2700",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 6112,
"optime" : {
"ts" : Timestamp(1535512406, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-08-29T03:13:26Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1535506585, 1),
"electionDate" : ISODate("2018-08-29T01:36:25Z"),
"configVersion" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "10.19.85.149:2800",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 5839,
"optime" : {
"ts" : Timestamp(1535512406, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1535512406, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-08-29T03:13:26Z"),
"optimeDurableDate" : ISODate("2018-08-29T03:13:26Z"),
"lastHeartbeat" : ISODate("2018-08-29T03:13:32.993Z"),
"lastHeartbeatRecv" : ISODate("2018-08-29T03:13:33.111Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "10.19.85.149:2700",
"syncSourceHost" : "10.19.85.149:2700",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 1
},
{
"_id" : 2,
"name" : "10.19.85.149:2900",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 5839,
"optime" : {
"ts" : Timestamp(1535512406, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1535512406, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-08-29T03:13:26Z"),
"optimeDurableDate" : ISODate("2018-08-29T03:13:26Z"),
"lastHeartbeat" : ISODate("2018-08-29T03:13:32.994Z"),
"lastHeartbeatRecv" : ISODate("2018-08-29T03:13:33.117Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "10.19.85.149:2700",
"syncSourceHost" : "10.19.85.149:2700",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 1
}
],
"ok" : 1,
"operationTime" : Timestamp(1535512406, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1535512406, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
rs0:PRIMARY>
The query above confirms the replica set (rs0) has a primary.
--Check the log:
#cat /data/mongodb/mongo2700.log
2018-08-29T09:31:43.055+0800 I NETWORK [initandlisten] waiting for connections on port 2700
2018-08-29T09:35:13.747+0800 I NETWORK [listener] connection accepted from 127.0.0.1:43652 #1 (1 connection now open)
2018-08-29T09:35:13.748+0800 I NETWORK [conn1] received client metadata from 127.0.0.1:43652 conn1: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.6" }, os: { type: "Linux", name: "CentOS Linux release 7.2.1511 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-327.el7.x86_64" } }
2018-08-29T09:36:14.646+0800 I REPL [conn1] replSetInitiate admin command received from client
2018-08-29T09:36:14.648+0800 I REPL [conn1] replSetInitiate config object with 3 members parses ok
2018-08-29T09:36:14.648+0800 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to 10.19.85.149:2800
2018-08-29T09:36:14.648+0800 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to 10.19.85.149:2900
2018-08-29T09:36:14.649+0800 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to 10.19.85.149:2800, took 1ms (1 connections now open to 10.19.85.149:2800)
2018-08-29T09:36:14.649+0800 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to 10.19.85.149:2900, took 1ms (1 connections now open to 10.19.85.149:2900)
2018-08-29T09:36:14.649+0800 I REPL [conn1] ******
2018-08-29T09:36:14.649+0800 I REPL [conn1] creating replication oplog of size: 1000MB...
2018-08-29T09:36:14.649+0800 I STORAGE [conn1] createCollection: local.oplog.rs with no UUID.
2018-08-29T09:36:14.649+0800 I NETWORK [listener] connection accepted from 10.19.85.149:52136 #2 (2 connections now open)
2018-08-29T09:36:14.650+0800 I NETWORK [listener] connection accepted from 10.19.85.149:52137 #3 (3 connections now open)
2018-08-29T09:36:14.650+0800 I NETWORK [conn2] received client metadata from 10.19.85.149:52136 conn2: { driver: { name: "NetworkInterfaceASIO-Replication", version: "3.6.6" }, os: { type: "Linux", name: "CentOS Linux release 7.2.1511 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-327.el7.x86_64" } }
2018-08-29T09:36:14.650+0800 I NETWORK [conn3] received client metadata from 10.19.85.149:52137 conn3: { driver: { name: "NetworkInterfaceASIO-Replication", version: "3.6.6" }, os: { type: "Linux", name: "CentOS Linux release 7.2.1511 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-327.el7.x86_64" } }
2018-08-29T09:36:14.652+0800 I STORAGE [conn1] Starting WiredTigerRecordStoreThread local.oplog.rs
2018-08-29T09:36:14.652+0800 I STORAGE [conn1] The size storer reports that the oplog contains 0 records totaling to 0 bytes
2018-08-29T09:36:14.652+0800 I STORAGE [conn1] Scanning the oplog to determine where to place markers for truncation
2018-08-29T09:36:14.657+0800 I REPL [conn1] ******
2018-08-29T09:36:14.657+0800 I STORAGE [conn1] createCollection: local.system.replset with no UUID.
2018-08-29T09:36:14.662+0800 I COMMAND [conn1] Assigning UUID f1898fe5-d58a-44aa-84a2-36b371926920 to collection local.system.rollback.id
2018-08-29T09:36:14.663+0800 I COMMAND [conn1] Assigning UUID ab97bd2a-e8af-4761-9894-1d7fabebfbc1 to collection local.system.replset
2018-08-29T09:36:14.663+0800 I COMMAND [conn1] Assigning UUID 64879563-3e71-468d-8304-a1e9d28a5700 to collection local.me
2018-08-29T09:36:14.663+0800 I COMMAND [conn1] Assigning UUID a16d4c21-ffb8-4d5d-bd9d-4389d9f7c04b to collection local.startup_log
2018-08-29T09:36:14.663+0800 I COMMAND [conn1] Assigning UUID 64ab7fc8-7ae3-45e5-b661-91d1028377c6 to collection local.replset.minvalid
2018-08-29T09:36:14.663+0800 I COMMAND [conn1] Assigning UUID 9ca95e48-28e0-4736-b395-2565d7a6f5f9 to collection local.oplog.rs
2018-08-29T09:36:14.663+0800 I STORAGE [conn1] createCollection: admin.system.version with provided UUID: 151aff17-33fd-4a49-8248-01f3214400f1
2018-08-29T09:36:14.668+0800 I COMMAND [conn1] setting featureCompatibilityVersion to 3.6
2018-08-29T09:36:14.668+0800 I NETWORK [conn1] Skip closing connection for connection # 3
2018-08-29T09:36:14.668+0800 I NETWORK [conn1] Skip closing connection for connection # 2
2018-08-29T09:36:14.668+0800 I NETWORK [conn1] Skip closing connection for connection # 1
2018-08-29T09:36:14.668+0800 I REPL [conn1] New replica set config in use: { _id: "rs0", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "10.19.85.149:2700", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "10.19.85.149:2800", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "10.19.85.149:2900", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5b85f88e53534d4bc855d8fa') } }
2018-08-29T09:36:14.668+0800 I REPL [conn1] This node is 10.19.85.149:2700 in the config
2018-08-29T09:36:14.668+0800 I REPL [conn1] transition to STARTUP2 from STARTUP
2018-08-29T09:36:14.668+0800 I REPL [conn1] Starting replication storage threads
2018-08-29T09:36:14.669+0800 I REPL [replexec-1] Member 10.19.85.149:2800 is now in state STARTUP
2018-08-29T09:36:14.669+0800 I REPL [replexec-0] Member 10.19.85.149:2900 is now in state STARTUP
2018-08-29T09:36:14.669+0800 I REPL [conn1] transition to RECOVERING from STARTUP2
2018-08-29T09:36:14.669+0800 I REPL [conn1] Starting replication fetcher thread
2018-08-29T09:36:14.669+0800 I REPL [conn1] Starting replication applier thread
2018-08-29T09:36:14.669+0800 I REPL [conn1] Starting replication reporter thread
2018-08-29T09:36:14.670+0800 I REPL [rsSync] transition to SECONDARY from RECOVERING
2018-08-29T09:36:16.651+0800 I NETWORK [listener] connection accepted from 10.19.85.149:52143 #4 (4 connections now open)
2018-08-29T09:36:16.651+0800 I NETWORK [listener] connection accepted from 10.19.85.149:52144 #5 (5 connections now open)
2018-08-29T09:36:16.652+0800 I NETWORK [conn4] end connection 10.19.85.149:52143 (4 connections now open)
2018-08-29T09:36:16.652+0800 I NETWORK [conn5] end connection 10.19.85.149:52144 (3 connections now open)
2018-08-29T09:36:16.670+0800 I REPL [replexec-0] Member 10.19.85.149:2800 is now in state STARTUP2
2018-08-29T09:36:16.670+0800 I REPL [replexec-1] Member 10.19.85.149:2900 is now in state STARTUP2
2018-08-29T09:36:16.678+0800 I NETWORK [listener] connection accepted from 10.19.85.149:52150 #6 (4 connections now open)
2018-08-29T09:36:16.678+0800 I NETWORK [conn6] received client metadata from 10.19.85.149:52150 conn6: { driver: { name: "NetworkInterfaceASIO-RS", version: "3.6.6" }, os: { type: "Linux", name: "CentOS Linux release 7.2.1511 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-327.el7.x86_64" } }
2018-08-29T09:36:16.679+0800 I NETWORK [listener] connection accepted from 10.19.85.149:52151 #7 (5 connections now open)
2018-08-29T09:36:16.679+0800 I NETWORK [conn7] received client metadata from 10.19.85.149:52151 conn7: { driver: { name: "NetworkInterfaceASIO-RS", version: "3.6.6" }, os: { type: "Linux", name: "CentOS Linux release 7.2.1511 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-327.el7.x86_64" } }
2018-08-29T09:36:16.680+0800 I NETWORK [listener] connection accepted from 10.19.85.149:52152 #8 (6 connections now open)
2018-08-29T09:36:16.680+0800 I NETWORK [conn8] received client metadata from 10.19.85.149:52152 conn8: { driver: { name: "NetworkInterfaceASIO-RS", version: "3.6.6" }, os: { type: "Linux", name: "CentOS Linux release 7.2.1511 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-327.el7.x86_64" } }
2018-08-29T09:36:16.680+0800 I NETWORK [listener] connection accepted from 10.19.85.149:52153 #9 (7 connections now open)
2018-08-29T09:36:16.680+0800 I NETWORK [conn9] received client metadata from 10.19.85.149:52153 conn9: { driver: { name: "NetworkInterfaceASIO-RS", version: "3.6.6" }, os: { type: "Linux", name: "CentOS Linux release 7.2.1511 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-327.el7.x86_64" } }
2018-08-29T09:36:17.170+0800 I REPL [replexec-1] Member 10.19.85.149:2900 is now in state SECONDARY
2018-08-29T09:36:17.171+0800 I REPL [replexec-1] Member 10.19.85.149:2800 is now in state SECONDARY
2018-08-29T09:36:21.681+0800 I NETWORK [conn8] end connection 10.19.85.149:52152 (6 connections now open)
2018-08-29T09:36:21.681+0800 I NETWORK [conn9] end connection 10.19.85.149:52153 (5 connections now open)
2018-08-29T09:36:25.459+0800 I REPL [replexec-1] Starting an election, since we've seen no PRIMARY in the past 10000ms
2018-08-29T09:36:25.459+0800 I REPL [replexec-1] conducting a dry run election to see if we could be elected. current term: 0
2018-08-29T09:36:25.459+0800 I REPL [replexec-0] VoteRequester(term 0 dry run) received a yes vote from 10.19.85.149:2800; response message: { term: 0, voteGranted: true, reason: "", ok: 1.0, operationTime: Timestamp(1535506574, 1), $clusterTime: { clusterTime: Timestamp(1535506574, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
2018-08-29T09:36:25.460+0800 I REPL [replexec-0] dry election run succeeded, running for election in term 1
2018-08-29T09:36:25.460+0800 I STORAGE [replexec-0] createCollection: local.replset.election with generated UUID: 618eb11c-c8a4-4a4d-a128-03afcb6a26d2
2018-08-29T09:36:25.471+0800 I REPL [replexec-1] VoteRequester(term 1) received a yes vote from 10.19.85.149:2800; response message: { term: 1, voteGranted: true, reason: "", ok: 1.0, operationTime: Timestamp(1535506574, 1), $clusterTime: { clusterTime: Timestamp(1535506574, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
2018-08-29T09:36:25.471+0800 I REPL [replexec-1] election succeeded, assuming primary role in term 1
2018-08-29T09:36:25.471+0800 I REPL [replexec-1] transition to PRIMARY from SECONDARY
2018-08-29T09:36:25.471+0800 I REPL [replexec-1] Entering primary catch-up mode.
2018-08-29T09:36:25.471+0800 I REPL [replexec-3] Caught up to the latest optime known via heartbeats after becoming primary.
2018-08-29T09:36:25.471+0800 I REPL [replexec-3] Exited primary catch-up mode.
2018-08-29T09:36:26.672+0800 I STORAGE [rsSync] createCollection: config.transactions with generated UUID: 33e54acb-67b1-4f92-bfff-ae39f26d2036
2018-08-29T09:36:26.676+0800 I REPL [rsSync] transition to primary complete; database writes are now permitted
2018-08-29T09:36:26.677+0800 I STORAGE [monitoring keys for HMAC] createCollection: admin.system.keys with generated UUID: 0ea97b8f-3fca-46c4-97e4-559f86b5d07c
2018-08-29T09:36:27.794+0800 I NETWORK [listener] connection accepted from 10.19.85.149:52161 #10 (6 connections now open)
2018-08-29T09:36:27.794+0800 I NETWORK [conn10] received client metadata from 10.19.85.149:52161 conn10: { driver: { name: "NetworkInterfaceASIO-RS", version: "3.6.6" }, os: { type: "Linux", name: "CentOS Linux release 7.2.1511 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-327.el7.x86_64" } }
2018-08-29T09:36:27.796+0800 I NETWORK [listener] connection accepted from 10.19.85.149:52162 #11 (7 connections now open)
2018-08-29T09:36:27.796+0800 I NETWORK [conn11] received client metadata from 10.19.85.149:52162 conn11: { driver: { name: "NetworkInterfaceASIO-RS", version: "3.6.6" }, os: { type: "Linux", name: "CentOS Linux release 7.2.1511 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-327.el7.x86_64" } }
2018-08-29T09:36:27.812+0800 I COMMAND [monitoring keys for HMAC] command admin.system.keys command: insert { insert: "system.keys", bypassDocumentValidation: false, ordered: true, documents: [ { _id: 6594950569662611457, purpose: "HMAC", key: BinData(0, 479D2835123E205C295BF69C8A647F5B048475E9), expiresAt: Timestamp(1543282586, 0) } ], writeConcern: { w: "majority", wtimeout: 15000 }, $db: "admin" } ninserted:1 keysInserted:1 numYields:0 reslen:214 locks:{ Global: { acquireCount: { r: 7, w: 5 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 4397 } }, Database: { acquireCount: { r: 1, w: 2, W: 3 } }, Collection: { acquireCount: { r: 1, w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_msg 1134ms
2018-08-29T09:36:43.055+0800 I STORAGE [thread12] createCollection: config.system.sessions with generated UUID: 72e36ed6-2d01-422e-bd89-d3458340df16
2018-08-29T09:36:43.063+0800 I INDEX [thread12] build index on: config.system.sessions properties: { v: 2, key: { lastUse: 1 }, name: "lsidTTLIndex", ns: "config.system.sessions", expireAfterSeconds: 1800 }
2018-08-29T09:36:43.064+0800 I INDEX [thread12] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2018-08-29T09:36:43.064+0800 I INDEX [thread12] build index done. scanned 0 total records. 0 secs
--Insert data on the primary, then query it on the two secondaries:
#mongo --port 2700
rs0:PRIMARY> use wuhan
switched to db wuhan
rs0:PRIMARY> db.wuhan.insert({"cityid":1,"cityname":"wuhan"})
WriteResult({ "nInserted" : 1 })
rs0:PRIMARY> db.wuhan.insert({"cityid":2,"cityname":"shenzhen"})
WriteResult({ "nInserted" : 1 })
rs0:PRIMARY> db.wuhan.find()
{ "_id" : ObjectId("5b861319cee4e02c7164906d"), "cityid" : 1, "cityname" : "wuhan" }
{ "_id" : ObjectId("5b86132bcee4e02c7164906e"), "cityid" : 2, "cityname" : "shenzhen" }
--Connect to the secondaries:
#mongo --port 2800
rs0:SECONDARY> use wuhan
switched to db wuhan
rs0:SECONDARY> db.wuhan.find()
{ "_id" : ObjectId("5b861319cee4e02c7164906d"), "cityid" : 1, "cityname" : "wuhan" }
{ "_id" : ObjectId("5b86132bcee4e02c7164906e"), "cityid" : 2, "cityname" : "shenzhen" }
#mongo --port 2900
rs0:SECONDARY> use wuhan
switched to db wuhan
rs0:SECONDARY> db.getMongo().setSlaveOk()
rs0:SECONDARY> db.wuhan.find()
{ "_id" : ObjectId("5b861319cee4e02c7164906d"), "cityid" : 1, "cityname" : "wuhan" }
{ "_id" : ObjectId("5b86132bcee4e02c7164906e"), "cityid" : 2, "cityname" : "shenzhen" }
The data is consistent across all nodes.
---Automatic primary/secondary failover:
1. Connect to the 2700 instance and shut it down:
# mongo --port 2700
rs0:PRIMARY> use admin
switched to db admin
rs0:PRIMARY> db.shutdownServer();
server should be down...
2018-08-29T11:42:16.833+0800 I NETWORK [thread1] trying reconnect to 127.0.0.1:2700 (127.0.0.1) failed
2018-08-29T11:42:17.489+0800 I NETWORK [thread1] Socket recv() Connection reset by peer 127.0.0.1:2700
2018-08-29T11:42:17.489+0800 I NETWORK [thread1] SocketException: remote: (NONE):0 error: SocketException socket exception [RECV_ERROR] server [127.0.0.1:2700]
2018-08-29T11:42:17.489+0800 I NETWORK [thread1] reconnect 127.0.0.1:2700 (127.0.0.1) failed failed
2018-08-29T11:42:17.491+0800 I NETWORK [thread1] trying reconnect to 127.0.0.1:2700 (127.0.0.1) failed
2018-08-29T11:42:17.491+0800 W NETWORK [thread1] Failed to connect to 127.0.0.1:2700, in(checking socket for error after poll), reason: Connection refused
2018-08-29T11:42:17.491+0800 I NETWORK [thread1] reconnect 127.0.0.1:2700 (127.0.0.1) failed failed
> exit
bye
2. Reconnecting to the 2700 instance now fails:
# mongo --port 2700
MongoDB shell version v3.6.6
connecting to: mongodb://127.0.0.1:2700/
2018-08-29T11:43:30.517+0800 W NETWORK [thread1] Failed to connect to 127.0.0.1:2700, in(checking socket for error after poll), reason: Connection refused
2018-08-29T11:43:30.517+0800 E QUERY [thread1] Error: couldn't connect to server 127.0.0.1:2700, connection attempt failed :
connect@src/mongo/shell/mongo.js:251:13
@(connect):1:6
exception: connect failed
3. Connect to the 2800 instance:
#mongo --port 2800
rs0:PRIMARY> rs.status();
{
"set" : "rs0",
"date" : ISODate("2018-08-29T03:44:40.996Z"),
"myState" : 1,
"term" : NumberLong(2),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1535514277, 1),
"t" : NumberLong(2)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1535514277, 1),
"t" : NumberLong(2)
},
"appliedOpTime" : {
"ts" : Timestamp(1535514277, 1),
"t" : NumberLong(2)
},
"durableOpTime" : {
"ts" : Timestamp(1535514277, 1),
"t" : NumberLong(2)
}
},
"members" : [
{
"_id" : 0,
"name" : "10.19.85.149:2700",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"optime" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"optimeDurable" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2018-08-29T03:44:40.845Z"),
"lastHeartbeatRecv" : ISODate("2018-08-29T03:42:15.432Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "Connection refused",
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"configVersion" : -1
},
{
"_id" : 1,
"name" : "10.19.85.149:2800",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 7971,
"optime" : {
"ts" : Timestamp(1535514277, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2018-08-29T03:44:37Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1535514146, 1),
"electionDate" : ISODate("2018-08-29T03:42:26Z"),
"configVersion" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 2,
"name" : "10.19.85.149:2900",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 7704,
"optime" : {
"ts" : Timestamp(1535514277, 1),
"t" : NumberLong(2)
},
"optimeDurable" : {
"ts" : Timestamp(1535514277, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2018-08-29T03:44:37Z"),
"optimeDurableDate" : ISODate("2018-08-29T03:44:37Z"),
"lastHeartbeat" : ISODate("2018-08-29T03:44:40.806Z"),
"lastHeartbeatRecv" : ISODate("2018-08-29T03:44:40.925Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "10.19.85.149:2800",
"syncSourceHost" : "10.19.85.149:2800",
"syncSourceId" : 1,
"infoMessage" : "",
"configVersion" : 1
}
],
"ok" : 1,
"operationTime" : Timestamp(1535514277, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1535514277, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
rs0:PRIMARY>
The 2800 instance is now the primary.
--The 2800 instance's log records the election:
# grep -i primary /data/mongodb/mongo2800.log
2018-08-29T09:36:25.801+0800 I REPL [replexec-1] Member 10.19.85.149:2700 is now in state PRIMARY
2018-08-29T11:42:16.883+0800 I REPL [replication-1] Choosing new sync source because our current sync source, 10.19.85.149:2700, has an OpTime ({ ts: Timestamp(1535514126, 1), t: 1 }) which is not ahead of ours ({ ts: Timestamp(1535514126, 1), t: 1 }), it does not have a sync source, and it's not the primary (sync source does not know the primary)
2018-08-29T11:42:16.883+0800 W REPL [rsBackgroundSync] Fetcher stopped querying remote oplog with error: InvalidSyncSource: sync source 10.19.85.149:2700 (config version: 1; last applied optime: { ts: Timestamp(1535514126, 1), t: 1 }; sync source index: -1; primary index: -1) is no longer valid
2018-08-29T11:42:26.769+0800 I REPL [replexec-49] Starting an election, since we've seen no PRIMARY in the past 10000ms
2018-08-29T11:42:26.772+0800 I REPL [replexec-56] election succeeded, assuming primary role in term 2
2018-08-29T11:42:26.772+0800 I REPL [replexec-56] transition to PRIMARY from SECONDARY
2018-08-29T11:42:26.772+0800 I REPL [replexec-56] Entering primary catch-up mode.
2018-08-29T11:42:26.773+0800 I REPL [replexec-54] Caught up to the latest optime known via heartbeats after becoming primary.
2018-08-29T11:42:26.773+0800 I REPL [replexec-54] Exited primary catch-up mode.
2018-08-29T11:42:27.885+0800 I REPL [rsSync] transition to primary complete; database writes are now permitted
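Transitions like these can be pulled out of any member's log with one grep over the REPL lines. A sketch (the sample lines are copied from the log output above so the snippet is self-contained; in practice run the grep against the real file, e.g. /data/mongodb/mongo2800.log):

```shell
# Sketch: extract election and state-transition events from a mongod log.
# LOG is a temp file holding sample lines for illustration; point the grep
# at the real log file in a live deployment.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2018-08-29T11:42:26.769+0800 I REPL [replexec-49] Starting an election, since we've seen no PRIMARY in the past 10000ms
2018-08-29T11:42:26.772+0800 I REPL [replexec-56] election succeeded, assuming primary role in term 2
2018-08-29T11:42:26.772+0800 I REPL [replexec-56] transition to PRIMARY from SECONDARY
2018-08-29T11:42:27.885+0800 I REPL [rsSync] transition to primary complete; database writes are now permitted
EOF
grep -E 'Starting an election|election succeeded|transition to' "$LOG"
```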
--Restart the 2700 instance:
# mongod -f /data/mongodb/2700.conf
--Connect and check:
#mongo --port 2700
rs0:SECONDARY> rs.status()
{
"set" : "rs0",
"date" : ISODate("2018-08-29T03:50:39.888Z"),
"myState" : 2,
"term" : NumberLong(2),
"syncingTo" : "10.19.85.149:2900",
"syncSourceHost" : "10.19.85.149:2900",
"syncSourceId" : 2,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1535514637, 1),
"t" : NumberLong(2)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1535514637, 1),
"t" : NumberLong(2)
},
"appliedOpTime" : {
"ts" : Timestamp(1535514637, 1),
"t" : NumberLong(2)
},
"durableOpTime" : {
"ts" : Timestamp(1535514637, 1),
"t" : NumberLong(2)
}
},
"members" : [
{
"_id" : 0,
"name" : "10.19.85.149:2700",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 66,
"optime" : {
"ts" : Timestamp(1535514637, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2018-08-29T03:50:37Z"),
"syncingTo" : "10.19.85.149:2900",
"syncSourceHost" : "10.19.85.149:2900",
"syncSourceId" : 2,
"infoMessage" : "",
"configVersion" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "10.19.85.149:2800",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 65,
"optime" : {
"ts" : Timestamp(1535514637, 1),
"t" : NumberLong(2)
},
"optimeDurable" : {
"ts" : Timestamp(1535514637, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2018-08-29T03:50:37Z"),
"optimeDurableDate" : ISODate("2018-08-29T03:50:37Z"),
"lastHeartbeat" : ISODate("2018-08-29T03:50:39.489Z"),
"lastHeartbeatRecv" : ISODate("2018-08-29T03:50:39.004Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1535514146, 1),
"electionDate" : ISODate("2018-08-29T03:42:26Z"),
"configVersion" : 1
},
{
"_id" : 2,
"name" : "10.19.85.149:2900",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 65,
"optime" : {
"ts" : Timestamp(1535514637, 1),
"t" : NumberLong(2)
},
"optimeDurable" : {
"ts" : Timestamp(1535514637, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2018-08-29T03:50:37Z"),
"optimeDurableDate" : ISODate("2018-08-29T03:50:37Z"),
"lastHeartbeat" : ISODate("2018-08-29T03:50:39.489Z"),
"lastHeartbeatRecv" : ISODate("2018-08-29T03:50:39.137Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "10.19.85.149:2800",
"syncSourceHost" : "10.19.85.149:2800",
"syncSourceId" : 1,
"infoMessage" : "",
"configVersion" : 1
}
],
"ok" : 1,
"operationTime" : Timestamp(1535514637, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1535514637, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
rs0:SECONDARY>
The 2700 instance has stepped down from its former primary role and rejoined the set as a secondary.
Check the replica set status with rs.status() as shown above.