Setup Mongo Replica for CloudCenter 4.10.0.3/4

Overview

This section provides details on setting up a Mongo replica in your legacy CloudCenter system.

Prerequisites

First run this process in a staging or lab environment and verify its impact before applying it to a production environment.

The procedure in this section uses the following gateway.conf edit script to change the CloudCenter Orchestrator (CCO) database credentials.

gateway.conf file edit script
#!/usr/bin/env bash
read    -p  "Enable Mongo Auth [true/false]: " MONGO_AUTH_ENABLE
read    -p  "Username: " MONGO_REPLICA_USERNAME
read -s -p  "Password: " MONGO_REPLICA_PASSWORD
 
echo ""
 
if [ "$MONGO_AUTH_ENABLE" != "true" ]; then
    MONGO_AUTH_ENABLE=false
fi
 
echo  "" >> /etc/sysconfig/gateway.conf
sed -i '/^export MONGO_REPLICA_USERNAME=/{h;s/=.*/='"$MONGO_REPLICA_USERNAME"'/};${x;/^$/{s//export MONGO_REPLICA_USERNAME='"$MONGO_REPLICA_USERNAME"'/;H};x}' /etc/sysconfig/gateway.conf
sed -i '/^export MONGO_REPLICA_PASSWORD=/{h;s/=.*/='"$MONGO_REPLICA_PASSWORD"'/};${x;/^$/{s//export MONGO_REPLICA_PASSWORD='"$MONGO_REPLICA_PASSWORD"'/;H};x}' /etc/sysconfig/gateway.conf
sed -i '/^export MONGO_AUTH_ENABLE=/{h;s/=.*/='"$MONGO_AUTH_ENABLE"'/};${x;/^$/{s//export MONGO_AUTH_ENABLE='"$MONGO_AUTH_ENABLE"'/;H};x}' /etc/sysconfig/gateway.conf
 
echo "Successfully updated the Mongo auth info"
 
read -p "restart the CCO? [yes/no] " RESTART_CCO
 
if [ "${RESTART_CCO,,}" == "yes" ]; then
    systemctl restart cco
    echo "restarting CCO"
else
    echo "not restarting CCO"
fi
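The three sed invocations above implement an update-or-append idiom: if the export line already exists in gateway.conf it is rewritten in place, otherwise it is appended. A minimal sketch of the same behavior, using a simpler grep-then-sed check against a throwaway file (the upsert helper name is illustrative, not part of CloudCenter):

```shell
#!/usr/bin/env bash
# Sketch of the update-or-append behavior used by the script above,
# exercised against a temporary file instead of /etc/sysconfig/gateway.conf.
CONF=$(mktemp)
printf 'export OTHER_VAR=1\n' > "$CONF"

# upsert KEY VALUE FILE: rewrite "export KEY=..." if present, else append it.
upsert() {
    local key=$1 val=$2 file=$3
    if grep -q "^export ${key}=" "$file"; then
        sed -i "s|^export ${key}=.*|export ${key}=${val}|" "$file"
    else
        echo "export ${key}=${val}" >> "$file"
    fi
}

upsert MONGO_AUTH_ENABLE true  "$CONF"   # not present yet: appended
upsert MONGO_AUTH_ENABLE false "$CONF"   # present now: rewritten in place
cat "$CONF"
```

The hold-space sed used in the script above achieves the same result in a single pass; the grep form is simply easier to read.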

Mongo Replica Process

To set up a Mongo replica in your legacy CloudCenter 4.10.0.3/4 system, follow this procedure.

  1. Try a few deployments in your legacy CloudCenter Platform environment.

  2. When the deployments complete without any issues, your environment is ready for the Mongo replica setup.

  3. Perform these tasks in the CCO Mongo Primary instance:

    1. Log in to the CCO Mongo primary instance.

    2. Log in to Mongo and verify that the shell prefix displays replicaName:PRIMARY.

    3. Run the following command to back up the database.

      # The following command backs up the entire Mongo database into the dump/ directory under the current directory.
      
      >mongodump
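By default, mongodump writes into a dump/ directory under the current working directory. To keep successive backups separate, the run can be pointed at a timestamped directory via the standard --out flag. The sketch below only prints the command; DRY_RUN is an illustrative guard, not part of the product:

```shell
# Build a timestamped backup target for mongodump. DRY_RUN=1 only prints
# the command; on the real CCO Mongo primary, set DRY_RUN=0 to execute it.
DRY_RUN=1
BACKUP_DIR="mongo-backup-$(date +%Y%m%d-%H%M%S)"
CMD="mongodump --out $BACKUP_DIR"
if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $CMD"
else
    $CMD
fi
```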
  4. Perform these tasks in the CCO Mongo Secondary instances:

    1. Log in to the CCO Mongo Secondary instance.

    2. Log in to Mongo and verify that the shell prefix displays replicaName:SECONDARY.

  5. Change the HAProxy configuration in the load balancer VM to point only to the CCO Mongo Primary instance.

    1. SSH into the CCO load balancer VM.

    2. In the /etc/haproxy/haproxy.cfg file, comment out the lines that refer to the secondary CCOs.

    3. Run the following command to restart the HA proxy server after these changes.

      >systemctl restart haproxy
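The backend name, server names, and addresses below are hypothetical; your haproxy.cfg will differ. The edit in sub-step 2 amounts to leaving only the primary's server line active:

```
# /etc/haproxy/haproxy.cfg -- backend section (hypothetical names and IPs)
backend cco_backend
    server cco1 10.0.0.11:8443 check        # primary: left active
    # server cco2 10.0.0.12:8443 check      # secondary: commented out
    # server cco3 10.0.0.13:8443 check      # secondary: commented out
```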
  6. Try another deployment on the CCM to ensure that the deployments complete without any issues.

  7. Perform these tasks in the CCO Mongo Primary instance:

    1. Log in to the CCO Mongo primary instance.

    2. Log in to Mongo and verify that the shell prefix displays replicaName:PRIMARY.

    3. Create the cliqruser user for the cliqr database and the admin user for the admin database.

      ###### Replace <credentials> with the applicable password. ######
      >mongo
      
      ###### in mongo shell ######
      use cliqr
      ret = db.createUser(
        {
          user: "cliqruser",
          pwd: <credentials>,
          roles: [ { role: "readWrite", db: "cliqr" } ]
        }
      )
      use admin
      db.createUser(
        {
          user: "admin",
          pwd: <credentials>,
          roles: [ { role: 'root', db: 'admin' } ]
        }
      )
    4. Wait a few seconds for the users to propagate to the CCO Mongo secondary instances.

    5. Use openssl to generate a key for the replica set.

      >openssl rand -base64 756 > /var/lib/mongo/mongo.key
    6. Restrict the permissions and set the ownership of the mongo.key file.

      >chmod 400 /var/lib/mongo/mongo.key
      >chown mongod:mongod /var/lib/mongo/mongo.key
    7. Modify the Mongo config file to enable the security settings in the primary Mongo server.

      # add authorization: "enabled" and keyFile: <generated key file in file-system>
       >sed -i 's,#security:,security:\n  authorization: "enabled"\n  keyFile: /var/lib/mongo/mongo.key,g' /etc/mongod.conf
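After this sed edit, the security section of /etc/mongod.conf should read:

```
security:
  authorization: "enabled"
  keyFile: /var/lib/mongo/mongo.key
```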
    8. Copy the key and the Mongo config file to both secondary Mongo servers.
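Sub-step 8 can be scripted. The host names below are hypothetical placeholders for your two secondary Mongo servers, and DRY_RUN is an illustrative guard that only prints the copy commands:

```shell
# Copy the replica-set key and the Mongo config to each secondary.
# SECONDARIES and DRY_RUN are illustrative; replace with your hosts.
DRY_RUN=1
SECONDARIES="mongo-sec-1 mongo-sec-2"
N=0
for host in $SECONDARIES; do
    for f in /var/lib/mongo/mongo.key /etc/mongod.conf; do
        N=$((N+1))
        if [ "$DRY_RUN" = "1" ]; then
            echo "would run: scp $f root@$host:$f"
        else
            scp "$f" "root@$host:$f"
        fi
    done
done
echo "$N file copies"
```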

  8. Perform these tasks in the CCO Mongo Secondary instances:

    1. Modify the ownership of the mongo.key file.

      >chown mongod:mongod /var/lib/mongo/mongo.key
    2. Restart the Mongo daemon in the CCO Mongo Secondary instances.

      >systemctl restart mongod
  9. Restart the Mongo daemon on the primary instance.

    >systemctl restart mongod

    At this point, the primary role may migrate to another VM: with all three replicas in the set, the current primary may step down to secondary, and one of the other replicas will take over as the primary instance.

  10. Find the new primary Mongo server (the Mongo shell prompt should display replicaName:PRIMARY).

    1. Log in to the CCO Mongo primary instance.

    2. When you execute the following command...

      >mongo
      use admin
      db.auth("admin", "<creds>")
      rs.status()

      ...you should receive a response similar to the following. The member whose stateStr is PRIMARY (gwmongo2 in this sample) is the current primary.

      {
            "set" : "setname",
            "date" : ISODate("2019-07-01T21:27:08.652Z"),
            "myState" : 2,
            "term" : NumberLong(2),
            "syncingTo" : "gwmongo2:27017",
            "syncSourceHost" : "gwmongo2:27017",
            "syncSourceId" : 2,
            "heartbeatIntervalMillis" : NumberLong(2000),
            "optimes" : {
                 "lastCommittedOpTime" : {
                       "ts" : Timestamp(1562016424, 1),
                       "t" : NumberLong(2)
                 },
                 "appliedOpTime" : {
                       "ts" : Timestamp(1562016424, 1),
                       "t" : NumberLong(2)
                 },
                 "durableOpTime" : {
                       "ts" : Timestamp(1562016424, 1),
                       "t" : NumberLong(2)
                 }
            },
            "members" : [
                 {
                       "_id" : 0,
                       "name" : "gwmongo1:27017",
                       "health" : 1,
                       "state" : 2,
                       "stateStr" : "SECONDARY",
                       "uptime" : 140931,
                       "optime" : {
                            "ts" : Timestamp(1562016424, 1),
                            "t" : NumberLong(2)
                       },
                       "optimeDate" : ISODate("2019-07-01T21:27:04Z"),
                       "syncingTo" : "gwmongo2:27017",
                       "syncSourceHost" : "gwmongo2:27017",
                       "syncSourceId" : 2,
                       "infoMessage" : "",
                       "configVersion" : 3,
                       "self" : true,
                       "lastHeartbeatMessage" : ""
                 },
                 {
                       "_id" : 2,
                       "name" : "gwmongo2:27017",
                       "health" : 1,
                       "state" : 1,
                       "stateStr" : "PRIMARY",
                       "uptime" : 140930,
                       "optime" : {
                            "ts" : Timestamp(1562016424, 1),
                            "t" : NumberLong(2)
                       },
                       "optimeDurable" : {
                            "ts" : Timestamp(1562016424, 1),
                            "t" : NumberLong(2)
                       },
                       "optimeDate" : ISODate("2019-07-01T21:27:04Z"),
                       "optimeDurableDate" : ISODate("2019-07-01T21:27:04Z"),
                       "lastHeartbeat" : ISODate("2019-07-01T21:27:07.445Z"),
                       "lastHeartbeatRecv" : ISODate("2019-07-01T21:27:06.661Z"),
                       "pingMs" : NumberLong(0),
                       "lastHeartbeatMessage" : "",
                       "syncingTo" : "",
                       "syncSourceHost" : "",
                       "syncSourceId" : -1,
                       "infoMessage" : "",
                       "electionTime" : Timestamp(1561875498, 1),
                       "electionDate" : ISODate("2019-06-30T06:18:18Z"),
                       "configVersion" : 3
                 },
                 {
                       "_id" : 3,
                       "name" : "gwmongo3:27017",
                       "health" : 1,
                       "state" : 2,
                       "stateStr" : "SECONDARY",
                       "uptime" : 140930,
                       "optime" : {
                            "ts" : Timestamp(1562016424, 1),
                            "t" : NumberLong(2)
                       },
                       "optimeDurable" : {
                            "ts" : Timestamp(1562016424, 1),
                            "t" : NumberLong(2)
                       },
                       "optimeDate" : ISODate("2019-07-01T21:27:04Z"),
                       "optimeDurableDate" : ISODate("2019-07-01T21:27:04Z"),
                       "lastHeartbeat" : ISODate("2019-07-01T21:27:07.445Z"),
                       "lastHeartbeatRecv" : ISODate("2019-07-01T21:27:08.146Z"),
                       "pingMs" : NumberLong(0),
                       "lastHeartbeatMessage" : "",
                       "syncingTo" : "gwmongo2:27017",
                       "syncSourceHost" : "gwmongo2:27017",
                       "syncSourceId" : 2,
                       "infoMessage" : "",
                       "configVersion" : 3
                 }
            ],
            "ok" : 1
      }
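With three members, the rs.status() output is long, so the current primary can also be picked out mechanically. The sketch below runs against a trimmed sample of the output shown above, saved to a file; this is a text-matching shortcut, not an official Mongo tool (db.isMaster() in the mongo shell also reports the primary):

```shell
# Extract the "name" of the member whose stateStr is PRIMARY from saved
# rs.status() output. The sample below mirrors the output shown above.
cat > /tmp/rs_status.txt <<'EOF'
{
    "members" : [
        {
            "_id" : 0,
            "name" : "gwmongo1:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY"
        },
        {
            "_id" : 2,
            "name" : "gwmongo2:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY"
        }
    ]
}
EOF
# In this layout, "name" appears three lines before "stateStr".
PRIMARY=$(grep -B 3 '"stateStr" : "PRIMARY"' /tmp/rs_status.txt \
          | grep '"name"' | sed 's/.*: "\(.*\)",/\1/')
echo "current primary: $PRIMARY"
```

This text match is fragile if the field order changes; treat it as a convenience, not a health check.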
  11. On each CCO instance, update the gateway.jar and cloudblade.jar files.

    Wait until one CCO has been fully started before updating another CCO.

  12. In the CCM Cloud Region page, re-register the CCO (CCO-proxy) for each of the three regions.

    Wait until one region is fully functional before registering another CCO.

  13. Try another deployment on the CCM to ensure that the deployments complete without any issues.

  14. Modify/Update existing deployments as required.

You have now set up a Mongo replica in your legacy CloudCenter 4.10.0.3/4 system.

Failure Rollback Process

If you encounter a failure during the Mongo replica process, follow this procedure to roll back to the previous state.

  1. Disable authorization by removing the security section from /etc/mongod.conf.

  2. At the shell prompt, restore the database from the earlier mongodump backup by running the following command.

    If your data was not impacted, skip this step.

    mongorestore <dump-file-directory>
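mongorestore replays the BSON dump produced earlier by mongodump. A dry-run sketch (DRY_RUN and the directory name are illustrative; mongodump writes to dump/ by default):

```shell
# Restore from the directory written by mongodump. DRY_RUN=1 only prints
# the command; set DRY_RUN=0 on the actual Mongo primary to execute it.
DRY_RUN=1
DUMP_DIR="dump"          # default mongodump output directory
RESTORE_CMD="mongorestore $DUMP_DIR"
if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $RESTORE_CMD"
else
    $RESTORE_CMD
fi
```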
  3. Verify that the Mongo cluster is working correctly by running the following command in the mongo shell.

    rs.status()
  4. Roll back the gateway.jar and cloudblade.jar binary changes.

  5. Restore the HAProxy changes in the load balancer VM.

You have now rolled back to the previous state.


© 2017-2019 Cisco Systems, Inc. All rights reserved