Article ID: 130182, created on Jan 18, 2017, last review on May 15, 2017

  Applies to:
  • Operations Automation 7.0

This article contains instructions on how to migrate an Operations Automation management node host running RHEL/CentOS 6.x to a host running RHEL/CentOS 7.x.

Important: If the Operations Automation system database is located on a remote host, use this instruction for migration.

Note: To minimize downtime, the migration procedure does not transfer old log files. It is advised to save the old log files from /var/log/pa/* so that they remain available for historical root cause investigation of any issues that arise.
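
For example, the logs can be archived before starting the migration (the target path below is only an illustration):

    # Archive the old OA logs for later root cause analysis (destination path is an example)
    tar -czf /root/pa-logs-$(date +%F).tar.gz /var/log/pa/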

Preparation

  1. Make sure that the preparation stage from the guide Migration of Operations Automation 7.0 OSS/BSS Management Nodes from RHEL/CentOS 6 to RHEL/CentOS 7 is finished.

  2. Make sure that OA 7.0.1 with the hotfixes from articles 130365 and 130373 is installed on the source host.
  3. Make sure that the destination host meets all the requirements.
  4. Make sure that the system on the destination node is up to date. Run command:

    yum update
    
  5. Assign temporary IP addresses to the destination node. Keep in mind that later you will have to reproduce the source node's IP addresses and interface names on the destination host exactly as they are. That is, if the BackNet IP address of the source node is bound to the eth0 interface, then the eth0 interface on the destination node must also be configured with the BackNet IP.
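
    To record the source node's current interface-to-IP mapping for later reference, you can run, for example:

    # List interface names together with their IP addresses (names and addresses will differ per system)
    ip -o addr show | awk '{print $2, $4}'
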
  6. Check that Odin Automation Central YUM Repository is accessible from the destination node. After executing the following command, you should get the HTTP 200 code:

    curl -LI http://download.automation.odin.com/oa/7.0/repo/RHEL/7/repodata/repomd.xml 
    
  7. Make sure that the hostname of the source node resolves to the BackNet IP address of the source node. Check the output of hostname -i and edit /etc/hosts if necessary.
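
    For example (the IP address and host names in the /etc/hosts entry below are placeholders):

    hostname -i    # should print the BackNet IP address of the source node
    # Example /etc/hosts entry mapping the host name to the BackNet IP:
    # 10.10.10.10   oa-mn.example.com oa-mn
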
  8. Make sure that there is enough free space to store the backed up data.
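
    For a rough estimate, compare the size of the OA installation directory with the free space on the filesystem that will hold the backup (the backup also includes the database dump, so this is only an approximation):

    du -sh /usr/local/pem    # approximate size of the data to be backed up
    df -h /                  # free space on the root filesystem
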
  9. Make sure that a passwordless SSH connection is configured from the source OA management node to the destination one.
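
    A minimal way to set this up, assuming root access on both nodes (<DST_MN_IP> is the BackNet IP of the destination node):

    ssh-keygen -t rsa              # skip if a key pair already exists
    ssh-copy-id root@<DST_MN_IP>   # copy the public key to the destination node
    ssh root@<DST_MN_IP> hostname  # must succeed without a password prompt
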
  10. Download the tarball with migration scripts to the source node.
  11. Extract the archive with the following command:

    tar -xvf mn-migration-kit-2.24-scripts.tar -C /  
    

    All the files will be placed in /usr/local/pem/bin.

  12. Run the pre-check script:

    /usr/local/pem/bin/backup.sh --dst-mn-ip <DST_MN_IP> --dst-db-ip <DST_MN_IP> -o <SRC_BACKUP_DIR> -r <DST_BACKUP_DIR> -t mn-migration-precheck 
    

    where:

    • --dst-mn-ip is the BackNet IP address of the destination MN host
    • --dst-db-ip is the same BackNet IP address of the destination MN host
    • -o is the output directory where the backup will be placed on the source MN host; the default is /OA_management_node_backup
    • -r is the output directory where the backup will be placed on the destination MN host; the default is /OA_management_node_backup
    • -t is the migration script target; at this step it must be mn-migration-precheck
  13. If the pre-check fails for some reason, follow the recommendations in the script output. If there are custom services that you also need to migrate, refer to Migrating Services to Nodes Running Up-to-Date OSes. Important: Before backing up the Operations Automation management node, make sure that no custom services (for example, apache, bind, proftpd, and so on) are installed on the host.

  14. Re-deploy the migration kit (step #10) if you installed any hotfix after the pre-check.

Backup

To back up all the necessary data from the Operations Automation management node, do the following:

  1. Log in to the management node as root.

  2. Back up the necessary data using the command:

    /usr/local/pem/bin/backup.sh --dst-mn-ip <DST_MN_IP> --dst-db-ip <DST_MN_IP> -o <SRC_BACKUP_DIR> -r <DST_BACKUP_DIR> -t mn-migration 
    

    where:

    • --dst-mn-ip is the BackNet IP address of the destination MN host
    • --dst-db-ip is the same BackNet IP address of the destination MN host
    • -o is the output directory where the backup will be placed on the source MN host; the default is /OA_management_node_backup
    • -r is the output directory where the backup will be placed on the destination MN host; the default is /OA_management_node_backup
    • -t is the migration script target; at this step it must be mn-migration

    After the operation is performed, the backup data is stored in the directory specified with -o (/OA_management_node_backup by default). The data contains all the information needed for the migration:

    • entire pem folder (including tarballs, APS, binaries, etc.)
    • full DB dump
    • SSH configs
    • redis configs
    • configs of OACI, pvps APS
  3. Copy the backup to any safe external storage.
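
    For example, using rsync to an external storage host (backup-host and the target path are placeholders):

    rsync -a /OA_management_node_backup/ root@backup-host:/srv/oa-mn-backup/
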
  4. Write down the host name, the external and internal IP addresses of the host.
  5. If the Windows Azure Pack component is installed on the source management node, use the "Backup" section of this instruction to back up the additional data.
  6. Shut down the original host.

Restore

To finish the migration of the Operations Automation management node, you need to restore the backed up data to a new host.

  1. Log in to the destination host as root.
  2. Assign the original host IP addresses to the new host.
  3. Configure the host name so that it matches the name of the original host. Then make sure that hostname resolution is the same as on the source node. Check hostname -i.
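
    For example, to set the host name and verify the resolution (oa-mn.example.com is a placeholder):

    hostnamectl set-hostname oa-mn.example.com
    hostname -i    # should print the same BackNet IP as on the source node
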
  4. If a virtualization technology is used, make sure that it has the correct settings (IP addresses, hostname, name servers) for the destination node.
  5. Copy the backed up data to the host.
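
    For example, if the backup was moved to external storage in the Backup section, it can be pulled back with rsync (backup-host and the path are placeholders):

    rsync -a root@backup-host:/srv/oa-mn-backup/ /OA_management_node_backup/
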
  6. Perform the following command from the directory with the backed up data:

    cd /OA_management_node_backup
    ./install.py --migrate --communication_ip=<BACKNET_IP> [--external_ip=<FRONTNET_IP>]
    

    where:

    • --communication_ip is the management node BackNet IP address
    • --external_ip is the management node FrontNet IP address (optional parameter; omit it if the MN does not have an external IP)
  7. If the Windows Azure Pack component is installed on the source management node, use the Restore section of this instruction to restore the additional data.

Once the data is restored, all the necessary services will be launched automatically, and you will be able to work with Operations Automation.
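
As a quick sanity check, you can verify that the OA-related services are running; the grep pattern below is only an illustration and may need adjusting:

    systemctl list-units --type=service --state=running | grep -iE 'pa|pem'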

If something goes wrong

The restoration process may stop, indicating some failure, and ask for confirmation to proceed, for example:

2017-01-14 15:56:01.415 [ERROR] ['/usr/local/pem/bin/ppm_ctl', '-f', '/usr/local/pem/etc/pleskd.props', '-b', '-q', 'reinstall', '1', '152'] exited with non-zero status 1, stderr: None, stdout: None
An error has occurred during performing action "Reinstalling pkg other-saml-server-aps in main phase". 
Please select next action:
(r)etry/abort/ignore:

Choosing abort stops the whole process, ignore skips the failure and continues, and retry retries the failed operation. If retry does not succeed, re-run the failed operation manually after the restore ends.

Rollback

To roll back the migration:

  1. Revert the host name and IP addresses on the target host.
  2. Shut down the target host.
  3. Start the original host.
