Article ID: 118659, created on Nov 15, 2013, last review on Jun 7, 2014

  • Applies to:
  • Operations Automation

Symptoms

All backups for a VE have the Failed status in the Control Panel. The log file /var/log/IM/PACI-im.log contains lines like:

2013-11-05 03:00:05,887 DEBUG BackupTask [VF executor thread #7] - Starting backup of VE uuid=[_477c28d3.141ca037dd9._7f7a], schedule: [daily]
2013-11-05 03:00:05,993 DEBUG Pipeline [RequestProcessor-5] - Early callback invocation on pipeline #263 [(Backup VE [Hosting]), created at node [im1], step 1 ([BACKUP]), mode: EXEC, state:CALLBACK]
2013-11-05 03:00:05,996 DEBUG Pipeline [VF executor thread #7] - pipeline #263 [(Backup VE [Hosting]), created at node [im1], step 1 ([BACKUP]), mode: EXEC, state:CALLBACK] - invoked callback for reqId 1310
2013-11-05 03:00:05,997 DEBUG Pipeline [VF executor thread #7] - Executing pipeline #264 [(Backup VE [vsprojects]), created at node [im1], step 1 ([BACKUP]), mode: EXEC, state:TASK_SUBMITTED]
2013-11-05 03:00:05,998 DEBUG Pipeline [Shared executor thread #5 @1] - Running pipeline #263 [(Backup VE [Hosting]), created at node [im1], step 1 ([BACKUP]), mode: EXEC, state:CALLBACK]
2013-11-05 03:00:05,998 INFO  GenericVm2VfCallback [Shared executor thread #5 @1] - GenericVm2VfCallback.done_with_message(-2147483063, null)
2013-11-05 03:00:05,998 WARN  GenericVm2VfTask [Shared executor thread #5 @1] - VM2VF operation [BACKUP] (reqId=1310) finished with rc=-2147483063 (0x80000249)
2013-11-05 03:00:05,999 WARN  GenericVm2VfTask [Shared executor thread #5 @1] - VM2VF operation [BACKUP] (reqId=1310) finished with rc=-2147483063 (0x80000249)

Or:

2014-05-16 14:37:55,501 () WARN  LocalVm2Vf [Callback unvocationThread] - CORBA exception OBJECT_NOT_EXIST:Server-side Exception:  caught at 'done_with_message' method invocation
2014-05-16 14:37:55,501 () WARN  LocalVm2Vf [Callback unvocationThread] - Callback 'done_with_message' will be re-invoked in 1 seconds
2014-05-16 14:37:55,793 (47e9e6af-949c-419c-8e6c-bd15bca3c85e) ERROR NativeVm2VfCode [RequestProcessor-1380] - [30834:9648978] ERR PrlSrv_LoginEx:(192.168.160.24, 0x114506): failure PRL_ERR_HANDSHAKE_FAILED @[common/common_lib.c][343][login_fill_session_uuid][8847])
2014-05-16 14:37:55,793 (47e9e6af-949c-419c-8e6c-bd15bca3c85e) ERROR NativeVm2VfCode [RequestProcessor-1380] - [30834:9648978] ERR login_fill_session_uuid(192.168.160.24): failure PRL_ERR_HANDSHAKE_FAILED @[common/common_lib.c][507][establish_connection_ex][8847])
2014-05-16 14:37:55,793 (47e9e6af-949c-419c-8e6c-bd15bca3c85e) WARN  NativeVm2VfCode [RequestProcessor-1380] - [30834:9648978] WRN PrlSrv_Logoff: failure PRL_ERR_NOT_CONNECTED_TO_DISPATCHER @[common/common_lib.c][259][__drop_connection][8847])
2014-05-16 14:37:55,793 (47e9e6af-949c-419c-8e6c-bd15bca3c85e) INFO  NativeVm2VfCode [RequestProcessor-1380] - [30834:9648978] INF dropped connection: "192.168.160.24", session uuid: "", session handle 0x114506 @[common/common_lib.c][262][__drop_connection][8847])
2014-05-16 14:37:55,793 (47e9e6af-949c-419c-8e6c-bd15bca3c85e) ERROR NativeVm2VfCode [RequestProcessor-1380] - [30834:9648978] ERR establish_connection(192.168.160.24): failure PRL_ERR_HANDSHAKE_FAILED @[common/generic_sdk_cb.c][622][__init_generic_params][8847])
2014-05-16 14:37:55,794 (47e9e6af-949c-419c-8e6c-bd15bca3c85e) INFO  NativeVm2VfCode [RequestProcessor-1380] - [30834:9648978] INF put cached connection: session uuid: "{5cce2eac-25df-4fa4-a6b5-7a4213c214e1}", session handle 0x10ea21, ref count 1 @[common/common_lib.c][567][drop_connection][8847])
2014-05-16 14:37:55,794 (47e9e6af-949c-419c-8e6c-bd15bca3c85e) DEBUG NativeVm2VfCode [RequestProcessor-1380] - [30834:9648978] Callback invocation: done_with_message(-2147482606, 2958114538496458752, null)
2014-05-16 14:37:55,797 () DEBUG ReqIdSetter [RequestProcessor-1380] - Sending CORBA reply to [backup_ve_cb], corbaReqId = 9648978; vm2vfReqId = 30834

Cause

Error 0x80000249 corresponds to PRL_ERR_CANT_CONNECT_TO_DISPATCHER. It indicates either a network connectivity problem between the IM node and the backup/source node, or that the dispatcher service is not running on the backup/source node.
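The log prints return codes in signed decimal form (e.g. rc=-2147483063), while error tables use hex. A minimal sketch for converting between the two, using the rc value from the Symptoms section above:

```shell
# Convert the signed 32-bit return code from the log to its hex form.
# -2147483063 is the rc reported in the log excerpt above.
rc=-2147483063
printf '0x%08X\n' $(( rc & 0xFFFFFFFF ))
# prints 0x80000249
```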

Resolution

  1. From the IM node, test connectivity to the backup/source node on the dispatcher port (64000):

    ~# telnet <NODE_IP> 64000
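For scripted checks, a non-interactive alternative to telnet is a sketch like the following, using bash's /dev/tcp redirection; the NODE_IP value is only an example taken from the log excerpt above:

```shell
# Non-interactive check of the dispatcher port (64000) on the node.
NODE_IP=192.168.160.24   # example address from the log above; replace with your node's IP
if timeout 5 bash -c "exec 3<>/dev/tcp/$NODE_IP/64000" 2>/dev/null; then
    echo "port 64000 reachable"
else
    echo "port 64000 NOT reachable"
fi
```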

  2. Check the state of prl_disp_service on the node and start it if necessary:

    ~# /etc/init.d/prl_disp_service status
    ~# /etc/init.d/prl_disp_service start
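The check-and-start step above can be sketched as a single conditional, relying on the init script returning a non-zero exit code when the service is stopped (a common init-script convention, assumed here):

```shell
# Start prl_disp_service only if its status check reports it is not running.
# Assumes the init script follows the usual convention of a non-zero
# exit code from "status" when the service is stopped.
if ! /etc/init.d/prl_disp_service status >/dev/null 2>&1; then
    /etc/init.d/prl_disp_service start
fi
```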

