Article ID: 121757, created on May 26, 2014, last review on May 26, 2014

Applies to:

  • Operations Automation 5.4

Symptoms

All NFS cluster deployment files are placed on the Management Node (MN) in the /root/deploy_nfs_cluster folder.

After successful cluster creation, the crm_mon diagnostic tool shows that the quorum host is in the OFFLINE state:

# crm_mon
Defaulting to one-shot mode
You need to have curses available at compile time to enable console mode

============

Last updated: <DATE>
Current DC: <HOSTNAME1> (1efdbbf8-3a9f-43b6-bc48-a95724661cc4)
3 Nodes configured.
4 Resources configured.

============

Node: <HOSTNAME1> (e5639963-7820-4af3-aa9c-bbde7aff4f63): OFFLINE
Node: <HOSTNAME2> (1efdbbf8-3a9f-43b6-bc48-a95724661cc4): online
Node: <HOSTNAME3> (1cb19be7-1852-4705-9c5e-7f759fd69680): online

Cause

Unsupported network configuration

Resolution

According to the network schema in the deployment guide (quoted from the corresponding page):

Each NFS server has two interfaces - eth0 and eth1 - enabled.

  • For the eth0, configure a static IP address (from BackNet) that will be used by the configuration file for the Heartbeat.
  • For the eth1, configure a static IP address (from StorageNet) that will be used by the DRBD.

The quorum server has one interface eth0 with configured static IP address from BackNet.

This explicitly shows that both the NFS servers and the quorum server must be connected to BackNet using the same interface name, eth0.
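For illustration only, below is a minimal sketch of the expected interface configuration on an NFS server node, assuming a RHEL/CentOS-style system with network-scripts. The IP addresses and netmasks are hypothetical examples, not values taken from this article; use the actual BackNet and StorageNet addresses of your deployment.

# /etc/sysconfig/network-scripts/ifcfg-eth0
# BackNet interface, used by Heartbeat (hypothetical address)
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.0.0.11
NETMASK=255.255.255.0

# /etc/sysconfig/network-scripts/ifcfg-eth1
# StorageNet interface, used by DRBD (hypothetical address)
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.10.11
NETMASK=255.255.255.0

On the quorum server, only eth0 with a static BackNet address should be configured; the deployment scripts expect this exact interface name on every node.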

The NIC names are hardcoded in the deployment scripts. A feature request, POA-72997, exists to allow configuring them; please contact your account manager to track its status.

