Forum Discussion

Piotr_Lewandows
Altostratus
Jul 28, 2017

VIPRION vCMP and vGuest cluster member IP

Hi,

 

I am not very experienced with VIPRION and vCMP. I don't have access to such a setup right now, but I would like to figure things out in advance.

 

In Overview of vCMP configuration considerations there is a section called Management IP addresses:

 

In addition to the cluster IP address, each blade should have its own unique cluster member IP address; this is recommended for both the host and guest systems. For example, if you have two guests that span four blades, you should have a cluster IP address for the host and also a cluster member IP address for each individual blade on the host. Then, within the guest, you should have a cluster IP address and a unique cluster member IP address for each blade in the guest. This configuration improves failover capabilities and communication within the chassis/blades.
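
To make the arithmetic concrete, here is a minimal Python sketch of such an addressing plan for one host and one four-slot guest. All names and addresses below are hypothetical, chosen only to illustrate the "cluster IP plus one member IP per blade" layout the documentation describes.

```python
# Hypothetical management addressing plan for a VIPRION host and one
# four-slot vCMP guest, following the "cluster IP plus one member IP
# per blade" recommendation quoted above.

HOST_MGMT = {
    "cluster_ip": "192.0.2.10",                  # floating host cluster IP
    "member_ips": ["192.0.2.11", "192.0.2.12",
                   "192.0.2.13", "192.0.2.14"],  # one per blade, slots 1-4
}

GUEST1_MGMT = {
    "cluster_ip": "192.0.2.20",                  # floating guest cluster IP
    "member_ips": ["192.0.2.21", "192.0.2.22",
                   "192.0.2.23", "192.0.2.24"],  # one per guest slot
}

# One cluster IP plus four member IPs = five addresses per guest, as
# described above for a guest spanning four blades.
for name, plan in (("host", HOST_MGMT), ("guest1", GUEST1_MGMT)):
    print(f"{name}: {1 + len(plan['member_ips'])} management addresses")
```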

 

Every screenshot of vGuest configuration I have seen shows only an entry like this:

 

[screenshot omitted: the guest configuration page, with a single management IP field]

 

Only a single IP can be configured there, so how can those vGuest cluster member IPs be configured?

 

Piotr

 

7 Replies

  • nathe
    Cirrocumulus

    Piotr, apologies, I've got to do this from distant memory. The screen you show, is this not the initial deployment via the host? I think when you access the guest once it's deployed, you will see the other 4 cluster IP address options. I may be misremembering, but I'm fairly sure that's the case.
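
    Once the guest is up, one way to confirm what it exposes is to query it over iControl REST, using the usual tmsh-to-REST mapping of "list sys cluster" to /mgmt/tm/sys/cluster. This is a sketch only: the exact path and field names should be verified on your TMOS version, and the address and credentials are placeholders.

    ```python
    # Sketch: list the guest's cluster object(s) and their floating
    # addresses over iControl REST, once the guest is deployed.
    import requests

    GUEST_MGMT = "https://192.0.2.20"   # hypothetical guest cluster IP
    AUTH = ("admin", "admin")           # replace with real credentials

    resp = requests.get(f"{GUEST_MGMT}/mgmt/tm/sys/cluster",
                        auth=AUTH, verify=False)  # self-signed mgmt cert
    resp.raise_for_status()
    for cluster in resp.json().get("items", []):
        print(cluster.get("name"), cluster.get("address"))
    ```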

     

  • Hi,

     

    Yes, it is. As I said, I have no access to a VIPRION right now and have never configured vCMP on one, so I had to rely on what I found on the Internet and in my own resources.

     

    The sources I checked confused me, so I tried to find out whether things really work the way I think they are described.

     

    Anyway, the main question here is whether setting 5 IPs (for a vGuest spanning 4 blades) is indeed necessary, and from your reply it seems so.

     

    I wonder what the best practice is here: always assign 5 IPs per vGuest, or add them only when necessary? Also, if I am not wrong, those IPs can/should be from a different network than the vHost mgmt IPs, but should all be in the same network as each other?

     

    Piotr

     

  • nathe
    Cirrocumulus

    Always assign all the IP addresses. We had a situation last year where they weren't assigned, and then when we needed them the subnet was fully allocated. It required a re-IPing exercise, which was a real pain. In my experience they should all be on the same mgmt VLAN; that way they can "float" across blades.
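
    If you want to script that up-front assignment, a rough sketch along these lines might work. It assumes the cluster members are exposed as a REST sub-collection under /mgmt/tm/sys/cluster/default (the usual tmsh-to-REST mapping of "modify sys cluster default members ..."); check the exact URI and payload against "tmsh list sys cluster default" on your version before trusting it. Addresses and credentials are made up.

    ```python
    # Sketch: reserve every cluster member IP up front, one per slot,
    # all on the same management subnet, so nothing is left to claim
    # later when the subnet fills up.
    import requests

    GUEST_MGMT = "https://192.0.2.20"                # hypothetical guest cluster IP
    AUTH = ("admin", "admin")
    MEMBER_IPS = {1: "192.0.2.21", 2: "192.0.2.22",
                  3: "192.0.2.23", 4: "192.0.2.24"}  # slot -> member IP

    for slot, ip in MEMBER_IPS.items():
        r = requests.patch(
            f"{GUEST_MGMT}/mgmt/tm/sys/cluster/default/members/{slot}",
            json={"address": ip}, auth=AUTH, verify=False)
        print(f"slot {slot}: HTTP {r.status_code}")
    ```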

     

  • Thanks a lot. I hope that once I get my hands on a VIPRION, everything will be easier to understand :-)

     

    Piotr

     

  • nathe
    Cirrocumulus

    Good luck. Report back with findings if I'm mistaken. Ta

     

  • Sure I will. While I have the chance: I'm having quite a problem figuring out why those member IPs are necessary. I assume the only relevant scenario is when HA is configured, or is that not really the case?

     

    Even for HA, I wonder how exactly those member IPs are used.

     

    I know that a multi-slot vGuest is actually a bunch of separate VMs working together and advertising themselves to the outside world as a single entity.

     

    My guess is that TMM-related traffic is handled via hidden ports (0.x) using both the front-panel interfaces and the backplane connections.

     

    But what are the member IPs used for? In the case of HA, we can set either Unicast (TMM) + Multicast (I assume Multicast uses the vGuest cluster IP?) or Unicast (TMM) + a Unicast member IP mesh.
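
    For reference, here is a rough sketch of what those two styles might look like when pushed onto the device object over iControl REST. The field names follow the usual tmsh-to-REST mapping for "cm device" (unicast-address, multicast-interface, multicast-ip, multicast-port), but the device name, the addresses, and the exact path (it may need the ~Common~ partition prefix) are assumptions to verify.

    ```python
    # Sketch: the two failover-address styles discussed above, applied
    # to this guest's own cm device object.
    import requests

    GUEST_MGMT = "https://192.0.2.20"     # hypothetical guest cluster IP
    AUTH = ("admin", "admin")
    DEVICE = "guest1.example.com"         # hypothetical cm device name

    # Style 1: unicast mesh, one entry per cluster member IP plus the
    # TMM failover self IP, so the peer can tell which slot went quiet.
    unicast_mesh = [{"ip": ip, "port": 1026}
                    for ip in ("192.0.2.21", "192.0.2.22",
                               "192.0.2.23", "192.0.2.24",  # mgmt member IPs
                               "10.0.0.1")]                 # TMM self IP

    requests.patch(f"{GUEST_MGMT}/mgmt/tm/cm/device/{DEVICE}",
                   json={"unicastAddress": unicast_mesh},
                   auth=AUTH, verify=False)

    # Style 2: unicast (TMM) plus multicast on the management
    # interface, using the default group the post mentions.
    requests.patch(f"{GUEST_MGMT}/mgmt/tm/cm/device/{DEVICE}",
                   json={"multicastInterface": "eth0",
                         "multicastIp": "224.0.0.245",
                         "multicastPort": 62960},
                   auth=AUTH, verify=False)
    ```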

     

    Side note: in v13.0.0 HF2 (I don't remember if this was possible in older versions as well) it's possible to define the interface for Multicast. When does it make sense to choose an interface other than eth0 (mgmt)?

     

    Now, when we use Multicast or a Unicast mesh, what exactly happens?

     

    Failover traffic (at least based on the traces I took) works like this:

     

    • Active sends traffic to Standby
    • Standby sends traffic to Active

    Standby decides that Active is down when it does not receive any packets from Active for 3 s (the default).

     

    The scenarios seem to look like this:

     

    Unicast + Multicast:

     

    • Active sends traffic to the TMM port of the Standby
    • Active sends traffic (from the cluster IP?) to the configured Multicast IP (default 224.0.0.245)
    • Standby listens for packets on the TMM port
    • Standby listens for packets on... what? The cluster IP, or all member IPs (more probably)?

    When will Standby decide that Active is down? When there is neither TMM traffic nor Multicast traffic, which is quite logical. But what is the difference between Standby/Active having only the vGuest cluster IP configured versus having the cluster member IPs as well?

     

    Is having all the member IPs defined a way to detect how many VMs on Active are actually up (i.e., how many blades Active is running on)?

     

    If so, how can that depend on all the member IPs being configured on Active? I assume that Multicast traffic is not sent from all member IPs at the same time, or is it?

     

    I can see the logic when a Unicast mesh is configured: then the source and destination are clear, and it's easy to find out which VM on Active is not sending traffic, i.e. which one is down.

     

    If detecting vGuest VM failures is indeed part of HA, then how is it used for failover decisions?

     

    Will failover be triggered if Standby detects that Active is running on fewer slots?

     

    Will Active consider Standby down if it's running on fewer slots?

     

    Will mirroring break when the number of slots on Active and Standby differs? (According to the info I found, mirroring does not work correctly if the HA devices are not homogeneous; for a vGuest that means the same vCPU and slot allocation.)

     

    Any other outcomes? Or is the above total garbage?

     

    Sorry for so many questions, but as I said, right now I have no access to a VIPRION, so I can't check it myself.

     

    Piotr

     

  • Sure, I forgot about the HA Group and the Cluster object that can be defined, but the question is still valid: does DSC somehow react to a slot-down situation without an HA Group?
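
    For what it's worth, an HA Group is the mechanism I'd expect to score the device on cluster-member availability. Here is a rough sketch of creating one over iControl REST, assuming the usual tmsh-to-REST mapping of "sys ha-group" and its "clusters" weighting; the names, weights, and exact payload shape are assumptions to verify.

    ```python
    # Sketch: an HA group whose score scales with how many cluster
    # members (slots) are up, so losing a blade can demote the device.
    import requests

    GUEST_MGMT = "https://192.0.2.20"   # hypothetical guest cluster IP
    AUTH = ("admin", "admin")

    requests.post(f"{GUEST_MGMT}/mgmt/tm/sys/ha-group",
                  json={"name": "ha_group_guest1",
                        "activeBonus": 10,             # stickiness for the active unit
                        "clusters": [{"name": "default",
                                      "weight": 30}]}, # score tracks up slots
                  auth=AUTH, verify=False)
    ```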

     

    Another, somewhat related question is about a statement from here:

     

    Each guest and its equivalent guest on the other chassis are homogeneous (same slot numbers and same number of cores) and form a separate Sync-Failover device group. Note that homogeneous guests in a device group are only required when connection mirroring is enabled.

     

    Everywhere else it is stressed that guests should be homogeneous, except in the passage above. So is it supported to create a cluster with guests running on different slots and with different numbers of cores, or is it not (when mirroring is enabled)?

     

    Another question: is a homogeneous config also required when SSL mirroring is enabled?

     

    Piotr