The recommended deployment of a VIPRION is to cable all blades identically, so that the failure of a single blade does not force traffic to traverse the chassis backplane to reach the network via another blade. A typical configuration is that 1/1.1, 2/1.1, 3/1.1, 4/1.1, etc. are all members of the same aggregated link (an LACP trunk).
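As a rough sketch in tmsh, that per-blade cabling might look like the following. The trunk name "uplink", the VLAN name "external", the tag number, and the two-blade interface list are all example values, not anything from your setup:

```
# Aggregate the matching interface from each blade into one LACP trunk
create net trunk uplink interfaces add { 1/1.1 2/1.1 } lacp enabled

# Attach the trunk (rather than individual blade ports) to the VLAN
create net vlan external interfaces add { uplink { tagged } } tag 100
```

Because the VLAN rides on the trunk rather than on any single blade's port, losing a blade just shrinks the aggregate instead of forcing traffic across the backplane.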
It is possible to configure the physical ports on the blades differently, simply by putting them into different VLANs, but this means you will have no redundancy within the chassis.
The first blade to boot will become the cluster primary (and will show '-P-' in its command-line prompt). All other blades will be secondary (and show '-S-'). The primary blade responds to the cluster management address, so I suspect that when you could not reach the cluster address, it was because blade 2 was primary at the time.
You should also have the management Ethernet port connected on all blades, so that they can be reached individually (though you can also ssh to one blade and then type "ssh slot2" to reach the CLI of blade 2, or slot3, slot4, etc.). Each blade responds to its own address, so in a chassis with 4 blades you can configure 5 management addresses: one for each blade, plus the cluster address that floats to whichever blade is primary.
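For a two-blade chassis, the management addressing might be sketched in tmsh roughly as below. The 192.0.2.x addresses are placeholders, and I'm assuming the default cluster name "default":

```
# Floating cluster address - answered by whichever blade is primary
modify sys cluster default address 192.0.2.10/24

# Per-blade management addresses, one per slot
modify sys cluster default members modify { 1 { address 192.0.2.11/24 } }
modify sys cluster default members modify { 2 { address 192.0.2.12/24 } }
```

With that in place you can always reach a specific blade directly, even while the floating address moves during a primary failover.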