Forum Discussion

Tim__Cook_92180
Nimbostratus
Aug 12, 2015

OpenStack LTM integration

I have the following setup:

 

OpenStack Juno with VLAN networking

 

Compute and networking are on the same boxes, and the controller is separate.

 

I have the following problem:

 

Whatever compute node the BIG-IP LTM is hosted on cannot reach the BIG-IP management IP. If I migrate the instance to another compute node, I can then reach the management address from the previous compute node, but not from the compute node the BIG-IP LTM is now hosted on.

 

Originally I could provision pools on the BIG-IP LTM with Neutron, but I was then unable to delete them; the pool status in Horizon would never change from "pending delete". I could verify the pool was created on the BIG-IP, but no members or VIPs ever went through, and that is still my problem. So how do I allow a security policy for one specific interface of the three on the BIG-IP from all controllers, including the controller it is hosted on?

 

2 Replies

  • John_Gruber_432
    Historic F5 Account

    Hey Tim,

     

    Let me make sure I understand you correctly and get other readers up to speed.

     

    It sounds like you are using the reference ML2 based Open vSwitch core driver with at least the following in its config files:

     

    [ml2]
    type_drivers = vlan
    tenant_network_types = vlan
    mechanism_drivers = openvswitch
    
    [ml2_type_vlan]
    network_vlan_ranges = [physical_network_name]:[first_vlan_id]:[last_vlan_id]
    
    [securitygroup]
    enable_security_group = True
    enable_ipset = True

    On your compute agents you have:

     

    [OVS]
    tenant_network_type = vlan
    network_vlan_ranges = [physical_network_name]:[first_vlan_id]:[last_vlan_id]
    bridge_mappings = [physical_network_name]:[some_linux_bridge_name_which_accepts_8021q_tagged_frames]
    
    [securitygroup]
    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
    enable_security_group = True

    and that you can use Neutron to create at least one external network (router:external) for floating IPs, a router for your tenant, and various tenant networks and subnets. We also assume you can start multiple Nova guests on different tenant networks, create the router between them, and have connectivity working between those guest instances.
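
    For readers following along, here is a rough Juno-era CLI sketch of that baseline (the network names, CIDRs, and VLAN ID below are placeholders, not taken from your environment):

    # external (provider) network and subnet for floating IPs
    neutron net-create public --router:external True \
        --provider:network_type vlan --provider:physical_network physnet1 \
        --provider:segmentation_id 700
    neutron subnet-create public 203.0.113.0/24 --name public-subnet --disable-dhcp

    # tenant network, subnet, and router
    neutron net-create tenant-net
    neutron subnet-create tenant-net 10.10.10.0/24 --name tenant-subnet
    neutron router-create tenant-router
    neutron router-gateway-set tenant-router public
    neutron router-interface-add tenant-router tenant-subnet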

     

    So far that's just plain Neutron in your cloud, nothing f5-specific at all.

     

    After that, it seems you have:

     

    1) Launched a Nova guest using a VE image as its disk image, connected at least 2 network interfaces to it, and set the security group rules appropriately to allow TCP 22 and 443 to the first network interface (the TMOS mgmt interface). (See the CLI sketch after step 3.)

     

    2) Attached the network for the first interface to a Neutron router so it can be licensed appropriately (Neutron router SNATs to activate.f5.com).

     

    3) Logged into the TMOS image through the console and licensed the device, OR added a floating IP to the router, NATting an external network IP address to the mgmt interface of the TMOS device, and used the webui to license it.
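
    Here is that CLI sketch for steps 1-3 (the image, flavor, and security group names are placeholders, and the UUIDs in angle brackets are yours to fill in):

    # 1) security group allowing SSH and HTTPS to the TMOS mgmt interface
    neutron security-group-create bigip-mgmt
    neutron security-group-rule-create --protocol tcp --port-range-min 22 \
        --port-range-max 22 --direction ingress bigip-mgmt
    neutron security-group-rule-create --protocol tcp --port-range-min 443 \
        --port-range-max 443 --direction ingress bigip-mgmt

    # boot the VE with the mgmt network as the first NIC
    # (step 2, attaching the mgmt network to a router, was shown in the earlier sketch)
    nova boot --image BIGIP-VE --flavor m1.xlarge \
        --nic net-id=<mgmt-net-uuid> --nic net-id=<tenant-net-uuid> \
        --security-groups bigip-mgmt bigip-ve-1

    # 3) optional floating IP so the mgmt webui is reachable from outside
    neutron floatingip-create public
    neutron floatingip-associate <floatingip-uuid> <mgmt-port-uuid>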

     

    From there your message moves right on to provisioning pools. That's where we need more details...

     

    Are you trying to use our Neutron LBaaSv1 driver and agent to provision the pool, or are you simply provisioning your LTMs through their iControl APIs or management tools? If you are trying to use our LBaaSv1 driver and agent, which version did you download from devcentral.f5.com? You should be using at least 1.0.8-2. (Please note the 1.0.1 release included with BIG-IQ 4.5 is not recommended; don't use it. BIG-IQ does not support LBaaS yet.)

     

    A couple of other things to consider if you are trying to use our LBaaSv1 solution:

     

    The LBaaSv1 solution is straightforward to set up with TMOS hardware appliances or VE devices that sit outside of Neutron's management (meaning not subject to the generated OVS flows or iptables firewall rules), and the connectivity is fairly easy to troubleshoot as well. It gets more complicated when using TMOS VEs which are Nova guest instances with network interfaces subject to the OVS flow rules and the iptables security group firewall.

     

    Note: If you are using TMOS VEs which are Nova guest instances as multi-tenant LBaaSv1 endpoints, we strongly recommend you use GRE or VxLAN tunnels for your tenant networks. When using GRE or VxLAN tenant networks with the LBaaSv1 driver and agent, each TMOS device will need a non-floating SelfIP to act as a VTEP (virtual tunnel endpoint) which can route to the VTEP addresses of your compute nodes (called their tunnel_ip in their configuration). Once a TMOS VE has a VTEP SelfIP address, it can encapsulate many tenant networks (overlay networks) and route IP packets (underlay network) to the compute hosts. Simply opening up the security group rules to allow the appropriate tunnel traffic will suffice; no custom alteration of the compute nodes is necessary. To support GRE or VxLAN connectivity, the TMOS VE instances must have the SDN Services license enabled. It comes with 'Better' bundles and higher.
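
    As a sketch, rules along these lines would admit the tunnel traffic to the VE's VTEP port (the security group name and CIDR are placeholders, and the numeric protocol value for GRE assumes your Neutron release accepts protocol numbers):

    # VXLAN is UDP 4789 from the compute nodes' tunnel network
    neutron security-group-rule-create --protocol udp --port-range-min 4789 \
        --port-range-max 4789 --direction ingress \
        --remote-ip-prefix <tunnel-network-cidr> bigip-vtep
    # for GRE tenant networks, allow IP protocol 47 instead
    neutron security-group-rule-create --protocol 47 --direction ingress \
        --remote-ip-prefix <tunnel-network-cidr> bigip-vtep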

     

    If you choose to use VLANs for your tenant networks, your compute nodes will require custom setups, because OVS does not support Nova guests that generate 802.1q VLAN tags on their frames. OVS only supports guests with access ports (untagged interfaces). In Neutron, such access networks are not called VLAN networks but Flat networks.

     

    If you choose to use Flat networks, remember that KVM limits the number of virtual interfaces to 10, which means a TMOS VE instance can support 1 mgmt interface and 9 tenant networks.

     

    If you want to use VLANs for tenant networks and expect your TMOS VEs to function with our multi-tenant LBaaSv1, you will need to manually remove the TMOS VE virtual tap interfaces that should send 802.1q tagged frames from the OVS integration bridge and place them on the external bridge. This manual process must take place for each TMOS VE on each compute node and falls outside the Neutron integration; you use ovs-vsctl commands to move the appropriate vtap interfaces from one bridge to the other.
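
    As a rough illustration only (the tap interface name is hypothetical; find the real one with ovs-vsctl show or virsh domiflist, and your bridge names may differ from the usual br-int/br-ex defaults):

    # take the VE's data-plane tap out of the OVS integration bridge...
    ovs-vsctl del-port br-int tapXXXXXXXX-XX
    # ...and attach it to the external bridge that carries the 802.1q trunk
    ovs-vsctl add-port br-ex tapXXXXXXXX-XX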

     

    The lack of VLAN tagging for guest instances is a limitation of Neutron OVS, not of TMOS. There are several blueprint proposals from the OpenStack community to change this. In Kilo, a vlan-transparent network attribute was added to Neutron to allow guest instances to insert their own 802.1q VLAN tags; however, this functionality is not available for every core driver. (See: http://specs.openstack.org/openstack/neutron-specs/specs/kilo/nfv-vlan-trunks.html)

     

    Note to all non-ML2 proprietary SDN vendors: in LBaaS v1.0.10, the agent code supports loading SDN-vendor-supplied VLAN and L3 binding interfaces. The VLAN binding interface notifies SDN vendors when the TMOS device requires VLANs to be allowed or pruned on its interfaces. The L3 binding interface notifies SDN vendors when the TMOS device has bound or unbound an L3 address on one of its interfaces so that any L3 ACLs can be changed to allow or reject traffic forwarding. This means any SDN vendor can integrate with f5 LBaaS solutions by simply supplying a VLAN binding or L3 binding interface, which will be loaded as part of the f5 LBaaS agent process.

     

    Whether you are using the LBaaSv1 solution or not, the next question to consider is whether your management client (the agent process in LBaaSv1) can communicate with the TMOS VEs configured as its iControl endpoints. Do you need a floating IP to make this work? Does your security group allow for this?
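
    One quick sanity check from the host running the agent (address and credentials are placeholders) is an iControl REST call; on 11.5+ VEs this should return the TMOS version as JSON if TCP 443 is reachable and the credentials are valid:

    curl -sk -u admin:<password> https://<mgmt-or-floating-ip>/mgmt/tm/sys/version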

     

  • Here is my configuration:

    1 virtual controller running under vanilla KVM with the management VLAN trunked into a bridge.
    3 compute nodes
    eth0 = management network
    eth1 = vxlan endpoint network

    2 controllers just for networking
    eth0 = management network
    eth1 = vxlan endpoint network
    eth2 = 802.1Q tagged for multiple VLANs as external networks.
    

    The problem with the VLAN/tagged networks is that the compute nodes do not have access to the tagged network; they only speak to the network controllers via VXLAN tunnels through br-tun. So if I create a VXLAN network I can use a VLAN network for floating IPs, but I cannot use the floating IPs on the guests directly, because compute does not have access to the tagged VLANs.

    Currently, I have moved away from the original model and OpenStack is able to talk to the LTM VE inline, but I am still having problems setting up the VXLAN portion.

    I have created a network and subnet for external and internal. I am still kind of vague on which networks I am supposed to use in OpenStack; I assumed external would be the VXLAN subnet that my physical network servers use, and internal would be the internal network I created.

    I created router X with Neutron, then created an external/public network with the VXLAN VTEP subnet trunked down and attached it to router X as the gateway subnet, then created a private network and attached that network to router X. When I built the BIG-IP LTM VE, I used a separate management network and appended a separate floating IP, and this works.

    3 interfaces attached to the BIG-IP
    management = internal IP with an external IP as a floating IP
    internal = private subnet with a port attached to the external subnet
    external = public subnet of the Neutron VTEP
    

    I also created a separate VM with CirrOS, attached the internal subnet, and used an IP from the public VXLAN subnet I created as a floating IP. I was able to hit the public floating IP remotely and reach external addresses from inside, basically verifying that the private and public IP pairs work properly from a Neutron perspective.

    Once I got the internal and external networks set up on the BIG-IP, I changed the f5-bigip-lbaas-agent.ini config file and added the following:

    f5_vtep_folder = 'Common'
    f5_vtep_selfip_name = 'vtep'
    advertised_tunnel_types = vxlan 
    

    Then I restarted the f5 agent, went onto the F5 LTM, and added a tunnel called vtep, assigning it an IP from the external network (also the VTEP L3 network for Neutron). NOTE: the IP assigned to the tunnel is not the same IP as the external interface IP, but I did try adding the same IP as well, and I still see the same error in the Neutron f5-bigip-lbaas-agent.log on the network servers:

    MissingVTEPAddress: device foo.foo1.com missing vtep selfip /Common/vtep
    

    This is where I am locked up now .

    Configs:

    neutron server:

    [ml2]
    type_drivers = vlan,vxlan
    tenant_network_types = vxlan,vlan
    mechanism_drivers = openvswitch
    [ml2_type_flat]
    flat_networks = physnet1
    [ml2_type_vlan]
    network_vlan_ranges = physnet1:672:677,physnet1:767:767,physnet1:703:703,physnet1:667:668
    [ml2_type_gre]
    [ml2_type_vxlan]
    vni_ranges = 1001:2000
    vxlan_group = 239.1.1.2
    [securitygroup]
    enable_security_group = True
    

    compute:

    tunnel_types = vxlan
    bridge_mappings = physnet1:br-ex
    

    Maybe this is not the right network model; if there is a more suitable solution, please feel free to chime in.

    Also, based on your response, would not having the SDN Services module cause this error?

    Also, if I wanted to move the F5 LTM outside of Neutron control, maybe onto the KVM host that the OpenStack controller itself is hosted on, how could I get the private IPs that are transferred over VXLAN to work outside of Neutron?