F5 & Cisco ACI Essentials - Take advantage of Policy Based Redirect

Different applications and environments have unique requirements for how traffic is handled. Some applications, because of the nature of their functionality or because of a business need, require that the application server(s) be able to see the real IP of the client making the request.

When the request arrives at the BIG-IP, the BIG-IP can either translate the client's real IP or keep it intact. To keep it intact, the ‘Source Address Translation’ setting on the F5 BIG-IP virtual server is set to ‘None’.

As simple as it may sound to toggle a single setting on the BIG-IP, this change significantly alters traffic flow behavior.

Let’s take an example with some actual values, starting with a simple setup: a standalone BIG-IP with one interface for all traffic (one-arm):

  • Client – 10.168.56.30
  • BIG-IP Virtual IP – 10.168.57.11
  • BIG-IP Self IP – 10.168.57.10
  • Server – 192.168.56.30

Scenario 1: With SNAT

From Client : Src: 10.168.56.30 Dest: 10.168.57.11

From BIG-IP to Server: Src: 10.168.57.10 (Self-IP) Dest: 192.168.56.30

With this, the server responds back to 10.168.57.10 and the BIG-IP takes care of forwarding the traffic to the client. Here the application server sees the IP 10.168.57.10 and not the client IP.

Scenario 2: No SNAT

From Client : Src: 10.168.56.30 Dest: 10.168.57.11

From BIG-IP to Server: Src: 10.168.56.30 Dest: 192.168.56.30

With this, the server responds back to 10.168.56.30, and here is where the complication comes in: the return traffic needs to go back through the BIG-IP, not directly to the real client. One way to achieve this is to set the default gateway of the server to the Self-IP of the BIG-IP, so the server sends the return traffic to the BIG-IP. But what if the server's default gateway cannot be changed, for whatever reason? This is where Policy-Based Redirect helps: the default gateway of the server points to the ACI fabric, and the ACI fabric intercepts the return traffic and sends it over to the BIG-IP.
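
The difference between the two scenarios boils down to which source address the BIG-IP stamps on the server-side connection. A minimal, purely illustrative Python sketch of that decision, using the example addresses above:

```python
def server_side_packet(client_ip: str, self_ip: str, server_ip: str, snat: bool):
    """Return the (src, dst) pair the BIG-IP uses toward the server.

    With SNAT, the BIG-IP substitutes its own Self-IP as the source;
    without SNAT, the client's real IP is preserved.
    """
    src = self_ip if snat else client_ip
    return src, server_ip

# Scenario 1: SNAT enabled -> server sees the Self-IP
print(server_side_packet("10.168.56.30", "10.168.57.10", "192.168.56.30", snat=True))
# ('10.168.57.10', '192.168.56.30')

# Scenario 2: no SNAT -> server sees the real client IP
print(server_side_packet("10.168.56.30", "10.168.57.10", "192.168.56.30", snat=False))
# ('10.168.56.30', '192.168.56.30')
```

In Scenario 2 the server-side source address is one the server has no direct knowledge of how to route back through the BIG-IP, which is exactly the return-path problem PBR solves.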

With this, the advantages of using PBR are three-fold:

  • The server(s) default gateway does not need to point to BIG-IP but can point to the ACI fabric
  • The real client IP is preserved for the entire traffic flow
  • Server-originated traffic does not have to hit the BIG-IP, so the BIG-IP does not need a forwarding virtual server configured to handle that traffic. If the server-originated traffic volume is high, sending it through the BIG-IP would add unnecessary load.


Before we get deeper into the topic of PBR, below are a few links to help you refresh on some of the Cisco ACI and BIG-IP concepts.

Now let’s look at what it takes to configure PBR using a standalone BIG-IP Virtual Edition in one-arm mode.

Network diagram for reference:


To use the PBR feature on the APIC, a service graph is a MUST.

Details on L4-L7 service graph on APIC

To get hands-on experience deploying a service graph (without PBR)

Configuration on APIC

1) Bridge domain ‘F5-BD’

  • Under Tenant->Networking->Bridge domains->’F5-BD’->Policy
  • IP Data plane learning - Disabled

2) L4-L7 Policy-Based Redirect

  • Under Tenant->Policies->Protocol->L4-L7 Policy based redirect, create a new one
  • Name: ‘bigip-pbr-policy’
  • L3 destinations: BIG-IP Self-IP and MAC
  • IP: 10.168.57.10
  • MAC: Find the MAC of interface the above Self-IP is assigned from logging into the BIG-IP (example: 00:50:56:AC:D2:81)
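
One way to look up that MAC is from the BIG-IP command line; a hedged sketch (interface 1.1 is an example, use the interface carrying the Self-IP's VLAN):

```shell
# Query the interface MAC via tmsh
tmsh list net interface 1.1 mac-address

# The VE is Linux underneath, so the kernel view works as well
ip -o link show
```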

3) Logical Device Cluster- Under Tenant->Services->L4-L7, create a logical device

  • Managed – unchecked
  • Name: ‘pbr-demo-bigip-ve’
  • Service Type: ADC
  • Device Type: Virtual (in this example)
  • VMM domain (choose the appropriate VMM domain)
  • Devices: Add the BIG-IP VM from the dropdown and assign it an interface
  • Name: ‘1_1’, VNIC: ‘Network Adaptor 2’
  • Cluster interfaces
  • Name: consumer, Concrete interface Device1/[1_1]
  • Name: provider, Concrete interface: Device1/[1_1]

4) Service graph template

  • Under Tenant->Services->L4-L7->Service graph templates, create a service graph template
  • Give the graph a name: ‘pbr-demo-sgt’ and then drag and drop the logical device cluster (pbr-demo-bigip-ve) to create the service graph
  • ADC: one-arm
  • Route redirect: true

5) Click on the service graph created, then go to the Policy tab and make sure the connections for the connectors C1 and C2 are set as follows:

  • Connector C1
  • Direct connect – False (not mandatory to set to 'True', because PBR is not enabled on the consumer connector for the consumer-to-VIP traffic)
  • Adjacency type – L3
  • Connector C2
  • Direct connect - True
  • Adjacency type - L3

6) Apply the service graph template

  • Right-click on the service graph template and apply the service graph
  • Choose the appropriate consumer endpoint group (‘App’) and provider endpoint group (‘Web’), and provide a name for the new contract
  • For the connector select the following:
  • BD: ‘F5-BD’
  • L3 destination – checked
  • Redirect policy – ‘bigip-pbr-policy’
  • Cluster interface – ‘provider’

Once the service graph is deployed, it is in the applied state and the network path between the consumer, the BIG-IP, and the provider has been successfully set up on the APIC.

7) Verify the connector configuration for PBR. Go to the Device selection policy under Tenant->Services->L4-L7. Expand the menu and click on the device selection policy deployed for your service graph.

  • For the consumer connector where PBR is not enabled
  • Connector name - Consumer
  • Cluster interface - 'provider'
  • BD- ‘F5-BD’
  • L3 destination – checked
  • Redirect policy – Leave blank (no selection)
  • For the provider connector where PBR is enabled:
  • Connector name - Provider
  • Cluster interface - 'provider'
  • BD - ‘F5-BD’
  • L3 destination – checked
  • Redirect policy – ‘bigip-pbr-policy’


Configuration on BIG-IP

1) VLAN/Self-IP/Default route

  • Default route – 10.168.57.1
  • Self-IP – 10.168.57.10
  • VLAN – 4094 (untagged) – for a VE, the tagging is taken care of by vCenter

2) Nodes/Pool/VIP

  • VIP – 10.168.57.11
  • Source address translation on VIP: None
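
The two steps above can be sketched in tmsh as follows. This is a hedged sketch, not the article's exact configuration: the object names, the service port (80), and the http profile are illustrative assumptions; only the IPs, the VLAN tag, and 'Source Address Translation: None' come from the article.

```shell
# 1) VLAN, Self-IP, and default route (one-arm)
tmsh create net vlan vlan4094 tag 4094 interfaces add { 1.1 { untagged } }
tmsh create net self 10.168.57.10/24 vlan vlan4094 allow-service default
tmsh create net route default gw 10.168.57.1

# 2) Pool and virtual server with source address translation disabled
tmsh create ltm pool web_pool members add { 192.168.56.30:80 }
tmsh create ltm virtual pbr_demo_vs destination 10.168.57.11:80 \
    pool web_pool profiles add { http } source-address-translation { type none }
tmsh save sys config
```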

3) An iRule (at the end of this article) that can be helpful for debugging

A few differences in configuration apply when the BIG-IP is a Virtual Edition and is set up in a high availability pair:

1) BIG-IP: Set MAC Masquerade (https://support.f5.com/csp/article/K13502)

2) APIC: Logical device cluster

  • Promiscuous mode – enabled
  • Add both BIG-IP devices as part of the cluster 

3) APIC: L4-L7 Policy-Based Redirect

  • L3 destinations: Enter the Floating BIG-IP Self-IP and MAC masquerade
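
For reference, MAC masquerade is set on the floating traffic group from tmsh; the MAC below is an illustrative locally administered address (see K13502 for how to choose one):

```shell
# Assign a MAC masquerade address to the floating traffic group
tmsh modify cm traffic-group traffic-group-1 mac 02:50:56:00:00:01
tmsh save sys config
```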

------------------------------------------------------------------------------------------------------------------------------------------------------------------

The configuration is complete; let’s take a look at the traffic flows.

Client-> F5 BIG-IP -> Server



Server-> F5 BIG-IP -> Client

In Step 2, when the traffic is returned from the server, ACI uses the Self-IP and MAC that were defined in the L4-L7 redirect policy to send the traffic to the BIG-IP.



iRule to help with debugging on the BIG-IP

when LB_SELECTED {
 log local0. "=================================================="
 log local0. "Selected server [LB::server]"
 log local0. "=================================================="
}
 
when HTTP_REQUEST {
  set LogString "[IP::client_addr] -> [IP::local_addr]"
  log local0. "=================================================="
  log local0. "REQUEST -> $LogString"
  log local0. "=================================================="
}
 
when SERVER_CONNECTED {
 log local0. "Connection from [IP::client_addr] Mapped -> [serverside {IP::local_addr}] \
   -> [IP::server_addr]"
}
when HTTP_RESPONSE {
  set LogString "Server [IP::server_addr] -> [IP::local_addr]"
  log local0. "=================================================="
  log local0. "RESPONSE -> $LogString"
  log local0. "=================================================="
}

Output seen in /var/log/ltm on the BIG-IP, look at the event <SERVER_CONNECTED>

Scenario 1: No SNAT -> Client IP is preserved
Rule /Common/connections <HTTP_REQUEST>: Src: 10.168.56.30 -> Dest: 10.168.57.11
Rule /Common/connections <SERVER_CONNECTED>: 
Src: 10.168.56.30 Mapped -> 10.168.56.30 -> Dest: 192.168.56.30
Rule /Common/connections <HTTP_RESPONSE>: Src: 192.168.56.30 -> Dest: 10.168.56.30

If you are curious about the iRule output when SNAT is enabled on the BIG-IP, enable AutoMap on the virtual server.

Scenario 2: With SNAT -> Client IP not preserved
Rule /Common/connections <HTTP_REQUEST>: Src: 10.168.56.30 -> Dest: 10.168.57.11
Rule /Common/connections <SERVER_CONNECTED>: 
Src: 10.168.56.30 Mapped -> 10.168.57.10 -> Dest: 192.168.56.30
Rule /Common/connections <HTTP_RESPONSE>: Src: 192.168.56.30 -> Dest: 10.168.56.30
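
A quick way to confirm from a <SERVER_CONNECTED> log line whether the client IP survived is to compare the source and mapped addresses. A small illustrative parser (the log format is taken from the output above):

```python
import re

def client_ip_preserved(log_line: str) -> bool:
    """True if the SERVER_CONNECTED log shows the client IP unchanged toward the server."""
    m = re.search(r"Src: (\S+) Mapped -> (\S+?)\s*-> Dest:", log_line)
    if not m:
        raise ValueError("unrecognized log format")
    src, mapped = m.groups()
    return src == mapped

# Scenario 1 (no SNAT): source and mapped address match
print(client_ip_preserved("Src: 10.168.56.30 Mapped -> 10.168.56.30 -> Dest: 192.168.56.30"))  # True

# Scenario 2 (SNAT): the BIG-IP substituted its Self-IP
print(client_ip_preserved("Src: 10.168.56.30 Mapped -> 10.168.57.10 -> Dest: 192.168.56.30"))  # False
```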

References:

ACI PBR whitepaper:

https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739971.html

Troubleshooting guide:

https://www.cisco.com/c/dam/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/troubleshooting/Cisco_TroubleshootingApplicationCentricInfrastructureSecondEdition.pdf

Layer4-Layer7 services deployment guide

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/L4-L7-services/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401_chapter_011.html

Service graph:

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/L4-L7-services/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401_chapter_0111.html

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/L4-L7-services/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401.pdf

Published May 19, 2020
Version 1.0
