BIG-IP VE in Red Hat OpenShift Virtualization
Running BIG-IP VE in OpenShift Virtualization allows VM workloads and modern apps/Kubernetes to converge, simplifying management and operations. This article covers best practices for running BIG-IP VE in the KubeVirt (KVM) implementation by Red Hat. All the best practice recommendations in this article are aligned with the OpenShift Virtualization Reference Implementation Guide.

Using BIG-IP GTM to Integrate with Amazon Web Services
This is the latest in a series of DNS articles that I've been writing over the past couple of months. This article is taken from a fantastic solution that Joe Cassidy developed. So, thanks to Joe for developing this solution, and thanks for the opportunity to write about it here on DevCentral. As a quick reminder, my first six articles are:

- Let's Talk DNS on DevCentral
- DNS The F5 Way: A Paradigm Shift
- DNS Express and Zone Transfers
- The BIG-IP GTM: Configuring DNSSEC
- DNS on the BIG-IP: IPv6 to IPv4 Translation
- DNS Caching

The Scenario

Let's say you are an F5 customer who has external GTMs and LTMs in your environment, but you are not leveraging them for your main website (example.com). Your website is a zone sitting on your Windows DNS servers in your DMZ that round-robin load balance to some backend webservers. You've heard all about the benefits of the cloud (and rightfully so), and you want to move your web content to the Amazon cloud. Nice choice! As you were making the move to Amazon, you were given instructions by Amazon to simply CNAME your domain to two unique Amazon Elastic Load Balancer (ELB) domains. Amazon's request was not feasible for a few reasons, one of which is that a CNAME at the zone apex violates the DNS RFCs (a CNAME cannot coexist with the SOA and NS records that must exist at the apex of example.com). So, you engage in a series of architecture meetings to figure all this stuff out. Amazon told your Active Directory/DNS team to CNAME www.example.com and example.com to two AWS clusters: us-east.elb.amazonaws.com and us-west.elb.amazonaws.com. You couldn't use Microsoft DNS to perform a basic CNAME of these records, because a single name cannot be a CNAME to multiple targets. Additionally, you couldn't point to IPs because Amazon said they will be using dynamic IPs for your platform. So, what to do, right?

The Solution

The good news is that you can use the functionality and flexibility of your F5 technology to easily solve this problem. Here are a few steps that will guide you through this specific scenario:

1. Redirect requests for http://example.com to http://www.example.com and apply the redirect to your Virtual Server (1.2.3.4:80). You can redirect using HTTP Class profiles (v11.3 and prior), using a policy with Centralized Policy Matching (v11.4 and newer), or you can always write an iRule to redirect!
2. Make www.example.com a CNAME record to example.lb.example.com, where *.lb.example.com is a sub-delegated zone of example.com that resides on your BIG-IP GTM.
3. Create a global traffic pool "aws_us_east" that contains no members but rather a CNAME to us-east.elb.amazonaws.com. Create another global traffic pool "aws_us_west" that contains no members but rather a CNAME to us-west.elb.amazonaws.com. The following screenshot shows the details of creating the global traffic pools (using v11.5). Notice you have to select the "Advanced" configuration to add the CNAME.
4. Create a global traffic Wide IP example.lb.example.com with two pool members, "aws_us_east" and "aws_us_west". The following screenshot shows the details.
5. Create two global traffic regions: "eastern" and "western". The screenshot below shows the details of creating the traffic regions.
6. Create global traffic topology records using "Request Source: Region is eastern" and "Destination Pool is aws_us_east". Repeat this for the western region using the aws_us_west pool. The screenshot below shows the details of creating these records.
7. Modify the pool settings under the Wide IP example.lb.example.com to use "Topology" as the load balancing method. See the screenshot below for details.

How it all works...
Here's the flow of events that take place as a user types in the web address and ultimately receives the correct IP address:

1. The external client types http://example.com into their web browser
2. Internet DNS resolution takes place and maps example.com to your Virtual Server address: IN A 1.2.3.4
3. An HTTP request is directed to 1.2.3.4:80
4. Your LTM checks for a profile; the HTTP profile is enabled, the redirect is applied, and the user request is answered with a 301 response code
5. The external client receives the 301 response code and their browser makes a new request to http://www.example.com
6. Internet DNS resolution takes place and maps www.example.com to IN CNAME example.lb.example.com
7. Internet DNS resolution continues, mapping example.lb.example.com to your GTM-configured Wide IP
8. The Wide IP load balances the request to one of the pools based on the configured logic: Round Robin, Global Availability, Topology, or Ratio (we chose "Topology" for our solution)
9. The GTM-configured pool contains a CNAME to either the us_east or us_west AWS data center
10. Internet DNS resolution takes place, mapping the request to the ELB hostname (i.e. us-west.elb.amazonaws.com) and returning two A records
11. The external client's HTTP request is mapped to one of the returned IP addresses
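If you'd like to watch this resolution chain happen from a client's perspective, here is a minimal sketch using the third-party dnspython library. The hostnames are this article's placeholders (www.example.com, example.lb.example.com), not real records, so point it at your own domain before drawing conclusions.

# Trace the CNAME chain hop by hop until A records are returned.
# Requires dnspython: pip install dnspython
import dns.resolver

def trace_chain(name, max_hops=10):
    for _ in range(max_hops):
        try:
            # Ask specifically for a CNAME at this name.
            answer = dns.resolver.resolve(name, "CNAME")
            target = answer[0].target.to_text()
            print(f"{name} -> CNAME -> {target}")
            name = target
        except dns.resolver.NoAnswer:
            # No CNAME left; resolve the final A records (the ELB IPs).
            for rr in dns.resolver.resolve(name, "A"):
                print(f"{name} -> A -> {rr.address}")
            return
    raise RuntimeError("CNAME chain too long")

trace_chain("www.example.com")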
And, there you have it. With this solution, you can integrate AWS using your existing LTM and GTM technology! I hope this helps, and I hope you can implement this and other solutions using all the flexibility and power of your F5 technology.

F5 BIG-IP deployment with Red Hat OpenShift - keeping client IP addresses and egress flows

Controlling the egress traffic in OpenShift allows the BIG-IP to be used for several use cases:

- Keeping the source IP of the ingress clients
- Providing highly scalable SNAT for egress flows
- Providing security functionality for egress flows

OWASP Tactical Access Defense Series: Broken Function Level Authorization (BFLA)
Broken Function Level Authorization (BFLA) is a type of security vulnerability in web applications where an attacker can access functionality or perform actions they should not be authorized to perform. This problem happens when an application doesn't correctly enforce access control on its functions or endpoints, letting users do things that should not be allowed. In this article, we go through the API5 item from the OWASP Top 10 API Security Risks and explore the role of F5 BIG-IP Access Policy Manager (APM) in our arsenal.

Let's consider a test application where each retail agent submits their sales data but cannot retrieve any data from the system. In HTTP terms, the retail agent can POST but is not allowed to perform GET, while the manager can perform GET to check agents' performance and the collected data.

Mitigating Risks with BIG-IP APM

BIG-IP APM per-request granularity: with per-request granularity, organizations can dynamically enforce access policies based on various factors such as user identity, device characteristics, and contextual information. This enables organizations to implement fine-grained access controls at the API level, mitigating the risks associated with Broken Function Level Authorization.

Key Features:

Dynamic Access Control Policies: BIG-IP APM empowers organizations to define dynamic access control policies that adapt to changing conditions in real time. By evaluating each API request against these policies, BIG-IP APM ensures that authorized users can only perform specific authorized functions (actions) on specified resources.

Granular Authorization Rules: BIG-IP APM enables organizations to define granular authorization rules that govern access to individual objects or resources within the API ecosystem. By enforcing strict permission checks at the object level, F5 BIG-IP APM prevents unauthorized function calls.
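To make the control concrete, here is a minimal sketch of the function-level check being enforced, using the retail example above. The roles, endpoint, and table are hypothetical illustrations, not APM configuration; APM expresses this same logic through per-request policies.

# Hypothetical function-level authorization table for the retail example:
# agents may only submit (POST) sales data, managers may only read (GET) it.
ALLOWED = {
    ("agent",   "POST", "/api/sales"): True,
    ("manager", "GET",  "/api/sales"): True,
}

def authorize(role, method, path):
    # Deny by default; permit only explicitly allowed (role, function) pairs.
    return ALLOWED.get((role, method.upper(), path), False)

# An agent trying to read everyone's sales data is a BFLA attempt:
assert authorize("agent", "POST", "/api/sales") is True
assert authorize("agent", "GET", "/api/sales") is False   # blocked
assert authorize("manager", "GET", "/api/sales") is True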
Related Content

- F5 BIG-IP Access Policy Manager | F5
- Introduction to OWASP API Security Top 10 2023
- OWASP Top 10 API Security Risks – 2023 - OWASP API Security Top 10
- API Protection Concepts
- OWASP Tactical Access Defense Series: How BIG-IP APM Strengthens Defenses Against OWASP Top 10
- OWASP Tactical Access Defense Series: Broken Object-Level Authorization and BIG-IP APM
- F5 Hybrid Security Architectures (Part 5 - F5 XC, BIG-IP APM, CIS, and NGINX Ingress Controller)
- OWASP Tactical Access Defense Series: Broken Authentication and BIG-IP APM
- OWASP Tactical Access Defense Series: Broken Object Property-Level Authorization and BIG-IP APM
- OWASP Tactical Access Defense Series: Unrestricted Resource Consumption

Transparent load balancing in Azure, Part 2

This article explains an advanced configuration of F5 BIG-IP behind Azure LB when running in Active/Standby mode and using the Floating IP option for forwarding traffic. It is intended as a sequel to a previous article I read, but did not author, titled Transparent Load Balancing in Azure. If you read the previous article, you may see I've left a comment that briefly describes some necessary details left out of the original article. Read and follow these instructions if all of the following conditions are met:

- You want to run 2x BIG-IP devices in Active/Standby configuration
- You plan to use Azure LB to provide High Availability (HA), as opposed to other methods like DNS or the Cloud Failover Extension
- Your intention is to have your BIG-IP's Virtual Server IP address (VIP) be the same as the frontend ipconfig on the Azure LB
- You have checked the "Floating IP" checkbox on your Azure LB rule that forwards traffic to BIG-IP

Options for High Availability (HA) of Network Virtual Appliances (NVAs) in Azure

This article does not cover the advantages and disadvantages of different methods to achieve HA in Azure. There are multiple approaches:

- Azure Load Balancer. This approach uses a simple L4 load balancer in Azure to disaggregate traffic across multiple devices. Active/Standby, Active/Active, or multiple standalone devices are options here. This article focuses on a scenario using Active/Standby BIG-IP devices behind Azure LB, when the "Floating IP" checkbox is checked on the Azure LB rule.
- F5 Cloud Failover Extension (CFE). This automation approach uses software on the BIG-IP device to ensure High Availability across devices that are Active/Standby, without requiring Azure LB. It can move IP addresses between Azure network interfaces (emulating the Gratuitous ARP that happens on-prem). It can update a route table to ensure the default route (0.0.0.0/0) for a subnet sends response traffic to remote clients back via the active BIG-IP. It can also update a route table to point an entire CIDR block dedicated to VIPs at the active device; this is also referred to as an "alien range" approach.
- DNS Load Balancing. A simple method for High Availability between devices is using DNS and multiple standalone appliances. However, DNS load balancing within a local site is not common (although GSLB is a common practice across geographic regions).
- Other approaches. Azure offers a Gateway Load Balancer that F5 supports, but this requires advanced knowledge, and you might even consider BGP in some cloud scenarios. These are out of scope for this article.

Common Architecture of Azure LB with BIG-IP pair

The most common way to run Active/Standby BIG-IP devices behind Azure LB looks like the following diagram.

Taking a closer look at IP addressing with this common architecture

Let's take a look at where we configure IP addresses based on the most common approach. There are a few things here that sometimes confuse the first-time cloud admin:

- This solution requires 2x IP addresses for every Virtual Server on BIG-IP. You could create 2 separate VS's, each with 1 IP address. You could also create a single VS with 2x IP's using Shared Address lists, or your Virtual Address could be a /30 range. Either way, it's different than what you're accustomed to on-prem.
- This means IP addresses will get used up twice as fast as we're accustomed to.
- Multiple destination NATs can sometimes confuse app owners (although normally a network admin has no problem understanding this).

When and why to use Azure LB's "Floating IP" option

The floating IP checkbox on your Azure LB rule can be understood as telling Azure LB: "do not perform Destination NAT for this traffic". This is similar to F5's nPath (aka asymmetric, or Direct Server Return) architecture. Why would you configure as above?

- No destination NAT at Azure LB can make overall IP addressing easier
- 1x IP address on your BIG-IP VIP is more like the on-prem config we are familiar with

How to configure BIG-IP when "Floating IP" is used

The previous option is an alternative approach, but it requires a semi-advanced workaround for Azure health checks. It's important to understand this workaround, and if you don't, just stick with the common approach outlined first in this article. If you use Floating IP with your Azure LB rule, health probes from Azure LB will target the primary ipconfig on the Azure NIC. In BIG-IP, that's your Self IP. And your Self IP will always respond healthy if it is probed, even on the Standby device (of course, port lockdown settings on Self IPs must allow health checks). Put another way: your Standby BIG-IP will respond as healthy to Azure LB, and Azure LB will send data plane traffic to it. This will cause problems, so we must make our Standby BIG-IP "unhealthy" in Azure LB. Enter VIP targeting and iRules. Do this:

1. Create an LB rule on Azure LB sending traffic to the primary ipconfig on the dataplane NIC on the BIG-IP devices.
2. Configure a health probe for this rule. An HTTP health check with default settings is fine.
3. Create an iRule on BIG-IP:

when HTTP_REQUEST {
    HTTP::respond 200 content "device is active"
}

4. Create a VIP called /Common/unroutable_vip and give it an IP address of 255.255.255.254. Attach the iRule from the previous step. This VIP will only be reachable on the Active device, and is not routable from outside of BIG-IP.
5. Create another iRule:

when HTTP_REQUEST {
    virtual /Common/unroutable_vip
}

6. On BIG-IP Device 1, create a VIP with the same IP address as the Self IP. This is allowed. Listen on port 80, add an HTTP profile, and attach the iRule from the previous step.
7. On BIG-IP Device 2, notice that the VIP created in the previous step is not sync'd to Device 2. Repeat the previous step with a VIP created on the same IP address as Device 2's Self IP.

Now, your Azure LB will health check both devices, sending HTTP health checks to both devices and hitting the VIPs you created on the Self IPs. However, only the active device will successfully forward traffic to the VIP called "unroutable_vip"; the Standby device will fail to do this. This means that Azure LB will believe that the Active device is healthy and the Standby is down. Don't forget you'll need the Enable IP Forwarding checkbox checked on your BIG-IP's interfaces in the Azure portal.
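If you want to sanity-check the probe behavior from a jump host in the VNet, here is a minimal sketch (Python standard library only) that probes both devices the way Azure LB's HTTP probe would: only the Active device should return 200 on the VIP you created on its Self IP. The 10.0.1.11 and 10.0.1.12 addresses are hypothetical Self IPs; substitute your own.

import urllib.request, urllib.error

for self_ip in ("10.0.1.11", "10.0.1.12"):
    url = f"http://{self_ip}/"
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            print(f"{self_ip}: HTTP {resp.status} -> Azure LB would mark it UP")
    except (urllib.error.URLError, OSError) as exc:
        print(f"{self_ip}: probe failed ({exc}) -> Azure LB would mark it DOWN")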
Conclusion

If all of the above makes sense, feel free to use Azure LB's Floating IP checkbox on your Azure LB rules so that you can have the benefits of our last diagram. They are operational benefits only (fewer IP addresses used, and potentially easier for operators to understand); there is no functional or performance benefit to this method.

Thanks for reading, and please ask questions via comments or message me directly using this website.

F5 Hybrid Security Architectures: Part 3 F5 XC API Protection and NGINX Ingress
Introduction: For those of you following along with the F5 Hybrid Security Architectures series, welcome back! If this is your first foray into the series and you would like some background, have a look at the intro article. This series uses the F5 Hybrid Security Architectures GitHub repo and CI/CD platform to deploy F5-based hybrid security solutions built on DevSecOps principles. This repo is a community-supported effort to provide not only a demo and workshop, but also a stepping stone for utilizing these practices in your own F5 deployments. If you find any bugs or have any enhancement requests, open an issue or, better yet, contribute!

API Security: APIs are an integral part of our daily routine, facilitating everything from critical to mundane tasks. From banking and ride-sharing apps to the weather updates we check before stepping out, APIs enable these functionalities. Given the sensitive nature of the data that can be exposed by unprotected APIs, the need for effective security cannot be stressed enough. With F5 Distributed Cloud Web App and API Protection, security teams can discover, inventory, and secure these critical APIs. Here in this example solution, we will be using DevSecOps practices to deploy an AWS Elastic Kubernetes Service (EKS) cluster running the Brewz test web application serviced by F5 NGINX Ingress Controller. To secure our application and APIs, we will deploy F5 Distributed Cloud's Web App and API Protection service. This will provide us API Discovery and Security as well as a traditional Web Application Firewall and Malicious User Detection.

Distributed Cloud WAAP: Available for SaaS-based deployments, it provides comprehensive security solutions designed to safeguard web applications and APIs from a wide range of cyber threats. This solution utilizes a distributed cloud architecture, which enables it to provide real-time protection and scale to meet the needs of large enterprises.

NGINX Ingress Controller for Kubernetes: A lightweight software solution that helps manage app connectivity at the edge of a Kubernetes cluster by directing requests to the appropriate services and pods. It provides advanced load balancing, routing, identity, and security, as well as monitoring and observability features.

XC WAAP + NGINX Ingress Controller Workflow

GitHub Repo: F5 Hybrid Security Architectures

Prerequisites:

- F5 Distributed Cloud Account (F5 XC)
- Create an F5 XC API certificate
- NGINX Ingress Controller license
- AWS Account (due to the assets being created, free tier will not work)
- Terraform Cloud Account
- GitHub Account

Assets

- xc: F5 Distributed Cloud WAAP
- nic: NGINX Ingress Controller
- infra: AWS Infrastructure (VPC, IGW, etc.)
- eks: AWS Elastic Kubernetes Service
- brewz: Brewz SPA test web application

Tools

- Cloud Provider: AWS
- Infrastructure as Code: Terraform
- Infrastructure as Code State: Terraform Cloud
- CI/CD: GitHub Actions

Terraform Cloud Workspaces: Create a workspace for each asset in the workflow chosen.

- Workflow: xc-nic
- Workspaces: infra, eks, nic, brewz, xc

Your Terraform Cloud console should resemble the following:

Variable Set: Create a Variable Set with the following values.
IMPORTANT: Ensure sensitive values are appropriately marked.

- AWS_ACCESS_KEY_ID: Your AWS Access Key ID - Environment Variable
- AWS_SECRET_ACCESS_KEY: Your AWS Secret Access Key - Environment Variable
- AWS_SESSION_TOKEN: Your AWS Session Token - Environment Variable
- VOLT_API_P12_FILE: Your F5 XC API certificate. Set this to api.p12 - Environment Variable
- VES_P12_PASSWORD: Set this to the password you supplied when creating your F5 XC API key - Environment Variable
- nginx_jwt: Your NGINX Java Web Token associated with your NGINX license - Terraform Variable
- ssh_key: Your ssh key for access to created compute assets - Terraform Variable
- tf_cloud_organization: Your Terraform Cloud Organization name - Terraform Variable

Your Variable Set should resemble the following:

GitHub: Fork and clone the repo F5 Hybrid Security Architectures.

Actions Secrets: Create the following GitHub Actions secrets in your forked repo.

- XC_P12: The base64 encoded F5 XC API certificate
- TF_API_TOKEN: Your Terraform Cloud API token
- TF_CLOUD_ORGANIZATION: Your Terraform Cloud Organization
- TF_CLOUD_WORKSPACE_workspace: Create one for each workspace used in your workflow. Example: TF_CLOUD_WORKSPACE_XC would be created with the value xc

Your GitHub Actions Secrets should resemble the following:

Setup Deployment Branch and Terraform Local Variables:

Step 1: Check out a branch for the deploy workflow using the following naming convention. For the xc-nic workflow, the deployment branch is: deploy-xcapi-nic

Step 2: Rename infra/terraform.tfvars.examples to infra/terraform.tfvars and add the following data:

#Global
project_prefix = "Your project identifier"
resource_owner = "You"

#AWS
aws_region = "Your AWS region" ex: us-west-1
azs = "Your AWS availability zones" ex: ["us-west-1a", "us-west-1b"]

#Assets
nic = true
nap = false
bigip = false
bigip-cis = false

Step 3: Rename xc/terraform.tfvars.examples to xc/terraform.tfvars and add the following data:

#XC Global
api_url = "https://.console.ves.volterra.io/api"
xc_tenant = "Your XC Tenant Name"
xc_namespace = "Your XC namespace"

#XC LB
app_domain = "Your App Domain"

#XC WAF
xc_waf_blocking = true

#XC AI/ML Settings for MUD, APIP - NOTE: Only set if using AI/ML settings from the shared namespace
xc_app_type = []
xc_multi_lb = false

#XC API Protection and Discovery
xc_api_disc = true
xc_api_pro = true
xc_api_spec = ["Path to uploaded API spec"]

*See the screenshot below for how to obtain the xc_api_spec value. Navigate to Manage -> Files -> Swagger Files, click the three dots next to your OAS, and choose "Copy Latest Version's URL". Paste this into xc_api_spec in xc/terraform.tfvars.

#XC Bot Defense
xc_bot_def = false

#XC DDoS
xc_ddos = false

#XC Malicious User Detection
xc_mud = true

Step 4: Modify line 16 in the .gitignore: comment out the *.tfvars line with # and save the file.

Step 5: Commit your changes.
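A quick note on the XC_P12 secret created above: it must contain the base64-encoded contents of the API certificate, not the raw binary file. Here is a minimal sketch, assuming the certificate is saved as api.p12 in the current directory (matching the VOLT_API_P12_FILE setting):

import base64
from pathlib import Path

# Read the binary certificate and print its base64 form.
encoded = base64.b64encode(Path("api.p12").read_bytes()).decode("ascii")
print(encoded)  # paste this value into the XC_P12 GitHub Actions secret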
Deployment:

Step 1: Push your deploy branch to the forked repo.

Step 2: Back in GitHub, navigate to the Actions tab of your forked repo and monitor your build.

Step 3: Once the pipeline completes, verify your assets were deployed to AWS and F5 XC.

Step 4: Check your Terraform outputs for XC and verify your app is available by navigating to the FQDN.

API Discovery and Security Dashboards:

After leaving the Brewz test app deployed for a while, we can start to see the API graph form. The F5 XC WAAP platform learns the schema structure of the API by analyzing sampled request data, then reverse-engineers the schema to generate an OpenAPI spec. The platform validates what is deployed versus what is discovered and tags any Shadow APIs that are found. We can also check the dashboards for any attacks that may have occurred while we were waiting for discovery to finish. The internet being what it is, it didn't take long for the platform to protect us against some attacks.

Deployment Teardown:

Step 1: From your deployment branch, check out a branch for the destroy workflow using the following naming convention. For the xc-nic workflow, the destroy branch is: destroy-xcapi-nic

Step 2: Push your destroy branch to the forked repo.

Step 3: Back in GitHub, navigate to the Actions tab of your forked repo and monitor your build.

Step 4: Once the pipeline completes, verify your assets were destroyed.

Conclusion:

In this article we have shown how to utilize the F5 Hybrid Security Architectures GitHub repo and CI/CD pipeline to deploy a tiered security architecture utilizing F5 XC WAAP and NGINX Ingress Controller to protect a test API running in AWS EKS. While the code and security policies deployed are generic and not inclusive of all use cases, they can be used as a stepping stone for deploying F5-based hybrid architectures in your own environments.

Workloads are increasingly deployed across multiple diverse environments and application architectures. Organizations need the ability to protect their essential applications regardless of deployment or architecture circumstances. Equally important is the need to deploy these protections with the same flexibility and speed as the apps they protect. With the F5 WAF portfolio, coupled with DevSecOps principles, organizations can deploy and maintain industry-leading security without sacrificing the time to value of their applications. Not only can Edge and Shift Left principles exist together, but they can also work in harmony to provide a more effective security solution.

Article Series:

- F5 Hybrid Security Architectures (Intro - One WAF Engine, Total Flexibility)
- F5 Hybrid Security Architectures (Part 1 - F5's Distributed Cloud WAF and BIG-IP Advanced WAF)
- F5 Hybrid Security Architectures (Part 2 - F5's Distributed Cloud WAF and NGINX App Protect WAF)
- F5 Hybrid Security Architectures (Part 3 - F5 XC API Protection and NGINX Ingress Controller)
- F5 Hybrid Security Architectures (Part 4 - F5 XC BOT and DDoS Defense and BIG-IP Advanced WAF)
- F5 Hybrid Security Architectures (Part 5 - F5 XC, BIG-IP APM, CIS, and NGINX Ingress Controller)

For further information or to get started:

- F5 Distributed Cloud Platform (Link)
- F5 Distributed Cloud WAAP Services (Link)
- F5 Distributed Cloud WAAP YouTube series (Link)
- F5 Distributed Cloud WAAP Get Started (Link)

F5 Distributed Cloud (XC) Global Applications Load Balancing in Cisco ACI
Introduction

F5 Distributed Cloud (XC) simplifies cloud-based DNS management with global server load balancing (GSLB) and disaster recovery (DR). F5 XC efficiently directs application traffic across environments globally, performs health checks, and automates responses to activities and events to maintain high application performance and availability. In this article, we will discuss how we can ensure high application performance, availability, and robustness by using XC to load balance global applications across public clouds and Cisco Application Centric Infrastructure (ACI) sites that are geographically dispersed. We will look at two different XC-in-ACI use cases. Each of them uses a different approach for global application delivery and leverages a different XC feature to load balance the applications globally and for disaster recovery.

XC DNS Load Balancer

Our first XC-in-ACI use case is very commonly seen: we use a traditional network-centric approach for global application delivery and disaster recovery, relying on our existing network infrastructure to provide global application connectivity and deploying GSLB to load balance the applications across sites globally and for disaster recovery. In our example, we will show you how to use the XC DNS Load Balancer to load balance a global application across ACI sites that are geographically dispersed. One of the many advantages of using the XC DNS Load Balancer is that we no longer need to manage GSLB appliances. Also, we can expect high DNS performance thanks to the XC global infrastructure. In addition, we have a single pane of glass, the XC console, to manage all of our services such as multi-cloud networking, application delivery, DNS services, WAAP, etc.

Example Topology

Here in our example, we use the Distributed Cloud (XC) DNS Load Balancer to load balance our global application hello.bd.f5.com, which is deployed in a hybrid multi-cloud environment across two ACI sites located in San Jose and New York. Here are some highlights at each ACI site from our example:

New York location

- XC CE is deployed in ACI using layer three attached with BGP
- XC advertises custom VIP 10.10.215.215 to ACI via BGP
- XC custom VIP 10.10.215.215 has an origin server 10.131.111.88 on AWS
- BIG-IP is integrated into ACI
- BIG-IP has a public VIP 12.202.13.149 that has two pool members: on-premise origin server 10.131.111.161 and XC custom VIP 10.10.215.215

San Jose location

- XC CE is deployed in ACI using layer three attached with BGP
- XC advertises custom VIP 10.10.135.135 to ACI via BGP
- XC custom VIP 10.10.135.135 has an origin server 10.131.111.88 on Azure
- BIG-IP is integrated into Cisco ACI
- BIG-IP has a public VIP 12.202.13.147 that has two pool members: on-premise origin server 10.131.111.55 and XC custom VIP 10.10.135.135

*Note: Click here to review how to deploy XC CE in ACI using layer three attached with BGP.

DNS Load Balancing Rules

A DNS Load Balancer is an ingress controller for the DNS queries made to your DNS servers. The DNS Load Balancer receives the requests and answers with an IP address from a pool of members based on the configured load balancing rules. On the XC console, go to "DNS Management" -> "DNS Load Balancer Management" to create a DNS Load Balancer and then define the load balancing rules.
Here in our example, we created a DNS Load Balancer and defined the load balancing rules for our global application hello.bd.f5.com (note: as a prerequisite, F5 XC must be providing primary DNS for the domain):

Rule #1: If the DNS request to hello.bd.f5.com comes from the United States or United Kingdom, respond with BIG-IP VIP 12.202.13.149 in the DNS response, so that the application traffic will be directed to the New York ACI site and forwarded to an origin server that is located in AWS or on-premise.

Rule #2: If the DNS request to hello.bd.f5.com comes from the United States or United Kingdom and the New York ACI site becomes unavailable, respond with BIG-IP VIP 12.202.13.147 in the DNS response, so that the application traffic will be directed to the San Jose ACI site and forwarded to an origin server that is located on-premise or in Azure.

Rule #3: If the DNS request to hello.bd.f5.com comes from somewhere outside of the United States or United Kingdom, respond with BIG-IP VIP 12.202.13.147 in the DNS response, so that the application traffic will be directed to the San Jose ACI site and forwarded to an origin server that is located on-premise or in Azure.

Validation

Now, let's see what happens. When a machine located in the United States tries to reach hello.bd.f5.com and both ACI sites are up, the traffic is directed to the New York ACI site and forwarded to an origin server that is located on-premise or in AWS, as expected.

When a machine located in the United States tries to reach hello.bd.f5.com and the New York ACI site is down or becomes unavailable, the traffic is re-directed to the San Jose ACI site and forwarded to an origin server that is located on-premise or in Azure, as expected.

When a machine tries to access hello.bd.f5.com from outside of the United States or United Kingdom, it is directed to the San Jose ACI site and forwarded to an origin server that is located on-premise or in Azure, as expected.

On the XC console, go to "DNS Management" and select the appropriate DNS zone to view the Dashboard for information such as DNS traffic distribution across the globe and query types, and the Requests view for DNS request info.
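Before moving on, here is a small sketch that expresses the three rules above in code. This is only an illustration of the policy, not the actual XC rule engine, and the site-health input stands in for the DNS LB health checks.

US_UK = {"US", "UK"}

def answer(client_country, ny_site_up):
    # Return the BIG-IP VIP the DNS LB would hand back.
    if client_country in US_UK:
        # Rule 1: prefer New York; Rule 2: fail over to San Jose if NY is down.
        return "12.202.13.149" if ny_site_up else "12.202.13.147"
    # Rule 3: everyone else goes to San Jose.
    return "12.202.13.147"

assert answer("US", ny_site_up=True) == "12.202.13.149"
assert answer("UK", ny_site_up=False) == "12.202.13.147"
assert answer("AU", ny_site_up=True) == "12.202.13.147"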
XC HTTP Load Balancer

Our second XC-in-ACI use case uses a different approach for global application delivery and disaster recovery. Instead of using the existing network infrastructure for global application connectivity and utilizing the XC DNS Load Balancer for global application load balancing, we simplify network-layer management by securely deploying XC to connect our applications globally and leveraging the XC HTTP Load Balancer to load balance our global applications and for disaster recovery.

Example Topology

Here in our example, we use the XC HTTP Load Balancer to load balance our global application global.f5-demo.com, which is deployed across a hybrid multi-cloud environment. Here are some highlights:

- XC CE is deployed in each ACI site using layer three attached with BGP
- New York location: ACI advertises on-premise origin server 10.131.111.161 to XC CE via BGP
- San Jose location: ACI advertises on-premise origin server 10.131.111.55 to XC CE via BGP
- An origin server 10.131.111.88 is located in AWS
- An origin server 10.131.111.88 is located in Azure

*Note: Click here to review how to deploy XC CE in ACI using layer three attached with BGP.

XC HTTP Load Balancer

On the XC console, go to "Multi-Cloud App Connect" -> "Manage" -> "Load Balancers" -> "HTTP Load Balancers" and "Add HTTP Load Balancer". In our example, we created an HTTPS load balancer named global with domain name global.f5-demo.com. Instead of bringing our own certificate, we took advantage of the automatic TLS certificate generation and renewal supported by XC.

Go to the "Origins" section to specify the origin servers for the global application. In our example, we included all origin servers across the public clouds and ACI sites for our global application global.f5-demo.com.

Next, go to "Other Settings" -> "VIP Advertisement". Here, select either "Internet" or "Internet (Specified VIP)" to advertise the HTTP Load Balancer to the Internet. In our example, we selected "Internet" to advertise global.f5-demo.com globally because we decided not to manage nor to acquire a public IP.

In our first use case, we defined a set of DNS load balancing rules on the XC DNS Load Balancer to direct the application traffic based on our requirements:

- If the request to global.f5-demo.com comes from the United States or United Kingdom, application traffic should be directed to an origin server that is located on-premise in the New York ACI site or in AWS.
- If the request to global.f5-demo.com comes from the United States or United Kingdom and the origin servers in the New York ACI site and AWS become unavailable, application traffic should be re-directed to an origin server that is located on-premise in the San Jose ACI site or in Azure.
- If the request to global.f5-demo.com comes from somewhere outside of the United States or United Kingdom, application traffic should be directed to an origin server that is located on-premise in the San Jose ACI site or in Azure.

We can accomplish the same with the XC HTTP Load Balancer by configuring Origin Server Subset Rules. XC HTTP Load Balancer Origin Server Subset Rules allow users to create match conditions on incoming source traffic to the XC HTTP Load Balancer and direct the matched traffic to the desired origin server(s). The match condition can be based on country, ASN, regional edge (RE), IP address, or client label selector. As a prerequisite, we create and assign a label (key-value pair) to an origin server so that we can specify where to direct the matched traffic in reference to the label in the Origin Server Subset Rules.

Go to "Shared Configuration" -> "Manage" -> "Labels" -> "Known Keys" and "Add Known Key" to create labels. In our example, we created a key named jy-key with two labels: us-uk and other.

Now, go to "Origin Pools" under "Multi-Cloud App Connect" and apply the labels to the origin servers. In our example, origin servers in the New York ACI site and AWS are labeled us-uk, while origin servers in the San Jose ACI site and Azure are labeled other.

Then, go to "Other Settings" to enable subset load balancing. In our example, jy-key is our origin server subsets class, and we configured the default subset, the origin pool labeled other, as our fallback policy choice, based on our requirement that if the origin servers in the New York ACI site and AWS become unavailable, traffic should be directed to an origin server in the San Jose ACI site or Azure.

Next, on the HTTP Load Balancer, configure the Origin Server Subset Rules by enabling "Show Advanced Fields" in the "Origins" section. In our example, we created the following Origin Server Subset Rules based on our requirements:

us-uk-rule: If the request to global.f5-demo.com comes from the United States or United Kingdom, direct the application traffic to an origin server labeled us-uk that is either in the New York ACI site or AWS.
other-rule: If the request to global.f5-demo.com does not come from the United States or United Kingdom, direct the application traffic to an origin server labeled other that is either in the San Jose ACI site or Azure.

Validation

As a reminder, we use the XC automatic TLS certificate generation and renewal feature for our HTTPS load balancer in our example. First, let's confirm the certificate status: we can see the certificate is valid with an auto-renew date.

Now, let's run some tests and see what happens. First, let's try to access global.f5-demo.com from the United Kingdom: we can see the traffic is directed to an origin server located in the New York ACI site or AWS, as expected.

Next, let's see what happens if the origin servers from both of these sites become unavailable: the traffic is re-directed to an origin server located in the San Jose ACI site or Azure, as expected.

Last, let's try to access global.f5-demo.com from somewhere outside of the United States or United Kingdom: the traffic is directed to an origin server located in the San Jose ACI site or Azure, as expected.

To check the requests on the XC console, go to "Multi-Cloud App Connect" -> "Performance" -> "Requests" from the selected HTTP Load Balancer. Below is a screenshot from our example: a request to global.f5-demo.com that came from Australia was directed to the origin server 10.131.111.55 located in the San Jose ACI site, based on the configured Origin Server Subset Rule other-rule. Here is another example where a request that came from the United States was sent to the origin server 10.131.111.88 located in AWS, based on the configured Origin Server Subset Rule us-uk-rule.
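To make the subset logic concrete, here is a small sketch of label-based origin selection with the configured fallback. It mirrors the rules above but is an illustration only, not how XC evaluates them internally; the health flags are stand-ins for the origin health checks.

ORIGINS = [
    {"ip": "10.131.111.161", "jy-key": "us-uk", "healthy": True},   # NY ACI
    {"ip": "10.131.111.88",  "jy-key": "us-uk", "healthy": True},   # AWS
    {"ip": "10.131.111.55",  "jy-key": "other", "healthy": True},   # SJ ACI
    {"ip": "10.131.111.88",  "jy-key": "other", "healthy": True},   # Azure
]

def pick_subset(client_country):
    wanted = "us-uk" if client_country in {"US", "UK"} else "other"
    subset = [o for o in ORIGINS if o["jy-key"] == wanted and o["healthy"]]
    # Fallback policy: the default subset labeled "other", as configured above.
    return subset or [o for o in ORIGINS if o["jy-key"] == "other" and o["healthy"]]

print([o["ip"] for o in pick_subset("UK")])  # NY ACI / AWS origins
print([o["ip"] for o in pick_subset("AU")])  # SJ ACI / Azure origins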
Summary

F5 XC simplifies cloud-based DNS management with global server load balancing (GSLB) and disaster recovery (DR). By deploying F5 XC in Cisco ACI, we can securely deploy and load balance our global applications across ACI sites (and public clouds) efficiently, while maintaining high application performance, availability, and robustness at all times.

Related Resources

- *On-Demand Webinar* Deploying F5 Distributed Cloud Services in Cisco ACI
- Deploying F5 Distributed Cloud (XC) Services in Cisco ACI - Layer Three Attached Deployment
- Deploying F5 Distributed Cloud (XC) Services in Cisco ACI - Layer Two Attached Deployment

Deploying F5 Distributed Cloud (XC) Services in Cisco ACI - Layer Two Attached Deployment

Introduction

F5 Distributed Cloud (XC) Services are SaaS-based security, networking, and application management services that can be deployed across multi-cloud, on-premises, and edge locations. This article will show you how you can deploy an F5 Distributed Cloud Customer Edge (CE) site in Cisco Application Centric Infrastructure (ACI) so that you can securely connect your application and distribute the application workloads in a hybrid multi-cloud environment.

F5 XC Layer Two Attached CE in Cisco ACI

Besides the Layer Three Attached deployment option, which we discussed in another article, an F5 Distributed Cloud Customer Edge (CE) site can also be deployed Layer Two Attached in a Cisco ACI environment using an ACI endpoint of an endpoint group (EPG). As a reminder, Layer Two Attached is one of the deployment models to get traffic to/from an F5 Distributed Cloud CE site, where the CE can be a single node or a three-node cluster. F5 Distributed Cloud supports Virtual Router Redundancy Protocol (VRRP) for virtual IP (VIP) advertisement. When VRRP is enabled for VIP advertisement, there is a VRRP Master for each of the VIPs, and the VRRP Masters for the VIPs can be distributed across the CE nodes within the cluster. In this article, we will look at how we can deploy a Layer Two Attached CE site in Cisco ACI.

F5 XC VRRP Support for VIPs Advertisement

F5 XC Secure Mesh Sites are specifically engineered for non-cloud CE deployments, and they support additional configurations that are not available using Fleet or regular site management functionality, such as VRRP for VIP advertisement. We recommend Secure Mesh Sites for non-cloud CE deployments and, specifically in the Layer Two Attached CE deployment model, we recommend deploying the CE site as a Secure Mesh Site to take advantage of the VRRP support for VIP advertisement. With VRRP enabled for VIP advertisement, one of the CE nodes within the cluster will become the VRRP Master for a VIP and start sending gratuitous ARPs (GARPs), while the rest of the CE nodes become VRRP Backups. Please note that in the CE software, a VRRP virtual MAC is not used for the VIP. Instead, the CE node that is the VRRP Master for the VIP uses its physical MAC address in ARP responses for the VIP. When a failover happens, a VRRP Backup CE will become the new VRRP Master for the VIP and start sending GARPs to update the ARP tables of the devices in the broadcast domain. As of today, there isn't a way to configure the VRRP priority, and the VRRP Master assignment is random. Thus, if there are multiple VIPs, it is possible for a CE node within the cluster to be the VRRP Master for one or more VIPs, or none.
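For illustration, the sketch below crafts the kind of gratuitous ARP a new VRRP Master sends for a VIP after failover, using the third-party scapy library. Note the sender MAC is the node's physical MAC, not a VRRP virtual MAC, as described above. The MAC address and interface name here are hypothetical; the VIP is taken from the example that follows. Run as root, and only in a lab.

from scapy.all import ARP, Ether, sendp

VIP = "172.18.188.201"
NODE_MAC = "00:11:22:33:44:55"   # the new Master's physical MAC (hypothetical)

garp = Ether(src=NODE_MAC, dst="ff:ff:ff:ff:ff:ff") / ARP(
    op=2,                                  # ARP reply ("is-at")
    hwsrc=NODE_MAC, psrc=VIP,              # sender: the VIP now lives at this MAC
    hwdst="ff:ff:ff:ff:ff:ff", pdst=VIP,   # target IP == sender IP -> gratuitous
)
sendp(garp, iface="eth0")                  # interface name is an assumption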
F5 XC Layer Two Attached CE in ACI Example

In this section, we will use an example to show you how to successfully deploy a Layer Two Attached CE site in a Cisco ACI fabric so that you can securely connect your application and distribute the application workloads in a hybrid multi-cloud environment.

Topology

In our example, the CE is a three-node cluster (Master-0, Master-1 and Master-2) which connects to the ACI fabric using an endpoint of an EPG named external-epg:

Example reference - ACI EPG external-epg endpoints table:

HTTP load balancer site2-secure-mesh-cluster-app has a custom VIP of 172.18.188.201/32 for epg-xc.f5-demo.com, with workloads 10.131.111.66 and 10.131.111.77 in the cloud (Azure), and it advertises the VIP to the CE site:

F5 XC Configuration of VRRP for VIPs Advertisement

To enable VRRP for VIP advertisement, go to "Multi-Cloud Network Connect" -> "Manage" -> "Site Management" -> "Secure Mesh Sites" -> "Manage Configuration" from the selected Secure Mesh Site. Next, go to "Network Configuration" and select "Custom Network Configuration" to get to "Advanced Configuration", and make sure "Enable VRRP for VIP(s)" is selected for VIP Advertisement Mode.

Validation

We can now securely connect to our application. Note from above that, after F5 XC is deployed in Cisco ACI, we also use F5 XC DNS as our primary nameserver.

To check the requests on the F5 XC console, go to "Multi-Cloud App Connect" -> "Overview: Applications" to bring up our HTTP load balancer, then go to "Performance Monitoring" -> "Requests". *Note: Make sure you are in the right namespace.

As a reminder, VRRP for VIP advertisement is enabled in our example. From the request shown above, we can see that CE node Master-2 is currently the VRRP Master for VIP 172.18.188.201, and if we go to the APIC, we can see the VIP is learned in the ACI endpoint table for EPG external-epg too.

Example reference - a sniffer capture of a GARP from CE node Master-2 for VIP 172.18.188.201:

Summary

An F5 Distributed Cloud Customer Edge (CE) site can be deployed with the Layer Two Attached deployment model in a Cisco ACI environment using an ACI endpoint of an endpoint group (EPG). The Layer Two Attached deployment model can be more desirable and easier for CE deployment when compared to Layer Three Attached, because it does not require layer three/routing (one less layer to take care of), and it also brings the applications closer to the edge. With an F5 Distributed Cloud Customer Edge (CE) site deployment, you can securely connect your on-premises environment to the cloud quickly and efficiently.

Next

Check out this video for some examples of Layer Two Attached CE use cases in Cisco ACI:

Related Resources

- *On-Demand Webinar* Deploying F5 Distributed Cloud Services in Cisco ACI
- F5 Distributed Cloud (XC) Global Applications Load Balancing in Cisco ACI
- Deploying F5 Distributed Cloud (XC) Services in Cisco ACI - Layer Three Attached Deployment
- Customer Edge Site - Deployment & Routing Options
- Cisco ACI Endpoint Learning White Paper

Using F5 Distributed Cloud DNS Load Balancer health checks and DNS observability
Introduction

This article is a continuation of my previous article, which covers how to configure the F5 Distributed Cloud (XC) DNS Load Balancer to provide geo-proximity and disaster recovery, in addition to other failover scenarios. This article builds on the previous configuration to add health checks and shows how the Distributed Cloud DNS service is performing.

DNS Load Balancer

Configuring DNS LB Health Checks

F5 XC can perform health checks on all IP members in a DNS Load Balancer pool. To configure health checks for a pool, go to DNS Management > DNS Load Balancer Management > DNS Load Balancer Health Checks, then click "Add DNS Load Balancer Health Check". Name the rule, for example, "europe-healthcheck", and choose an appropriate health check type. The following health check types are supported for DNS LB:

- HTTP
- HTTPS
- TCP
- TCP (Hex payload)
- UDP
- ICMP

Each health check type, except ICMP, supports sending a custom string payload and looking for a matching response. For example, with the HTTPS health check, F5 XC will first confirm whether it received a valid SSL certificate from the member. After the certificate check passes, it sends the configured "Send String" (an HTTP request). By default, the string is "HEAD / HTTP/1.0\r\n\r\n", although more complex strings are supported. The "Receive String", in regex (re2) format, validates the application-layer response. The default receive string for HTTP(S) requests is "HTTP/1." A custom TCP or UDP port can also be configured to support services running on non-standard ports. Configuring the port as "0" uses the default port for the chosen protocol.

To apply the health check to a DNS LB load balancing rule, navigate to DNS Load Balancer Management > DNS Load Balancer Pools. Locate the pool to apply the health check to, and use the Manage Configuration action. Within the pool configuration, click Edit Configuration, scroll down to DNS Load Balancer Health Check, enable it, and then choose the health check created above. Save and exit the pool.

Status information about the health of the DNS LB pools and pool members can be found on the DNS Load Balancers Overview page. In the following example, one of the members in the "eu-pool" is unhealthy. Details about each specific pool member can be found by clicking on the pool.
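For intuition, here is a minimal sketch that mimics the HTTPS health-check semantics described above: validate the member's certificate, send the default send string, and match the default receive string as a regex. Python's re module stands in for XC's re2 engine here, and member.example.com is a hypothetical pool member.

import re, socket, ssl

def https_health_check(host, port=443,
                       send=b"HEAD / HTTP/1.0\r\n\r\n",
                       receive=r"HTTP/1."):
    ctx = ssl.create_default_context()   # fails on an invalid certificate
    try:
        with socket.create_connection((host, port), timeout=5) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                tls.sendall(send)                       # the "Send String"
                reply = tls.recv(1024).decode("latin-1", errors="replace")
        return re.search(receive, reply) is not None    # the "Receive String"
    except (OSError, ssl.SSLError):
        return False

print(https_health_check("member.example.com"))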
Distributed Cloud DNS Observability

The F5 XC DNS Performance Overview dashboards provide usage details for up to a 24-hour interval. Navigate to DNS Management > Overview > Performance for a high-level view showing how many requests a domain has received. To see where DNS requests are coming from, the most requested services, and specific response details, click on each DNS zone. The DNS performance dashboards provide the following views for each DNS zone:

- Traffic Distribution
- Top Requests
- Total Queries
- Query Type
- Response Type (by RCODE)
- DNS Query Rate (by Query Type)

The DNS dashboards also show the type and frequency of each DNS request. Query logging is available and located in the Requests tab. This view provides up to a 24-hour interval of each DNS query. The dashboard can be filtered to show requests from a particular geo-location, the resource record type, which record or records are being requested, the client IP, and the return code. The following image illustrates a filtered list. Records in the table below can be downloaded in a CSV-formatted file. Details about an individual request can be viewed by clicking on the ">" symbol, and the detailed record can be shown in either JSON or YAML format.

Logging & Analytics

The Global Log Receiver provides the logging of DNS requests, in addition to other security services in Distributed Cloud, including WAF and Bot Defense. This article explains how to configure the Global Log Receiver and send logs using HTTPS to an ELK stack (Elasticsearch, Logstash, and Kibana). It also shows how to configure Logstash and Kibana to process GLR-formatted logs from Distributed Cloud.

To configure the Global Log Receiver for DNS logging, go to Shared Configuration > Global Log Receiver, and create a new entry. In the Log Type field, choose DNS Request Logs. In the following example, I've configured the Global Log Receiver to send via HTTPS, but many other logging platforms are also supported. See this product documentation page for up-to-date information on all the features available with the Global Log Receiver.

With DNS Request Logs configured, we can now see every DNS request to our Distributed Cloud tenant, processed by Logstash, in the Kibana dashboard. The following output in Kibana shows DNS requests for all zones configured in the Distributed Cloud tenant.
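Once the JSON-formatted request logs land somewhere you can read them (for example, exported from your log platform as JSON lines), slicing them is straightforward. This is only a sketch: the field names used below (query_name, response_code) are hypothetical, so check an actual record in your tenant and adjust.

import json
from collections import Counter

# Count response codes for one zone from a JSON-lines export of GLR DNS logs.
rcodes = Counter()
with open("dns_requests.jsonl") as fh:
    for line in fh:
        rec = json.loads(line)
        if rec.get("query_name", "").endswith("example.com"):
            rcodes[rec.get("response_code", "UNKNOWN")] += 1

print(rcodes.most_common())   # e.g., [('NOERROR', 1234), ('NXDOMAIN', 7)]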
Additional Resources

- Previous article in series: Using Distributed Cloud DNS Load Balancer with Geo-Proximity and failover scenarios
- Technical article: How I did it - "Remote Logging with the F5 XC Global Log Receiver and Elastic"
- Product Documentation: DNS LB Product Documentation, DNS Zone Management, Global Log Receiver
- More information about Distributed Cloud DNS Load Balancer and DNS service: https://www.f5.com/cloud/products/dns-load-balancer and https://www.f5.com/cloud/products/dns

Deploying F5 Distributed Cloud (XC) Services in Cisco ACI - Layer Three Attached Deployment

Introduction

F5 Distributed Cloud (XC) Services are SaaS-based security, networking, and application management services that can be deployed across multi-cloud, on-premises, and edge locations. This article will show you how you can deploy an F5 Distributed Cloud Customer Edge (CE) site in Cisco Application Centric Infrastructure (ACI) so that you can securely connect your application in a hybrid multi-cloud environment.

XC Layer Three Attached CE in Cisco ACI

An F5 Distributed Cloud Customer Edge (CE) site can be deployed Layer Three Attached in a Cisco ACI environment using a Cisco ACI L3Out. As a reminder, Layer Three Attached is one of the deployment models to get traffic to/from an F5 Distributed Cloud CE site, where the CE can be a single node or a three-node cluster. Static routing and BGP are both supported in the Layer Three Attached deployment model. When a Layer Three Attached CE site is deployed in a Cisco ACI environment using a Cisco ACI L3Out, routes can be exchanged between them via static routing or BGP. In this article, we will focus on BGP peering between the Layer Three Attached CE site and the Cisco ACI fabric.

XC BGP Configuration

BGP configuration on XC is simple, and it only takes a couple of steps to complete:

1) Go to "Multi-Cloud Network Connect" -> "Networking" -> "BGPs". *Note: The XC homepage is role based, and to be able to configure BGP, the "Advanced User" role is required.

2) "Add BGP" to fill out the site-specific info, such as which CE site runs BGP and its BGP AS number, and "Add Peers" to include its BGP peers' info. *Note: XC supports direct connection for BGP peering IP reachability only.

XC Layer Three Attached CE in ACI Example

In this section, we will use an example to show you how to successfully bring up BGP peering between an F5 XC Layer Three Attached CE site and a Cisco ACI fabric so that you can securely connect your application in a hybrid multi-cloud environment.

Topology

In our example, the CE is a three-node cluster (Master-0, Master-1 and Master-2) that has a VIP 10.10.122.122/32 with workloads, 10.131.111.66 and 10.131.111.77, in the cloud (AWS):

- The CE connects to the ACI fabric via a virtual port channel (vPC) that spans two ACI border leaf switches.
- The CE and the ACI fabric are eBGP peers via an ACI L3Out SVI for route exchange.
- The CE is eBGP-peered to both ACI border leaf switches, so that in case one of them goes down (expectedly or unexpectedly), the CE can continue to exchange routes with the ACI border leaf switch that remains up, and VIP reachability will not be affected.

XC BGP Configuration

First, let us look at the XC BGP configuration ("Multi-Cloud Network Connect" -> "Networking" -> "BGPs"): we "Add BGP" for "jy-site2-cluster" with the site-specific BGP info, along with a total of six eBGP peers (each CE node has two eBGP peers, one to each ACI border leaf switch). We "Add Item" to specify each of the six eBGP peers' info.

Example reference - ACI BGP configuration:

XC BGP Peering Status

There are a couple of ways to check the BGP peering status on the F5 Distributed Cloud Console:

Option 1

Go to "Multi-Cloud Network Connect" -> "Networking" -> "BGPs" -> "Show Status" from the selected CE site to bring up the "Status Objects" page. The "Status Objects" page provides a summary of the BGP status from each of the CE nodes. In our example, all three CE nodes from "jy-site2-cluster" are cleared with "0 Failed Conditions" (Green). We can simply click on a CE node UID to further look into the BGP status of the selected CE node with all of its BGP peers.
Here, we clicked on the UID of CE node Master-2 (172.18.128.14) and we can see it has two eBGP peers, 172.18.128.11 (ACI border leaf switch 1) and 172.18.128.12 (ACI border leaf switch 2), and both of them are Up. Here is the BGP status from the other two CE nodes, Master-0 (172.18.128.6) and Master-1 (172.18.128.10).

For reference, here is an example of a CE node with "Failed Conditions" (Red) due to one of its BGP peers being down:

Option 2

Go to "Multi-Cloud Network Connect" -> "Overview" -> "Sites" -> "Tools" -> "Show BGP peers" to bring up the BGP peer status info from all CE nodes of the selected site. Here, we can see the same BGP status of CE node Master-2 (172.18.128.14), which has two eBGP peers, 172.18.128.11 (ACI border leaf switch 1) and 172.18.128.12 (ACI border leaf switch 2), and both of them are Up. Here is the output for the other two CE nodes, Master-0 (172.18.128.6) and Master-1 (172.18.128.10).

Example reference - ACI BGP peering status:

XC BGP Routes Status

To check the BGP routes, both received and advertised, go to "Multi-Cloud Network Connect" -> "Overview" -> "Sites" -> "Tools" -> "Show BGP routes" from the selected CE site. In our example, we see all three CE nodes (Master-0, Master-1 and Master-2) advertised (exported) 10.10.122.122/32 to both of their BGP peers, 172.18.128.11 (ACI border leaf switch 1) and 172.18.128.12 (ACI border leaf switch 2), while they received (imported) 172.18.188.0/24 from them.

Now, if we check the ACI fabric, we should see both 172.18.128.11 (ACI border leaf switch 1) and 172.18.128.12 (ACI border leaf switch 2) advertised 172.18.188.0/24 to all three CE nodes, while they received 10.10.122.122/32 from all three of them (note "|" for multipath in the output).

XC Routes Status

To view the routing table of a CE node (or all CE nodes at once), we can simply select "Show routes". Based on the BGP routing table in our example (shown earlier), we should see that each CE node has two Equal Cost Multi-Path (ECMP) routes installed in the routing table for 172.18.188.0/24, one with 172.18.128.11 (ACI border leaf switch 1) and one with 172.18.128.12 (ACI border leaf switch 2) as the next hop, and we do (note "ECMP" for multipath in the output).

Now, if we check the ACI fabric, each of the ACI border leaf switches should have three ECMP routes installed in the routing table for 10.10.122.122, one to each CE node (172.18.128.6, 172.18.128.10 and 172.18.128.14) as the next hop, and we do.
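As a side note on how those ECMP entries are used: forwarding hardware typically hashes each flow's 5-tuple and uses the result to pick one of the equal-cost next hops, so a given flow is pinned to one CE node while different flows spread across all three. The sketch below illustrates the general technique only; it is not ACI's actual hash algorithm.

import hashlib

NEXT_HOPS = ["172.18.128.6", "172.18.128.10", "172.18.128.14"]  # the CE nodes

def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port):
    # Hash the flow 5-tuple and map it onto one of the equal-cost next hops.
    five_tuple = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(five_tuple).digest()
    return NEXT_HOPS[int.from_bytes(digest[:4], "big") % len(NEXT_HOPS)]

# Two different client flows to the VIP may land on different CE nodes:
print(ecmp_next_hop("10.0.0.5", "10.10.122.122", "tcp", 51000, 443))
print(ecmp_next_hop("10.0.0.6", "10.10.122.122", "tcp", 51001, 443))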
Next Check out this video for some examples of Layer Three Attached CE use cases in Cisco ACI: Related Resources *On-Demand Webinar*Deploying F5 Distributed Cloud Services in Cisco ACI F5 Distributed Cloud (XC) Global Applications Load Balancing in Cisco ACI Deploying F5 Distributed Cloud (XC) Services in Cisco ACI - Layer Two Attached Deployment Customer Edge Site - Deployment & Routing Options Cisco ACI L3Out White Paper1.3KViews4likes0Comments