Barracuda CloudGen Firewall

This Firmware Version Is End-Of-Support

Documentation for this product is no longer updated. Please see End-of-Support for CloudGen Firewall Firmware for further information on our EoS policy.

AWS Implementation Guide - High Availability Firewall Cluster with Route Shifting


To build highly available services in AWS, each layer of your architecture should be redundant across multiple Availability Zones. Each AWS region is made up of at least two isolated Availability Zones. If one Availability Zone goes down, your application continues to run in the other Availability Zone with no interruption or only a short failover time. For the Barracuda NextGen Firewall, this means deploying two firewall instances into two public subnets, each in a different Availability Zone. The firewalls form an active-passive cluster. Both firewalls share a virtual server containing services such as the Forwarding Firewall or VPN service. Should the primary firewall become unavailable, the virtual server is immediately started on the secondary firewall. Using IAM access keys, the now-active secondary firewall connects to the underlying cloud platform and rewrites the AWS route tables to use itself as the gateway device for the backend instances. Once the route tables are rewritten, normal operations resume, even if one of the two Availability Zones is experiencing an outage.
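The route rewrite requires the firewall's IAM access keys to be authorized for the relevant EC2 API calls. The following Python (boto3) snippet is a minimal sketch of such a policy; the policy name and the exact set of actions shown are assumptions for illustration only, not taken from this guide. Refer to How to Configure Cloud Integration for AWS for the authoritative permission list.

    import json
    import boto3

    iam = boto3.client("iam")

    # Hypothetical minimal policy for the firewall's IAM access keys. The action
    # list below is an assumption; check the Cloud Integration documentation for
    # the permissions the firewall actually requires.
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["ec2:DescribeRouteTables", "ec2:ReplaceRoute"],
            "Resource": "*",
        }],
    }

    iam.create_policy(
        PolicyName="cgf-route-shifting",  # placeholder name
        PolicyDocument=json.dumps(policy_document),
    )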

multi_AZ_routeshifting_ha0.png

Limitations

  • The number of firewall instances is static. This means you must size your firewall instances for the highest expected load to prevent the firewalls from becoming a bottleneck for your application.
  • Failing over the virtual server, although fast, is not transparent to the user. Existing stateful connections will time out.
  • It is not possible to use a single public IP address as the endpoint for the firewall cluster. You have the option to use two public IP addresses, one for each firewall, or a single FQDN with either an Elastic Load Balancer or Route 53 for incoming traffic.

Example CloudFormation template

To deploy the AWS infrastructure of this architecture quickly, use the CloudFormation template below. This template only deploys the AWS infrastructure. The NextGen Firewall must be configured manually. This template deploys the following AWS resources:

  • Two NextGen Firewall F-Series m3.medium (PAYG) instances.
  • Two t2.micro Linux clients in the private subnets.
  • One Elastic Load Balancer.

Download the Example CloudFormation Template and the Network Diagram containing the IP addresses.

For step-by-step instructions on how to deploy a CloudFormation template, see How to Deploy an F-Series Firewall in AWS via CloudFormation Template.
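If you prefer to create the stack programmatically instead of through the AWS console, the following Python (boto3) sketch shows one way to launch the downloaded template. The stack name, local file name, and region are placeholders.

    import boto3

    cfn = boto3.client("cloudformation", region_name="eu-west-1")  # example region

    # Read the downloaded example template from a local file (placeholder name).
    with open("cgf-ha-routeshifting-template.json") as f:
        template_body = f.read()

    cfn.create_stack(
        StackName="cgf-ha-routeshifting",
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_IAM"],  # needed if the template creates IAM resources
    )

    # Wait until the stack is complete before configuring the firewalls.
    cfn.get_waiter("stack_create_complete").wait(StackName="cgf-ha-routeshifting")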

Deploying a high availability firewall cluster

Complete the following configuration steps to deploy an active-passive Barracuda NextGen Firewall F high-availability cluster into a VPC. For more detailed descriptions, follow the links for step-by-step instructions.

Create a VPC and deploy the firewall instances

Create a VPC in an AWS region of your choice. Create two public subnets in two different Availability Zones for the firewalls and private subnets for the instances using the firewall as the default gateway.

  • Public subnets – These are the subnets for the firewall instances and all other instances with public IP addresses.
  • Private subnets – These are the subnets for all instances without external connectivity. All traffic is routed over the firewall. To create a robust architecture, create the private subnets in different Availability Zones.
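If you script the VPC setup instead of using the AWS console, a boto3 sketch along these lines creates the VPC and the subnets. The CIDR ranges, region, and Availability Zone names are examples only; adjust them to your environment.

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")  # example region

    # Create the VPC (example CIDR range).
    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

    # One public and one private subnet per Availability Zone (example values).
    subnets = {
        "public-a":  ("10.0.1.0/24",  "eu-west-1a"),
        "public-b":  ("10.0.2.0/24",  "eu-west-1b"),
        "private-a": ("10.0.10.0/24", "eu-west-1a"),
        "private-b": ("10.0.11.0/24", "eu-west-1b"),
    }

    subnet_ids = {}
    for name, (cidr, az) in subnets.items():
        resp = ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=az)
        subnet_ids[name] = resp["Subnet"]["SubnetId"]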

route_shifting_ha_1.png

Add an AWS Internet gateway and modify the AWS routing table for the public subnets to use the Internet gateway as the target for the default route. Verify that the route table is associated with the public subnets. Instances deployed to the public subnets can now connect to the Internet via the AWS Internet gateway.
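Scripted, the Internet gateway and the default route for the public subnets could look like the following boto3 sketch; the VPC and subnet IDs are placeholders (or the values created in the previous snippet).

    import boto3

    ec2 = boto3.client("ec2")

    vpc_id = "vpc-0123456789abcdef0"                                            # placeholder
    public_subnets = ["subnet-0aaaaaaaaaaaaaaa1", "subnet-0bbbbbbbbbbbbbbb2"]   # placeholders

    # Create and attach the Internet gateway.
    igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

    # Public route table with a default route via the Internet gateway.
    public_rt = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
    ec2.create_route(RouteTableId=public_rt,
                     DestinationCidrBlock="0.0.0.0/0",
                     GatewayId=igw_id)

    # Associate the route table with both public subnets.
    for subnet_id in public_subnets:
        ec2.associate_route_table(RouteTableId=public_rt, SubnetId=subnet_id)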

route_shifting_ha_2.png

Launch a firewall instance into each public subnet using either the BYOL or PAYG images from the AWS Marketplace. Before choosing the instance type, make sure you understand the throughput requirements so you can size the instances accordingly. Also take into account that only one firewall instance is actively forwarding traffic; the secondary firewall is in standby. Use the same instance type for both firewalls. Depending on how you want to access your firewall cluster, you can either use automatically assigned public IP addresses or associate an Elastic IP address with each instance. Using Elastic IP addresses allows you to keep the same public IP addresses, even when you redeploy your firewall instances. If you are planning on using Route 53 to access the services on or behind the firewall, using the automatically assigned DNS name may also be an option. As long as the internal IP address for the firewall is reserved, the DNS name is persistent across reboots.
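If you choose Elastic IP addresses, a short boto3 sketch like the following allocates and associates one address per firewall instance. The instance IDs are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    # Placeholder instance IDs for the two firewall instances.
    for instance_id in ("i-0aaaaaaaaaaaaaaa1", "i-0bbbbbbbbbbbbbbb2"):
        allocation = ec2.allocate_address(Domain="vpc")
        # The Elastic IP stays allocated to your account, so the public address
        # survives a redeployment of the instance.
        ec2.associate_address(InstanceId=instance_id,
                              AllocationId=allocation["AllocationId"])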

If you selected the BYOL image type, activate the license. For both PAYG and BYOL, install the available hotfixes as displayed in the DASHBOARD > Firmware Update element when logged in via NextGen Admin.

route_shifting_ha_4.png

Route tables for private and public subnets

Create two AWS route tables for the VPC:

  • Public Route Table – The public route table is associated with both public subnets. Since the instances in this subnet need to be able to connect to the Internet, create a default route with the VPC's Internet gateway as the target. Instances in the public subnets can now be accessed by their public IP addresses.
  • Private Route Table – For the private subnets, a separate route table forwards all traffic to the active firewall. Use the instance ID of the primary firewall as the target for the default route.
route_shifting_ha_3.png

For step-by-step instructions, see Steps 9 and 10 in How to Configure a High Availability Cluster in AWS using the Web Portal.
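As a scripted alternative to the console steps, the following boto3 sketch creates the private route table and points its default route at the primary firewall instance. The VPC, subnet, and instance IDs are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    vpc_id = "vpc-0123456789abcdef0"                                             # placeholder
    primary_firewall = "i-0aaaaaaaaaaaaaaa1"                                     # placeholder instance ID
    private_subnets = ["subnet-0ccccccccccccccc1", "subnet-0ddddddddddddddd2"]   # placeholders

    private_rt = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]

    # Default route for the private subnets targets the primary firewall instance.
    ec2.create_route(RouteTableId=private_rt,
                     DestinationCidrBlock="0.0.0.0/0",
                     InstanceId=primary_firewall)

    # Associate the route table with both private subnets.
    for subnet_id in private_subnets:
        ec2.associate_route_table(RouteTableId=private_rt, SubnetId=subnet_id)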

Disable source/destination check

By default, the ENI attached to the firewall only accepts traffic with a destination IP address that matches one of the IP addresses assigned to the network interface. To be able to forward traffic, you must disable the source/destination check. This must be completed for each firewall instance.
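The check can also be disabled via the EC2 API, for example with the following boto3 sketch (instance IDs are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # Disable the source/destination check on both firewall instances
    # (placeholder instance IDs).
    for instance_id in ("i-0aaaaaaaaaaaaaaa1", "i-0bbbbbbbbbbbbbbb2"):
        ec2.modify_instance_attribute(InstanceId=instance_id,
                                      SourceDestCheck={"Value": False})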

For step-by-step instructions, see Step 8 in How to Configure a High Availability Cluster in AWS using the Web Portal.

Join the two firewalls into a high availability cluster

Before creating the high availability cluster, the firewalls must be reconfigured to use static interfaces and IP addresses instead of the DHCP interface the firewall is deployed with by default.

High availability cluster with PAYG licensing

Export the license file from the secondary firewall and import it on the primary firewall before creating the HA cluster.

  1. Log into the secondary firewall.
  2. Go to CONFIGURATION > Configuration Tree > Box > Licenses.
  3. Click Lock.
  4. Select the license file, click the export icon, and select Export to File.
  5. Click Unlock.
  6. Log into the primary firewall.
  7. Go to CONFIGURATION > Configuration Tree > Box > Licenses.
  8. Click Lock.
  9. Click + and select Import from File.
  10. Select the license file exported from the secondary firewall.

route_shifting_ha_5.png

For step-by-step instructions, see Steps 13 and 16 in How to Configure a High Availability Cluster in AWS using the Web Portal and How to Set Up a High Availability Cluster.

Configure Cloud Integration for route shifting

Cloud Integration allows the firewall instance to use API calls to the underlying cloud platform. It is used to populate the cloud information element in the NextGen Admin dashboard and, more importantly, to rewrite AWS route tables. Rewriting the VPC route tables is necessary every time the virtual server fails over. During the failover, the now-active firewall rewrites the target of every affected route to point to the firewall now running the virtual server. This works for all route tables in the VPC. The active firewall continues to poll the route tables to ensure that the active firewall is always used.
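The firewall performs the route rewrite itself; the following boto3 sketch only illustrates the kind of API calls involved and is not the firewall's actual implementation. The VPC ID and the ENI of the now-active firewall are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    vpc_id = "vpc-0123456789abcdef0"       # placeholder
    active_eni = "eni-0aaaaaaaaaaaaaaa1"   # ENI of the now-active firewall (placeholder)

    # Walk every route table in the VPC and repoint routes that currently target
    # a firewall network interface at the now-active firewall.
    tables = ec2.describe_route_tables(
        Filters=[{"Name": "vpc-id", "Values": [vpc_id]}]
    )["RouteTables"]

    for table in tables:
        for route in table.get("Routes", []):
            if (route.get("NetworkInterfaceId")
                    and route.get("NetworkInterfaceId") != active_eni
                    and route.get("DestinationCidrBlock")):
                ec2.replace_route(RouteTableId=table["RouteTableId"],
                                  DestinationCidrBlock=route["DestinationCidrBlock"],
                                  NetworkInterfaceId=active_eni)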

Primary firewall active

route_shifting_ha_failover_01.png

Secondary firewall active

route_shifting_ha_failover_02.png

On the firewall, go to CONTROL > Network > AWS Routes. All the route tables for the VPC are listed. Routes that use one of the firewalls are shown with a green icon. During takeover, the icon temporarily turns red to indicate that a failover is in progress. After the route table rewrite, the network interface ID (eni-123456) matches the now-active firewall.

AWS_route_table_active.png

For step-by-step instructions, see How to Configure Cloud Integration for AWS.

Single endpoint for incoming traffic: Route 53 or Elastic Load Balancer

Using two public IP addresses for the active-passive high availability cluster may not always be possible. To use a single FQDN that always sends traffic over the active firewall, you can use either a classic Elastic Load Balancer or Route 53. Both services are similar in that they use health checks and send traffic to the healthy destination. For TCP-only services, either service can be used. For UDP-based services, such as IPsec, use Route 53.

Classic Elastic Load Balancer

The classic Elastic Load Balancer is a managed layer 4 load balancer. The load balancer can only be addressed by the DNS name associated with it. It is not possible to work with the IP address the hostname resolves to directly because the underlying load balancing instances may change at any time.

The Elastic Load Balancer is responsible for distributing traffic to all healthy instances it is associated with. To make sure that traffic is only sent to the active firewall, define the health check for a service running on the virtual server. For example, use TCP:691 as the health check target if a VPN service is running on the virtual server. The load balancer continuously polls the VPN service and considers the instance healthy if the TCP connection succeeds. Since the virtual server is only running on the active firewall, the health check always fails for the passive firewall. The passive firewall is considered unhealthy, and no traffic is forwarded to this instance by the load balancer.
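For reference, the health check described above could be set on a classic load balancer with a boto3 call like the following; the load balancer name is a placeholder and TCP:691 matches the VPN service example.

    import boto3

    elb = boto3.client("elb")  # classic Elastic Load Balancer API

    elb.configure_health_check(
        LoadBalancerName="cgf-ha-elb",   # placeholder name
        HealthCheck={
            "Target": "TCP:691",         # VPN service on the virtual server
            "Interval": 10,
            "Timeout": 5,
            "UnhealthyThreshold": 2,
            "HealthyThreshold": 2,
        },
    )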

The Elastic Load Balancer rewrites the source IP address of traffic passing through it to the IP address of the load balancer instance. If your application requires the public IP address of the client, use Route 53 instead.

For step-by-step instructions, see How to Configure an AWS Elastic Load Balancer for F-Series Firewalls in AWS.

Route 53

Route 53 is an authoritative DNS service by AWS. Route 53 allows you to monitor endpoints and change the returned record set according to the state of the health check. Create a health check for a service running on the virtual server of your high availability cluster. Create two record sets using a failover routing policy and attach the health check to the primary firewall. No distinct health check is created for the secondary firewall. If everything fails, it is better to attempt to reach at least one firewall in the cluster than to return nothing at all. The secondary firewall is also a better choice as a fail-safe because the default behavior of a high availability cluster favors the secondary firewall. For example, if both the primary and secondary firewall start the virtual server at the same time, the secondary firewall continues to run while the primary firewall shuts the virtual server down.
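A boto3 sketch of the health check and the two failover record sets could look like the following. The hosted zone ID, record name, and firewall public IP addresses are placeholders.

    import boto3

    route53 = boto3.client("route53")

    hosted_zone_id = "Z0123456789EXAMPLE"   # placeholder

    # Health check against a service on the virtual server of the primary firewall.
    health_check_id = route53.create_health_check(
        CallerReference="cgf-primary-hc-1",
        HealthCheckConfig={
            "IPAddress": "198.51.100.10",   # primary firewall public IP (placeholder)
            "Port": 691,
            "Type": "TCP",
            "RequestInterval": 10,
            "FailureThreshold": 3,
        },
    )["HealthCheck"]["Id"]

    # Two failover record sets; only the primary record gets the health check.
    changes = []
    for identifier, ip, failover in (("primary", "198.51.100.10", "PRIMARY"),
                                     ("secondary", "198.51.100.20", "SECONDARY")):
        record = {
            "Name": "vpn.example.com.",     # placeholder FQDN
            "Type": "A",
            "SetIdentifier": identifier,
            "Failover": failover,
            "TTL": 30,
            "ResourceRecords": [{"Value": ip}],
        }
        if failover == "PRIMARY":
            record["HealthCheckId"] = health_check_id
        changes.append({"Action": "UPSERT", "ResourceRecordSet": record})

    route53.change_resource_record_sets(HostedZoneId=hosted_zone_id,
                                        ChangeBatch={"Changes": changes})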

For step-by-step instructions, see How to Configure Route 53 for F-Series Firewalls in AWS.

Access rules

By default, the Forwarding Firewall service blocks all traffic. To allow traffic through the firewall, you must create access rules with an allow action, such as Pass or Dst NAT. When creating the rules, make sure they match the same type of traffic regardless of which firewall the virtual server is running on. For Dst NAT and App Redirect rules, enter the management IP addresses of both the primary and secondary firewalls, or use the All Firewall IPs option.

Intrusion Prevention System (IPS)

For incoming traffic, enable the IPS on the matching access rule to actively monitor traffic for malicious activities and to block suspicious traffic. The IPS engine analyzes network traffic and continuously compares the bitstream with its internal signature database for malicious code patterns. Use the IPS in read-only mode to learn the traffic patterns and to create, edit, and override default and custom IPS signature handling policies. Then, switch the IPS to actively block malicious traffic. IPS policies can be deployed on a per-access-rule basis.

For more information, see Intrusion Prevention System (IPS).

Dst NAT access rule to forward traffic to Barracuda Web Application Firewalls (WAF)

If you have one or more non-autoscaled Barracuda Web Application Firewalls behind the F-Series Firewall high availability cluster, forward traffic to the WAFs with a Dst NAT access rule.

Create a network object for the internal IP addresses of the WAF instances

fwd_to_WAF01.png

For step-by-step instructions on how to create network objects, see Network Objects.

Create a Dst NAT access rule
  • Source – Select Any.
  • Service – Select HTTP+S.
  • Destination – Select All Firewall IPs.
  • Redirection Target List – Select the network object containing the internal IP addresses of the Barracuda Web Application Firewalls.
  • Redirection Policy – Select Cycle.
  • Connection Method –  Select Original Source IP.

fwd_to_WAF02.png

For step-by-step instructions on how to create Dst NAT access rules, see How to Create a Destination NAT Access Rule.
