The Correct(ish) Way to Migrate from Cisco ASA to Palo Alto


In this post, I want to share my recent experience where I had to migrate over from a Cisco ASA firewall to a Palo Alto firewall. This wasn't a small task by any means. The Cisco ASA had hundreds of objects and rules. Imagine trying to move all of these manually—it would take weeks!

My first thought was to use Palo Alto's Expedition tool. This seemed like the obvious choice, but things didn't quite work out as I expected. I'll get into the details of that a bit later in the post.

When the Expedition tool didn't pan out, I decided to take matters into my own hands. I used Python to write some custom scripts to handle the migration. In this blog, I'll break down the steps I took and the approach I used, which might just convince you that this is indeed the 'correct' way to do it :)

The Problem with the Expedition Tool

During the migration, one of the first tools I considered was the Expedition tool from Palo Alto. I'll be honest here—I wasn't very familiar with this tool, which probably contributed to my initial reluctance. My goal was straightforward: migrate the configuration from the ASA to Panorama. However, the process with Expedition wasn't as easy as I hoped.

Expedition involves importing the configurations from Panorama into the tool, and then merging them with the ASA configuration before uploading everything back. I wanted a more direct method, where I could push new configurations to Panorama instead of going through the hassle of importing and merging.

Another issue was the state of the ASA configuration. It was filled with legacy configurations and unused objects. The naming of the objects and object-groups was chaotic and disorganized. Before I even thought about moving to the new Palo Alto firewall, I knew I had to clean up this mess.

So, with these challenges in mind, I decided to step back from using the Expedition tool. This led me to explore a different path, one that involved custom scripting and a bit more control over the entire process.

Semi-Automated Approach

When I decided to DIY the migration, I knew that a fully automated approach might not be feasible, given the complexities involved. So, I settled on a semi-automated approach, leveraging custom Python scripting to handle a significant portion of the task. However, there were still several manual tasks that I needed to do.

It's crucial to remember that a 100% automated migration isn't always possible, especially when dealing with a large number of complex and disorganized configurations like in my case. For my use case, creating the rules manually wasn't a problem; I could create them one by one, but I at least wanted to automate creating all the address-objects and service-objects. The problem with creating the rules automatically is that the ASA was using TCP ports, whereas I wanted to use App-ID in Palo Alto. I also wanted to make sure specific security profiles and log profiles were applied to each rule.

Some Differences between ASA and Palo Alto

When migrating from Cisco ASA to Palo Alto firewalls, it's important to understand the key differences between these two firewalls.

Firstly, the way rules are applied is different. In ASA, you apply access control lists (ACLs) to each interface separately, but in Palo Alto, all rules are created in one place, and you specify the source and destination zones for each rule. This means if you're moving from an ASA with many interface-specific ACLs to Palo Alto, you'll need to reorganize these rules into a unified format, considering the zones.

Another difference is how multiple interfaces are handled when they allow traffic to the same destination. In ASA, you would create separate ACLs for each interface. In Palo Alto, you would want to create one rule covering all relevant source subnets, reducing the need for multiple rules.

These differences show why fully automating the migration can be challenging. The unique aspects of each item require careful planning and sometimes manual adjustments.

💡
In this blog post, I'll only focus on migrating objects, object-groups, ports, and rules from ASA to Palo Alto, explaining how to manage these elements during the migration.

Migrating the Address Objects

The first step in my migration process was to move all the 'used' address objects from the Cisco ASA to the Palo Alto firewall. My strategy was to organize these objects in a structured format and then push them to Palo Alto using their REST API.

Finding the Used Objects - The first task was to identify all the address objects, object-groups, and service-objects that were actually being used in the ACLs. I didn't want to blindly copy everything over to Palo Alto, as that would include unnecessary clutter.

Exporting ACLs to CSV - A straightforward way to start this process was by exporting all the ACLs applied to each interface into a CSV file (you can do this via ASDM). This export gave me a clear view of what was being used. One interesting thing I noticed during this export was how the objects appear in the CSV file.

  • If an ACL contains an address-object, the CSV file converts this object into the actual IP address or subnet.
  • If the ACL includes an address-group, the CSV retains the group's name.
  • Service objects and groups are treated similarly.

With the ACLs exported to a CSV file, which now contains all the objects (converted to IP/Subnet) and object-groups, the next step is to do some scripting.

import pandas as pd
import csv
import ipaddress
import json
import requests
from netaddr import IPAddress

def is_ipv4(string):
    try:
        ipaddress.IPv4Network(string)
        return True
    except ValueError:
        return False

src_file = 'files/asa_interface.csv'
dst_file = 'files/asa_interface_dst.csv'

df = pd.read_csv(src_file)
df = df[df.Hits != 0]
df = df[['Source', 'Destination', 'Service', 'Description', 'Action']]
df.to_csv(dst_file, index=False)

all_addresses = []

with open(dst_file) as f:
    reader = csv.DictReader(f)
    for row in reader:
        src_ip = row['Source'].split(',')
        for i in src_ip:
            all_addresses.append(i)
        dst_ip = row['Destination'].split(',')
        for n in dst_ip:
            all_addresses.append(n)

list_ip_network = [a for a in all_addresses if is_ipv4(a)]
list_object = [a for a in all_addresses if (not is_ipv4(a)) and ('any' not in a)]


with open('files/asa_object_group.json', 'r') as f_group:
    data_group = json.load(f_group)

with open('files/asa_object.json', 'r') as f_object:
    data_object = json.load(f_object)

all_objects = []

for b in list_object:
    for group in data_group:
        if b == group['name'] and group['host'] != '':
            list_ip_network.append(group['host'])
        elif b == group['name'] and group['net_object'] != '':
            all_objects.append(group['net_object'])
        elif group['mask'] != '':
            mask = group['mask']
            cidr = IPAddress(mask).netmask_bits()
            if b == group['name'] and group['network'] != '':
                list_ip_network.append(group['network'] + '/' + str(cidr))

for c in all_objects:
    for object in data_object:
        if c == object['name']:
            if object['host'] != '':
                list_ip_network.append(object['host'])
            else:
                mask = object['mask']
                cidr = IPAddress(mask).netmask_bits()
                list_ip_network.append(object['network'] + '/' + str(cidr))

Detailed Code Explanation

  1. Importing Necessary Libraries - The script starts by importing several Python libraries. pandas is used for data manipulation and analysis, particularly for handling CSV files. ipaddress is a library for creating, manipulating, and operating on IPv4 and IPv6 addresses and networks. json is used for parsing JSON files, requests for making HTTP requests (though it's not directly used in the provided code), and netaddr for handling network addresses.
  2. Defining a Function to Check for IPv4 Addresses - The is_ipv4 function is defined to check if a given string is a valid IPv4 address. It attempts to create an IPv4 network from the string, and if successful, returns True, indicating the string is a valid IPv4 address. If it fails (throws a ValueError), it returns False.
  3. Reading and Filtering CSV Data - The script reads a CSV file (asa_interface.csv) using Pandas, which contains the firewall rules from the Cisco ASA. It filters these rules to include only those with non-zero hit counts, indicating they are actively used. The script then selects specific columns relevant to the migration ('Source', 'Destination', 'Service', 'Description', 'Action') and saves this filtered data into a new CSV file (asa_interface_dst.csv).
  4. Extracting Address Information - The script opens the newly created CSV file and reads each row. It splits the source and destination IP addresses (which could be individual IPs or groups) and adds them to the all_addresses list.
  5. Separating Individual IPs and Address Groups - The all_addresses list is processed to separate individual IP addresses/networks from address groups. This is done by using the is_ipv4 function. IP addresses and networks are added to list_ip_network, while address groups are added to list_object, excluding any entries that are 'any'.
  6. Processing JSON Data for Address Groups - The script loads JSON files (asa_object_group.json and asa_object.json) which contain details of the address objects and groups from ASA. It then iterates over each address group in list_object. For each group, it finds corresponding IP addresses or network objects in the JSON files. If a direct IP address (host) is found, it's added to list_ip_network. For network objects, it calculates the CIDR notation from the subnet mask and adds the network address in CIDR format to the list.
  7. Finalizing the List of IP Networks - Finally, the script processes all_objects, which contains network objects from address groups. For each object, it finds the corresponding entry in the JSON data and, similar to the previous step, adds either the direct IP or the calculated network in CIDR format to list_ip_network.

I started by using the Pandas library to clean up the CSV files. This involved dropping columns that weren't needed and also removing any rules that had a hit count of zero. This step was important to ensure that I was only working with relevant and actively used data. After cleaning the data, I separated the actual IP addresses and put them into one list, while the address-groups went into a different list.

To get the IP addresses and subnets from the address-groups, I used Netmiko to connect to the ASA and run commands like 'show run object network' and 'show object-group network'. I then used ntc_templates with TextFSM to parse this output into JSON format, which was saved to a file.
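Here's a minimal sketch of what that collection step can look like. The hostname, credentials, and output file names are placeholders, and it assumes ntc-templates ships TextFSM parsers for these ASA commands (that's what lets use_textfsm=True return structured data instead of raw CLI text).

import json
from netmiko import ConnectHandler

asa = {
    'device_type': 'cisco_asa',
    'host': '192.0.2.1',        # placeholder management IP
    'username': 'admin',
    'password': 'changeme',
    'secret': 'changeme',
}

with ConnectHandler(**asa) as conn:
    conn.enable()
    # With use_textfsm=True, Netmiko runs the matching ntc-templates parser
    # and returns a list of dictionaries instead of raw CLI output
    objects = conn.send_command('show run object network', use_textfsm=True)
    groups = conn.send_command('show object-group network', use_textfsm=True)

# Save the parsed output so the migration script can load it later
with open('files/asa_object.json', 'w') as f:
    json.dump(objects, f, indent=2)

with open('files/asa_object_group.json', 'w') as f:
    json.dump(groups, f, indent=2)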

The script processes the two lists created earlier:

  1. list_ip_network contains actual IP addresses and networks.
  2. list_object contains address-group names.

For each address-group name in list_object, the script looks up corresponding IP addresses or network objects in the JSON files (asa_object_group.json and asa_object.json). If a direct IP address is found, it's added to list_ip_network. For network objects, the script calculates the CIDR notation from the subnet mask and adds the network address in CIDR format to the list.

This process effectively converts all address objects and groups into a standardized IP address or network format suitable for pushing to Palo Alto via the REST API. Now, you can just use simple POST requests to push the objects to the Palo Alto. Here is an example of how to create an address object using the REST API.

import requests
import json

# Disable self-signed warning
requests.packages.urllib3.disable_warnings()


location = {'location': 'device-group', 'device-group': 'lab', 'name': 'google_dns'}
headers = {'X-PAN-KEY': 'YOUR_API_KEY'}
api_url = "https://Firewall_IP/restapi/v10.2/Objects/Addresses"

body = json.dumps(
    {
        "entry":
        {
            "@name": "google_dns",
            "ip-netmask": "8.8.8.8",
        }
    }
)

r = requests.post(api_url, params=location, verify=False, headers=headers, data=body)
print(r.text)

Depending on your use case, you need to use an appropriate address name in Palo Alto. You can either do a reverse nslookup to find the name of an IP, or you can prepend each IP with a string before pushing it to the firewall. For example, if an IP is 192.168.10.2/32, you can use the name addr_192.168.10.2_32. You can do this very easily with Python (a quick sketch of the reverse-lookup option follows the snippet below).

list_ip_network = ['existing objects']
post_list = []

for n in list_ip_network:
    entry = {}
    if '/' in n:
        # Subnets, e.g. 192.168.10.0/24 becomes addr_192.168.10.0_24
        subnet_name = 'addr_' + n.replace('/', '_')
        entry['@name'] = subnet_name
        entry['ip-netmask'] = n
        post_list.append(entry)
    else:
        # Host addresses, e.g. 8.8.8.8 becomes addr_8.8.8.8
        object_name = 'addr_' + n
        entry['@name'] = object_name
        entry['ip-netmask'] = n
        post_list.append(entry)
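Here's also a quick sketch of the reverse-lookup option mentioned above, using Python's built-in socket module. It falls back to the addr_ naming convention whenever there's no PTR record (the example IPs are just illustrative).

import socket

def address_name(ip_netmask):
    ip, _, prefix = ip_netmask.partition('/')
    if prefix in ('', '32'):
        try:
            # Use the DNS name if a PTR record exists for the host
            return socket.gethostbyaddr(ip)[0].split('.')[0]
        except (socket.herror, socket.gaierror):
            pass
    # Otherwise fall back to the addr_<ip>_<prefix> convention
    return 'addr_' + ip_netmask.replace('/', '_')

print(address_name('8.8.8.8/32'))       # name depends on the PTR record
print(address_name('192.168.10.0/24'))  # addr_192.168.10.0_24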

The same goes for service-objects too: you can run the appropriate show commands on the ASA, parse the output, and then push the objects to Panorama/the Palo Alto firewall.
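To give an idea of what that push looks like for a service object, here's a sketch that mirrors the address-object example above. The endpoint and field layout follow the v10.2 REST API; the object name and port are just placeholders.

import requests
import json

# Disable self-signed warning
requests.packages.urllib3.disable_warnings()

location = {'location': 'device-group', 'device-group': 'lab', 'name': 'tcp_8443'}
headers = {'X-PAN-KEY': 'YOUR_API_KEY'}
api_url = "https://Firewall_IP/restapi/v10.2/Objects/Services"

body = json.dumps(
    {
        "entry":
        {
            "@name": "tcp_8443",
            "protocol": {"tcp": {"port": "8443"}},
        }
    }
)

r = requests.post(api_url, params=location, verify=False, headers=headers, data=body)
print(r.text)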

At this point, we would have pushed all the address-objects and service-objects to the Palo Alto. Please remember, though, that any address-groups may need to be created manually in Panorama/the firewall. I could have automated this too, but I didn't have too many groups, so I decided to do it manually.
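That said, if you do want to automate the address-groups as well, the POST is very similar. Here's a sketch assuming the static members were already created as address objects in the earlier step; the group and member names are placeholders.

import requests
import json

# Disable self-signed warning
requests.packages.urllib3.disable_warnings()

location = {'location': 'device-group', 'device-group': 'lab', 'name': 'web_servers'}
headers = {'X-PAN-KEY': 'YOUR_API_KEY'}
api_url = "https://Firewall_IP/restapi/v10.2/Objects/AddressGroups"

body = json.dumps(
    {
        "entry":
        {
            "@name": "web_servers",
            "static": {"member": ["addr_192.168.10.2_32", "addr_192.168.10.3_32"]},
        }
    }
)

r = requests.post(api_url, params=location, verify=False, headers=headers, data=body)
print(r.text)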

Firewall Rules

In the last phase of migrating from Cisco ASA to Palo Alto, I focused on the firewall rules. As I've mentioned earlier, this part of the process was a mix of manual creation and custom scripting. It's important to note that the scripts I used to create the rules were tailored to my specific needs and environment, meaning they might not be directly applicable to all scenarios, so I've excluded them from this post.

However, one significant advantage I had while creating these rules was knowing that all the necessary objects already existed in the Panorama, thanks to the earlier steps in the migration process. This prior setup of address objects and groups sped up the rule creation phase considerably. When I was adding a new rule, I didn't need to worry about whether the referenced objects were present or not, as they had already been migrated and organized.
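While my rule scripts aren't included here, to give a rough idea of the shape of such a request, here's a minimal, generic sketch of a single security rule POST against the v10.2 REST API (the SecurityPreRules endpoint assumes Panorama pre-rules). The zones, objects, App-ID, and log profile names are placeholders, and a real script would build these from the cleaned-up rule data and your own security/log profile standards.

import requests
import json

# Disable self-signed warning
requests.packages.urllib3.disable_warnings()

location = {'location': 'device-group', 'device-group': 'lab', 'name': 'allow_dns'}
headers = {'X-PAN-KEY': 'YOUR_API_KEY'}
api_url = "https://Firewall_IP/restapi/v10.2/Policies/SecurityPreRules"

body = json.dumps(
    {
        "entry":
        {
            "@name": "allow_dns",
            "from": {"member": ["inside"]},
            "to": {"member": ["outside"]},
            "source": {"member": ["addr_192.168.10.0_24"]},
            "destination": {"member": ["google_dns"]},
            "application": {"member": ["dns"]},
            "service": {"member": ["application-default"]},
            "action": "allow",
            "log-setting": "default-log-profile",
        }
    }
)

r = requests.post(api_url, params=location, verify=False, headers=headers, data=body)
print(r.text)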

In summary, the combination of manual rule creation and custom scripting, backed by the pre-migration preparation of objects and groups, made this final stage of the migration more manageable.

Closing Up

If you've also experienced migrating from ASA to Palo Alto, I’d love to hear about your approach. Feel free to share your experiences in the comments below. And if you know a better or more efficient way to handle such migrations, please don't hesitate to share that as well. Your insights could be incredibly valuable to others facing similar challenges.

Written by
Suresh Vina
Tech enthusiast sharing Networking, Cloud & Automation insights. Join me in a welcoming space to learn & grow with simplicity and practicality.