AIX Patch Management

One of the key tasks for an AIX System Administrator is to carry out regular operating system upgrades and patching. IBM recommends keeping every AIX operating system up to date so that it receives the latest security and reliability fixes and remains at a supported level.

It is not always practical to carry out an ad-hoc upgrade to satisfy these recommendations, so it’s important to have a planned patching schedule to ensure servers are running the latest level of AIX. It’s normally best practice to upgrade non-production servers first and test them with any database or application before upgrading any production environment. It is also recommended that any other IBM software, such as PowerHA, is upgraded at the same time if required.

AIX upgrades can involve updating the OS version, Technology Level or Service Pack. In order to plan for this, there are a number of approaches that should be considered, and the method chosen is dependent on the existing server OS level, system availability and requirements.

Using NIM for installing updates

IBM Network Installation Management (NIM) provides an efficient and reliable method of deploying AIX upgrades over the LAN from a central repository, called the NIM master. Servers that receive OS updates from the NIM master are called NIM clients.

NIM can be used to upgrade client operating systems in two ways. The first is a fileset-based installation, which uses a NIM lpp_source object; the NIM master can host a number of different lpp_sources corresponding to different versions of the AIX OS. The alternative is a mksysb-based installation, in which the NIM master makes available a mksysb (system backup) image taken from an existing server.
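
For example, an lpp_source can be defined on the NIM master with the nim command. This is a minimal sketch; the directory and resource names below are examples:

    nim -o define -t lpp_source -a server=master \
        -a location=/export/nim/lpp_source/aix73tl01 aix73tl01_lpp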

Whichever method of client installation is chosen, an additional NIM resource is required on the NIM master – the SPOT (shared product object tree). This is a directory of code (installed filesets) that is used during the client booting procedure. For each lpp_source defined on the NIM master, an accompanying SPOT must be defined.
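
A matching SPOT can then be built from that lpp_source. Again a sketch, using the example names and paths from above:

    nim -o define -t spot -a server=master -a source=aix73tl01_lpp \
        -a location=/export/nim/spot aix73tl01_spot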

NIM can also be used to deploy automated customisations to its clients (via NIM firstboot fb_script resources), allowing consistent and repeatable configurations and toolset installations, minimising the risk of misconfigurations arising from manual steps.
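
As an illustration, a firstboot script can be registered as an fb_script resource and allocated to clients; the script path and resource name below are examples:

    nim -o define -t fb_script -a server=master \
        -a location=/export/nim/scripts/firstboot.sh firstboot_res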

NIM also significantly reduces the time required to patch a server or LPAR, enabling large environments to be upgraded with minimal manpower and cost.

High level upgrade example using nimadm (a command sketch follows the list):

  • Take a mksysb of the NIM client to the NIM master
  • Split the OS LVM mirror or present an additional disk to the NIM client
  • From the NIM master, initiate the nimadm alternate disk migration to the “spare” disk on the NIM client
  • Reboot the NIM client from the upgraded disk
  • If there are any issues with the server post-migration, it can be rebooted from the original OS disk, which can be retained until it is no longer needed
  • Re-mirror the OS disk
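
The migration step above corresponds to a single nimadm command run on the NIM master. A hedged sketch, where the client, NIM resource, cache volume group and disk names are all examples, and -Y accepts the software licence agreements:

    nimadm -c aixclient1 -l aix73tl01_lpp -s aix73tl01_spot \
        -j nimadmvg -d hdisk1 -Y
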
Advantages of using NIM

Availability

Upgrades can be carried out while the client systems remain available to users, with only a minimal outage for the reboot and testing of the servers.

Minimal Risk

A simple back-out to the original server state using the original OS disk.

Scalability

An established upgrade mechanism, fully supported by IBM, which allows multiple NIM clients to be upgraded concurrently.

Manageability

Centrally managed repositories accessed over the network

Service Update Management Assistant (SUMA)

To assist with keeping the NIM master up-to-date with the latest updates, IBM SUMA can be configured. SUMA is an automated tool that downloads fixes from an IBM fix distribution website to a local NIM master, in readiness for NIM client upgrades. SUMA can be configured to periodically check the availability of the latest software packages so updates do not have to be retrieved manually from the web.
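
For example, SUMA can be driven from the command line. A sketch, where the download directory is an example and the cron schedule runs at 02:30 every Saturday:

    # Preview which fixes would be downloaded for the latest level
    suma -x -a Action=Preview -a RqType=Latest
    # Download the latest fixes to a local repository
    suma -x -a Action=Download -a RqType=Latest -a DLTarget=/export/suma/latest
    # Create a scheduled task to repeat the download weekly
    suma -s "30 2 * * 6" -a Action=Download -a RqType=Latest -a DLTarget=/export/suma/latest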

AIX Live Update

One of the issues when trying to schedule patching, especially in a busy production environment, is organising a maintenance window to allow for a reboot of the server. In this situation, it would be worthwhile considering using AIX Live Update.

Prior to AIX 7.1, if an update changed the AIX kernel, the server would need to be rebooted. To address this, IBM provided concurrent-enabled fixes that allowed the deployment of fixes to a running LPAR. However, there were still times when fixes could not be provided as concurrent-enabled fixes, so a reboot would still be required.

From AIX Version 7.2, the Live Update function can be used to eliminate the downtime associated with AIX updates, meaning that applications on the system do not need to be stopped and a reboot is not required.

In order to do this, an AIX Live Update is initiated from the source LPAR (known as the original partition), and a copy of the original partition is automatically created as a new LPAR (known as the surrogate partition). The surrogate partition is then patched; when patching completes, the running workloads are migrated from the original partition to the patched surrogate partition, with a short “blackout” time during which the workload is automatically paused and then resumed on the surrogate partition.

The AIX Live Update process can be initiated either with the geninstall command or through NIM, and it runs in one of two modes:

Preview mode: this allows the Live Update process to be validated and gives estimated timings for the operation, blackout time and resource usage.

Automated mode: this runs the Live Update automatically which includes the creation of the surrogate partition, the patching and when successfully completed, the removal of the original partition. The original rootvg is kept and is available if required.
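
As a minimal command-line sketch, this assumes the update images are in /tmp/updates and that /var/adm/ras/liveupdate/lvupdate.data has already been populated with the disk and resource stanzas for the surrogate partition:

    # Preview mode: validate the operation and estimate timings
    geninstall -k -p -d /tmp/updates all
    # Automated mode: perform the Live Update
    geninstall -k -d /tmp/updates all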

Pre-requisites for AIX Live Update:

  • All I/O devices must be virtualised
  • Temporary CPU and memory are required for the surrogate partition
  • Two additional disks are required: one for the boot disk and one for the new rootvg of the surrogate partition

There are a number of restrictions that need to be considered when planning the AIX Live Update process; these can be found in the IBM documentation under Planning Restrictions and Live Update Concepts.

Upgrading AIX with Ansible

The Red Hat Ansible Automation Platform provides an AIX collection (IBM’s ibm.power_aix) which can be used to automate AIX upgrades. This utilises NIM to carry out the updates, and it allows multiple systems to be upgraded concurrently using the supplied Ansible playbooks and roles.

High Level Steps for using Ansible to upgrade an AIX server:

1. Install Ansible onto a Linux server (Red Hat):
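
A minimal example, assuming a RHEL control node with dnf access to the standard repositories:

    dnf install -y ansible-core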

2. Install the Ansible AIX collection (ibm.power_aix):
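
For example, installing the collection from Ansible Galaxy:

    ansible-galaxy collection install ibm.power_aix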

3. Define a default inventory file in /etc/ansible/hosts:
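
A minimal inventory sketch; the group and host names are examples:

    [aix]
    aixserver1
    aixserver2

    [aix:vars]
    ansible_user=root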

4. Ensure the SSH public key for the user on the Ansible server is configured on each of the target clients:
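
For example, using ssh-copy-id from the control node (the host name is an example):

    ssh-copy-id root@aixserver1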

5. Using the Ansible ping module, test connectivity to the hosts:
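
For example, targeting the inventory group defined above. Note that the ping module needs Python on the target, so if Python has not yet been installed (see step 6), the raw module can be used to verify SSH connectivity instead:

    ansible aix -m ping
    # Without Python on the clients, fall back to the raw module:
    ansible aix -m raw -a "uname -a"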

6. Install Python and yum/dnf on the AIX servers. As part of the Ansible collection, IBM provide a “bootstrap” role and playbook that check for these and install them if needed:
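
A sketch of a playbook that invokes the collection’s bootstrap role; the role name (power_aix_bootstrap) and the pkgtype variable should be verified against the documentation for the installed collection version:

    # bootstrap.yml
    - name: Ensure Python and dnf/yum are present on the AIX clients
      hosts: aix
      gather_facts: no
      tasks:
        - name: Run the collection bootstrap role
          include_role:
            name: ibm.power_aix.power_aix_bootstrap
          vars:
            pkgtype: dnf    # assumed variable/value; consult the role docs

It can then be run with:

    ansible-playbook bootstrap.yml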

7. Upgrade the relevant servers defined in Ansible. IBM provide an alt_disk_migration playbook within the collection, which can be customised and run:
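
A hedged sketch of a playbook wrapping the collection’s nim_alt_disk_migration role; it is run against the NIM master, and the host, client, disk and NIM resource names are all examples that must match your environment:

    # alt_disk_migration.yml
    - name: Migrate a NIM client to a new AIX level on a spare disk
      hosts: nimmaster
      gather_facts: no
      tasks:
        - name: Run the alternate disk migration role
          include_role:
            name: ibm.power_aix.nim_alt_disk_migration
          vars:
            nim_client: aixserver1
            target_disk:
              disk_name: hdisk1
            lpp_source: aix73tl01_lpp
            spot: aix73tl01_spot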

8. Once the Ansible alt_disk_migration has completed, reboot the server and it will boot from the upgraded disk. Check the result by running the following Ansible playbook:
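
A simple check playbook sketch that reports the running level from each client:

    # check_oslevel.yml
    - name: Check the AIX OS level
      hosts: aix
      gather_facts: no
      tasks:
        - name: Run oslevel -s
          ansible.builtin.command: oslevel -s
          register: oslevel_out
        - name: Display the level
          ansible.builtin.debug:
            var: oslevel_out.stdout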

9. Any server included in the Ansible alt_disk_migration will then show the new OS level:
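
For illustration only; the actual string depends on the release, Technology Level and Service Pack installed (oslevel -s reports VVRR-TL-SP-YYWW):

    # oslevel -s
    7300-01-02-2320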

Ansible can also be used to apply Technology Levels and Service Packs in the same way.

Whichever method is used, whether it be NIM, AIX Live Update or automated patching with Ansible, the key factor is to plan ahead and ensure that the patching strategy fits the business requirements and that all AIX servers are kept updated, as per IBM’s recommendation.