Let’s Talk Dirty Shutdowns (NetApp ONTAP Update Problems!)


Are Dirty Shutdowns causing your NetApp ONTAP Update to Loop?

In this post, I’ll share what I learned recently about NetApp ONTAP v8.x. Let’s talk “Dirty-Shutdowns!”

Beginner Alert:

Bookmark this page, because someday you might need it. This is a must-know for newbies managing NetApp storage.

What are Dirty-Shutdowns?

A dirty shutdown can happen for a number of reasons, for example:

  • A software bug has caused a filer to fail over and trigger a loop. The “dirty-shutdown” is now keeping you from updating ONTAP to fix the bug.
  • Or a loop was caused by a forced take-over during a normal ONTAP update. The “dirty shutdown” is now keeping you from completing the normal ONTAP update.

Both cases leave the filer head flagged with a dirty shutdown, which forces ONTAP to keep booting from the primary flash partition (where the old image lives) — hence the loop.

Here’s the Scenario:

You’ve decided to update your NetApp software to a new version of ONTAP v8.x. While going through the documented process of clicks, takeovers, and reboots, something goes terribly wrong, and now your filer head is looping back to the old firmware.

After reviewing your steps and freaking out because this is a production system and now you are stuck failed-over on one filer, you call technical support.

From here you go through the rigmarole of redoing all the steps you just did, again.

Then, after about an hour, you get transferred and rehash all the steps with someone else. This time they want you to host the update image on a web server so you can do a NETBOOT, which installs the upgrade over the primary OS partition.

From here things really get hairy because you don’t have a web server available to do a NETBOOT and time is ticking away. Now it’s 3 AM and your change window is quickly running out.

Jumping ahead…you finally set up a web server, pull down the file, run the update, and reboot. Success. You are back up on the new version. Then you do the takeover and rerun the update on the second filer.
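If you ever find yourself in that 3 AM scramble, standing up a throwaway web server for the NETBOOT can be as simple as the sketch below. All paths, filenames, and the port here are examples, not anything NetApp-specific — the image name `ontap-netboot.tgz` is a placeholder for whatever you downloaded from support:

```shell
# Sketch: serve the downloaded ONTAP netboot image over HTTP
# from a scratch directory, using Python's built-in web server.
mkdir -p /tmp/netboot && cd /tmp/netboot
# (copy your downloaded image here, e.g.: cp ~/Downloads/ontap-netboot.tgz .)
python3 -m http.server 8080 &
HTTP_PID=$!
sleep 1
# Sanity-check that the filer will be able to reach the listing:
curl -s http://localhost:8080/ | grep -q "Directory listing" && echo "server OK"
kill "$HTTP_PID"
```

From there you would point the filer at that URL from its boot prompt (e.g. something like `netboot http://<your-host>:8080/ontap-netboot.tgz` — host, port, and filename all being your own values).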

This should have been quick and non-disruptive, but instead it took 3–4 hours and a call to NetApp support.

Why did this happen?

Here’s the Explanation:

I’ve had to endure this problem on production systems in the past, and here’s what I found out. The new version of ONTAP, v8.x, now runs on BSD. There is a trigger, not present in v7.x, that detects what is known as a dirty shutdown and forces ONTAP to boot to the primary flash partition.

Here’s the Solution:

When your filer is looping after a dirty shutdown, you can try this option before resorting to the NETBOOT.

Interrupt the boot and get to the LOADER prompt (the prompt reads LOADER-A or LOADER-B depending on the controller).

For example: “LOADER-B> boot_backup” to boot to the secondary partition, or “boot_primary” for the primary partition.
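A rough sketch of what that session looks like (the exact boot banner varies by platform and release, so treat this as illustrative, not verbatim):

```
*** Interrupt autoboot (Ctrl-C) to reach the boot loader ***

LOADER-B> boot_backup     # boot from the secondary (backup) flash partition
...or, to go back the other way on a later attempt:
LOADER-B> boot_primary    # boot from the primary flash partition
```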

If this works you will bypass the primary and boot to the secondary partition where the updated OS should be sitting.

If this fails then the NETBOOT is the next option.

Note: Always call your technical support when you are not sure.


Storage is a big part of virtualization and knowing how to resolve problems fast is important for managing your virtual service.

Remember, no matter how much you spend on hardware and software — outages will still happen, guaranteed!

To be indispensable you need to learn how to get your services back up with the least amount of disruption.

I hope this “Dirty-Talk” has helped you!
