Interacting with NetApp APIs, Part 3

In Part 2 of this series we made our first API call and received over 200 lines of XML in response. We received so much output because we didn’t trim any of the desired-attributes, so the call retrieved about 90 pieces of information. Multiply that by the number of queries we ran (2) and you get 180 pieces of information being requested.

This post will discuss why you should limit your queries to only the pieces of information you’re interested in. It will also cover how you can use ZExplore to convert its XML queries into code in languages such as Python, Perl and Ruby.
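
To give you a feel for what that looks like, here’s a rough Python sketch of the kind of trimmed-down call I mean. This is my own illustration rather than anything lifted from the post: it assumes the NetApp Manageability SDK’s Python bindings (NaServer/NaElement) are installed, and the hostname, credentials, ONTAPI version and chosen attributes are all placeholders.

    # Rough sketch only: NaServer/NaElement come from the NetApp Manageability
    # SDK (not pip), and the host, credentials and ONTAPI version are placeholders.
    from NaServer import NaServer
    from NaElement import NaElement

    srv = NaServer("cluster.example.com", 1, 21)
    srv.set_transport_type("HTTPS")
    srv.set_style("LOGIN")
    srv.set_admin_user("admin", "password")

    api = NaElement("volume-get-iter")

    # Ask only for the attributes we care about, instead of the ~90 defaults.
    desired = NaElement("desired-attributes")
    vol_attrs = NaElement("volume-attributes")
    id_attrs = NaElement("volume-id-attributes")
    id_attrs.child_add_string("name", "")
    space_attrs = NaElement("volume-space-attributes")
    space_attrs.child_add_string("size-used", "")
    vol_attrs.child_add(id_attrs)
    vol_attrs.child_add(space_attrs)
    desired.child_add(vol_attrs)
    api.child_add(desired)

    result = srv.invoke_elem(api)
    if result.results_status() != "passed":
        raise RuntimeError(result.results_reason())
    print(result.sprintf())   # a handful of fields per volume, not 200+ lines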

Continue reading

Interacting with NetApp APIs, Part 2

Picking up where I left off in Part 1 of this series, let’s continue our exploration of ZExplore :)

Mandatory Parameters

In Part 1 I touched on the fact that the API documentation can be a little confusing when it comes to mandatory fields. Unfortunately, the same is true of ZExplore. However, NetApp’s documentation explains it well:

Red colored arrows indicate mandatory parameters whereas Blue colored arrows indicate optional parameters. 

Note: In some APIs when either one of the two input parameters is required, both these input parameters are marked by “Blue” color arrow and not “Red”.

As I mentioned in my previous post, by doing this NetApp avoids confusing users who might otherwise try to set both parameters if they were both marked as required.

Continue reading

Interacting with NetApp APIs, Part 1

If you’re a regular reader of this blog, you’ll have noticed that I’ve been posting about automation and Python quite a lot recently. That’s because it’s not only fun, but I feel it’s the way of the future. But I digress…

The reason for this post is to discuss my recent experience with NetApp’s APIs. As I got off to a pretty slow start (which I feel was due to a lack of documentation), I’ll also provide setup and usage guidance in the hope that you can get up and running sooner than I did.

Continue reading

NetApp From the Ground Up – A Beginner’s Guide Part 13

Snap Reserve

Overview

The Snapshot copy reserve sets aside a specific percentage of disk space for storing Snapshot copies. If the Snapshot copies exceed the reserve space, they spill into the active file system; this process is called Snapshot spill.

The Snapshot copy reserve must have sufficient space allocated for the Snapshot copies. If the Snapshot copies exceed the reserve space, you must delete existing Snapshot copies to recover the space for the active file system’s use. You can also modify the percentage of disk space that is allotted to Snapshot copies.

Defaults

For traditional volumes, the default Snapshot copy reserve is set to 20 percent of the disk space. For FlexVol volumes, the default Snapshot copy reserve is set to 5 percent of the disk space.

The active file system cannot consume the Snapshot copy reserve space, but the Snapshot copy reserve, if exhausted, can use space in the active file system.
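
If you’d rather check or change the reserve over the API than at the CLI, a minimal sketch using the Manageability SDK’s Python bindings might look like the one below. The snapshot-get-reserve and snapshot-set-reserve element and field names are assumptions on my part (verify them in ZExplore before relying on them), and the host, credentials and volume name are placeholders.

    # Minimal sketch, assuming the 7-Mode ONTAPI calls snapshot-get-reserve /
    # snapshot-set-reserve exist with these fields -- verify in ZExplore first.
    from NaServer import NaServer
    from NaElement import NaElement

    srv = NaServer("filer.example.com", 1, 15)   # placeholder host and version
    srv.set_transport_type("HTTPS")
    srv.set_style("LOGIN")
    srv.set_admin_user("admin", "password")

    # Read the current reserve for a volume.
    get_req = NaElement("snapshot-get-reserve")
    get_req.child_add_string("volume", "vol1")
    out = srv.invoke_elem(get_req)
    if out.results_status() != "passed":
        raise RuntimeError(out.results_reason())
    print("Current reserve: %s%%" % out.child_get_string("percent-reserved"))

    # Change it, e.g. back to the FlexVol default of 5 percent.
    set_req = NaElement("snapshot-set-reserve")
    set_req.child_add_string("volume", "vol1")
    set_req.child_add_string("percentage", "5")
    out = srv.invoke_elem(set_req)
    print(out.results_status())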

Continue reading

NetApp From the Ground Up – A Beginner’s Guide – Index

Below is a list of all the posts in the “NetApp From the Ground Up – A Beginner’s Guide” series:

Continue reading

NetApp From the Ground Up – A Beginner’s Guide Part 12

Volume and Aggregate Reallocation

Summary

  • Volume Reallocation: Spreads a volume across all disks in an aggregate.
  • Aggregate Reallocation: Optimises free space in the aggregate by ensuring free space is contiguous.

Details

One of the most misunderstood topics I have seen with NetApp FAS systems is reallocation. There are two types of reallocation that can be run on these systems: one for files and volumes, and another for aggregates. Both processes run in the background, and although the goal of each is to optimize the placement of data blocks, they serve different purposes. Below is a picture of a four-disk aggregate with two volumes, one orange and one yellow.

[Image: ag1, a four-disk aggregate containing two volumes]

If we add a new disk to this aggregate and don’t run a volume-level reallocation, all new writes will land in the area of the aggregate that has the most contiguous free space. As we can see from the picture below, that area is the new disk. Since new data is usually the most frequently accessed data, this single disk ends up servicing most of your reads and writes, which creates a “hot disk” and performance issues.
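
For what it’s worth, a volume-level reallocation can also be kicked off over the API. The sketch below is purely illustrative: the reallocate-start call and its parameters are my assumption about the 7-Mode ONTAPI, so confirm the exact element names in ZExplore, and the path and connection details are placeholders.

    # Illustrative sketch: reallocate-start and its fields are assumed from the
    # 7-Mode ONTAPI -- confirm them in ZExplore before using this anywhere real.
    from NaServer import NaServer
    from NaElement import NaElement

    srv = NaServer("filer.example.com", 1, 15)   # placeholder host and version
    srv.set_transport_type("HTTPS")
    srv.set_style("LOGIN")
    srv.set_admin_user("admin", "password")

    # Spread /vol/vol_orange back across all disks in the grown aggregate.
    req = NaElement("reallocate-start")
    req.child_add_string("path", "/vol/vol_orange")
    req.child_add_string("force", "true")   # assumed flag for a one-off full pass

    out = srv.invoke_elem(req)
    print(out.results_status(), out.results_reason() or "")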

Continue reading

NetApp From the Ground Up – A Beginner’s Guide Part 11

Capacity

Right-Sizing

Disk drives from different manufacturers may differ slightly in size even though they belong to the same size category. Right-sizing ensures that disks are compatible regardless of manufacturer: Data ONTAP right-sizes disks to compensate for different manufacturers producing different raw-sized disks.

More Information

Much has been said about usable disk storage capacity and, unfortunately, many of us take the marketing capacity number given by the manufacturer at face value. For example, 1TB does not really equate to 1TB in usable terms, and that is something you engineers out there should be explaining to your customers.

NetApp has, since the beginning, been subjected to scrutiny from customers and competitors alike about its usable capacity, and I intend to correct this misconception. The key to clearing it up is understanding capacity before right-sizing (BR) and after right-sizing (AR).

(Note: right-sizing in the NetApp world is well documented and widely accepted, albeit with differing views. It is part of how WAFL uses the disks, but be aware that not many other storage vendors publish their right-sizing process, if they have one at all.)
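
To put a rough number on one slice of that gap (the decimal-versus-binary unit difference, which applies before right-sizing removes anything further), the arithmetic looks like this:

    # Decimal "marketing" terabytes versus binary tebibytes -- this is only part
    # of the BR/AR story; right-sizing trims additional capacity on top of it.
    marketed_bytes = 1 * 10**12           # "1TB" as printed on the drive
    tib = marketed_bytes / 2**40          # the same capacity in binary TiB

    print("1 TB (decimal) = {:,} bytes".format(marketed_bytes))
    print("               = {:.3f} TiB (binary)".format(tib))
    print("Shortfall      = {:.1f}% before right-sizing is even applied".format((1 - tib) * 100))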

Continue reading

NetApp From the Ground Up – A Beginner’s Guide Part 10

OnCommand Overview

  • Reference: NetApp Training – Fast Track 101: NetApp Portfolio

OnCommand management software helps your customers to monitor and manage their NetApp storage as well as multi-vendor storage environments, offering cost-effective and efficient solutions for their clustered, virtualized and cloud environments. With OnCommand, our customers are able to optimize utilization and performance, automate and integrate processes, minimize risk and meet their SLAs. Our objective is to simplify the complexity of managing today’s IT infrastructure, and improve the efficiency of storage and service delivery.

Multiple Clustered NetApp Systems

  • Reference: NetApp Training – Fast Track 101: NetApp Portfolio

Manage and automate your NetApp storage at scale. For your customers who are growing and require a solution to manage multiple clustered NetApp systems, they can turn to OnCommand Unified Manager, Performance Manager, and Workflow Automation. These three products work together to provide a comprehensive solution for today’s software-defined data center. Also your customers can analyze their complex virtualized environment and cloud infrastructure using NetApp OnCommand Balance.

Continue reading

NetApp From the Ground Up – A Beginner’s Guide Part 9

SnapRestore

NetApp SnapRestore software uses stored Snapshot copies to recover entire file systems or data volumes in seconds.

Whether you want to recover a single file or a multi-terabyte data volume, SnapRestore software makes data recovery automatic and almost instantaneous, regardless of your storage capacity or number of files. With a single simple command, you can choose and recover data from any NetApp Snapshot copy on your system.

Whereas traditional data recovery requires that all the data be copied from the backup to the source, the SnapRestore process is fast and takes up very little of your storage space. With SnapRestore, you can:

  • Restore data files and databases fast
  • Test changes with easy restores to your baseline copy
  • Recover at once from virus attacks, or after user or application error

In addition, SnapRestore software requires no special training, which reduces both the likelihood of operator error and your costs to maintain specialized staffing resources.
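
That “single simple command” also has an API-level counterpart. Treat the sketch below as an illustration only: the snapshot-restore-volume element and its fields are my assumption (check them in ZExplore), and the host, volume and Snapshot copy names are placeholders.

    # Sketch only: snapshot-restore-volume and its fields are assumptions from
    # the ONTAPI -- verify in ZExplore. Volume/snapshot names are placeholders.
    from NaServer import NaServer
    from NaElement import NaElement

    srv = NaServer("filer.example.com", 1, 15)   # placeholder host and version
    srv.set_transport_type("HTTPS")
    srv.set_style("LOGIN")
    srv.set_admin_user("admin", "password")

    # Revert the whole volume to a previous Snapshot copy in one call.
    req = NaElement("snapshot-restore-volume")
    req.child_add_string("volume", "vol1")
    req.child_add_string("snapshot", "nightly.0")

    out = srv.invoke_elem(req)
    if out.results_status() != "passed":
        raise RuntimeError(out.results_reason())
    print("vol1 reverted to nightly.0")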

SnapManager

The more a backup application understands about the way an application works, the more efficient the backup process will be. Unfortunately, back-end storage systems typically know little or nothing about the application data they contain, so you either have to use brute-force methods to perform backups on the storage system or you have to let each application perform its own backup. Neither alternative is particularly desirable.

Continue reading

NetApp From the Ground Up – A Beginner’s Guide Part 8

HA Pair

Summary

An HA pair is basically two controllers, each of which has connections to both its own shelves and its partner’s shelves. When one of the controllers fails, the other one takes over; this is called Cluster Failover (CFO). The controllers’ NVRAMs are mirrored over the NVRAM interconnect link, so even data that hasn’t been committed to disk isn’t lost.

Note: an HA pair can’t fail over when a disk shelf fails, because the partner doesn’t have a copy of that data to service requests from.
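
Tying this back to the API series, the takeover state is something you can query in a single call. Again, treat the sketch below as an assumption to verify in ZExplore rather than gospel: the cf-status element and its is-enabled/state fields are my guess at the 7-Mode ONTAPI, and the connection details are placeholders.

    # Sketch: cf-status and its output fields are assumed from the 7-Mode ONTAPI.
    from NaServer import NaServer
    from NaElement import NaElement

    srv = NaServer("filer.example.com", 1, 15)   # placeholder host and version
    srv.set_transport_type("HTTPS")
    srv.set_style("LOGIN")
    srv.set_admin_user("admin", "password")

    out = srv.invoke_elem(NaElement("cf-status"))
    if out.results_status() != "passed":
        raise RuntimeError(out.results_reason())

    # Typical states include "connected" (normal) and "takeover" (CFO has fired).
    print("Failover enabled:", out.child_get_string("is-enabled"))
    print("Partner state   :", out.child_get_string("state"))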

Mirrored HA Pair

Summary

You can think of a Mirrored HA Pair as an HA pair with SyncMirror running between the two systems. You can implement almost the same configuration on an HA pair with SyncMirror inside (not between) each system, because the odds of a whole storage system (controller plus shelves) going down are very low. But mirroring between two systems can give you extra peace of mind.

Unlike MetroCluster, it cannot fail over automatically when one of the storage systems goes down; the whole process is manual. The reasonable question here is: why can’t it fail over if it has a copy of all the data? Because MetroCluster is separate functionality, which performs all the checks and carries out the cutover to the mirror; this is called Cluster Failover on Disaster (CFOD). SyncMirror is only a mirroring facility and doesn’t even know that the cluster exists.

Continue reading