- Reference: NetApp Training – Fast Track 101: NetApp Portfolio
OnCommand management software helps your customers to monitor and manage their NetApp storage as well as multi-vendor storage environments, offering cost-effective and efficient solutions for their clustered, virtualized and cloud environments. With OnCommand, our customers are able to optimize utilization and performance, automate and integrate processes, minimize risk and meet their SLAs. Our objective is to simplify the complexity of managing today’s IT infrastructure, and improve the efficiency of storage and service delivery.
Multiple Clustered NetApp Systems
- Reference: NetApp Training – Fast Track 101: NetApp Portfolio
Manage and automate your NetApp storage at scale. Customers who are growing and need to manage multiple clustered NetApp systems can turn to OnCommand Unified Manager, Performance Manager, and Workflow Automation. These three products work together to provide a comprehensive solution for today’s software-defined data center. Your customers can also analyze their complex virtualized environments and cloud infrastructure using NetApp OnCommand Balance.
In my previous post Connecting the NetApp Simulator to your Virtual and Physical Labs, I explained the steps you need to follow in order to connect the NetApp simulator to GNS3. By doing this you’re able to connect the simulator to Cisco routers, Virtual Steelheads, ASA firewalls, F5 load balancers… to put it simply, just about any physical or virtual piece of equipment you can think of! This entry builds on that post and demonstrates how, with just a few extra steps, you’re able to trunk VLANs between the simulator and GNS3.
To demonstrate its capabilities, I’ll explain three different ways GNS3 can handle the VLANs passed to it by the simulator. (Note that additional methods become available when you integrate other appliances into GNS3. For example, you could have the simulator hand off the VLANs to a Palo Alto firewall.)
Creating VLANs & LIFs on the Simulator
Let’s begin by configuring the simulator side. We’ll need to create the VLANs and LIFs which we plan to make accessible to the device(s) in GNS3. For this example I’ll configure them in the following way:
- e0d-20 = 10.0.20.9 /24
- e0d-30 = 10.0.30.9 /24
- e0d-40 = 10.0.40.9 /24
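On a clustered Data ONTAP simulator, the VLAN ports and LIFs above can be created along these lines. This is a sketch only: the node name cluster1-01, the SVM name svm1, and the LIF names are placeholders for your own environment, and exact options may vary by ONTAP release.

```
::> network port vlan create -node cluster1-01 -vlan-name e0d-20
::> network port vlan create -node cluster1-01 -vlan-name e0d-30
::> network port vlan create -node cluster1-01 -vlan-name e0d-40

::> network interface create -vserver svm1 -lif lif_vlan20 -role data -home-node cluster1-01 -home-port e0d-20 -address 10.0.20.9 -netmask 255.255.255.0
::> network interface create -vserver svm1 -lif lif_vlan30 -role data -home-node cluster1-01 -home-port e0d-30 -address 10.0.30.9 -netmask 255.255.255.0
::> network interface create -vserver svm1 -lif lif_vlan40 -role data -home-node cluster1-01 -home-port e0d-40 -address 10.0.40.9 -netmask 255.255.255.0
```

Creating the `e0d-<vlan>` ports first is what tags the traffic; each LIF then homes on its matching VLAN port so the simulator presents one IP per VLAN to GNS3.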
I have written a few pieces on making the most out of your virtual and physical labs. These include:
Today I’m going to cover how you can add another piece of virtual equipment to your lab – the NetApp Simulator.
Note: As there are plenty of NetApp Simulator documents, blog posts and forum discussions, I won’t be covering things like licence keys, initial configuration, etc.
As with my previous guides, GNS3’s “cloud” object is used to connect the simulator to the rest of the virtual and physical lab. For those who are unfamiliar with the cloud object, it allows you to bind your network adapters (both physical and virtual) to an object inside of your GNS3 topology. This object can then be connected to your GNS3 equipment and/or other clouds, enabling the devices to communicate with one another.
NetApp SnapRestore software uses stored Snapshot copies to recover entire file systems or data volumes in seconds.
Whether you want to recover a single file or a multi-terabyte data volume, SnapRestore software makes data recovery automatic and almost instantaneous, regardless of your storage capacity or number of files. With a single simple command, you can choose and recover data from any NetApp Snapshot copy on your system.
Whereas traditional data recovery requires that all the data be copied from the backup to the source, the SnapRestore process is fast and takes up very little of your storage space. With SnapRestore, you can:
- Restore data files and databases fast
- Test changes with easy restores to your baseline copy
- Recover at once from virus attacks, or after user or application error
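As a rough sketch of that “single simple command” in clustered Data ONTAP syntax (the vserver, volume, and snapshot names here are placeholders), a whole-volume revert looks like this:

```
::> volume snapshot show -vserver svm1 -volume vol1
::> volume snapshot restore -vserver svm1 -volume vol1 -snapshot daily.2016-01-01_0010
```

A single-file restore follows the same pattern with `volume snapshot restore-file` and a `-path` argument; in both cases the volume is reverted to the Snapshot copy in place rather than copied back from a backup target.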
In addition, SnapRestore software requires no special training, which reduces both the likelihood of operator error and your costs to maintain specialized staffing resources.
The more a backup application understands about the way an application works, the more efficient the backup process will be. Unfortunately, back-end storage systems typically know little or nothing about the application data they contain, so you either have to use brute-force methods to perform backups on the storage system or you have to let each application perform its own backup. Neither alternative is particularly desirable.
SyncMirror mirrors aggregates and works at the RAID level. You can configure mirroring between two shelves of the same system and prevent an outage in the case of a shelf failure.
SyncMirror uses the concept of plexes to describe mirrored copies of data. You have two plexes: plex0 and plex1. Each plex consists of disks from a separate pool: pool0 or pool1. Disks are assigned to pools depending on cabling. Disks in each of the pools must be in separate shelves to ensure high availability. Once the shelves are cabled, you enable SyncMirror and create a mirrored aggregate using the following syntax:
aggr create aggr_name -m -d disk-list -d disk-list
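For example (7-Mode syntax; the disk names below are placeholders), a mirrored three-disk-per-plex aggregate might be created with one disk list drawn from each pool:

```
# first -d list -> plex0 (pool0), second -d list -> plex1 (pool1)
aggr create aggr_mir -m -d 0a.16 0a.17 0a.18 -d 0b.16 0b.17 0b.18
```

The `-m` flag requests mirroring, and Data ONTAP will reject the command if the two disk lists don’t come from opposite pools.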
WAFL is our Write Anywhere File Layout. If NVRAM’s role is the most commonly misunderstood, WAFL comes in second. Yet WAFL has a simple goal, which is to write data in full stripes across the storage media. WAFL acts as an intermediary of sorts — there is a top half where files and volumes sit, and a bottom half that interacts with RAID, manages Snapshots, and handles some other things. WAFL isn’t a filesystem, but it does some things a filesystem does; it can also contain filesystems. WAFL contains mechanisms for dealing with files and directories, for interacting with volumes and aggregates, and for interacting with RAID. If Data ONTAP is the heart of a NetApp controller, WAFL is the blood that it pumps.
Although WAFL can write anywhere we want, in reality we write where it makes the most sense: in the closest place (relative to the disk head) where we can write a complete stripe in order to minimize seek time on subsequent I/O requests. WAFL is optimized for writes, and we’ll see why below. Rather unusually for storage arrays, we can write client data and metadata anywhere.
- Vserver: contains one or more FlexVol volumes, or a single Infinite Volume
- Volume: is like a partition that can span multiple physical disks
- LUN: is a big file inside the volume. The LUN is what gets presented to the host.
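That hierarchy can be walked top-down in clustered Data ONTAP roughly as follows. This is a sketch under assumptions: every name, size, and igroup here is a placeholder, and your options will differ.

```
::> vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1
::> volume create -vserver svm1 -volume vol1 -aggregate aggr1 -size 100g
::> lun create -vserver svm1 -path /vol/vol1/lun1 -size 50g -ostype linux
::> lun map -vserver svm1 -path /vol/vol1/lun1 -igroup ig_host1
```

The vserver owns the volume, the volume (carved from an aggregate) contains the LUN file, and the `lun map` step is what actually presents it to the host.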
RAID, Volumes, LUNs and Aggregates
- An aggregate is the physical storage. It is made up of one or more raid groups of disks.
- A LUN is a logical representation of storage. To the client it looks like a hard disk; on the storage system it is a file inside a volume.
RAID groups are protected sets of disks, consisting of one or two parity disks and one or more data disks. We don’t build RAID groups; they are built automatically behind the scenes when you build an aggregate. For example:
In a default configuration you get RAID-DP and a 16-disk RAID group (assuming FC/SAS disks). So, if I create a 16-disk aggregate I get one RAID group. If I create a 32-disk aggregate, I get two RAID groups. RAID group size can be adjusted. For FC/SAS disks a group can be anywhere from 3 to 28 disks, with 16 being the default. You may be tempted to change the size, so I have a quick and dirty summary of reasons.
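As a sketch of the 32-disk example (7-Mode-style syntax; the aggregate name is a placeholder), the RAID group size can be set explicitly at creation time with `-r`:

```
# 32 disks with raidsize 16 -> two RAID-DP groups of 14 data + 2 parity disks each
aggr create aggr1 -t raid_dp -r 16 32
```

Leaving `-r` at the default gives the same result here; you would only change it when the disk count doesn’t divide evenly into full groups, trading usable capacity against parity overhead and rebuild times.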