My Synology in one of my labs is being replaced by a NetApp. Sort of interesting in that they both connect to AD and provide NFS and SMB. So they are similar, and yet so different. I did projects with NetApp gear in the past when I was a PS guy, but I had someone who loved playing with them so they did the config. So I thought I would do this one all by myself - thought it might be fun and interesting. Good grief. Here are my notes on bringing a FAS2040 to life.
The excitement starts after the rack and stack.
I install the NetApp OnCommand System Manager on my local Windows admin desktop. I won’t share how hard it was to find that utility or how hard it was to know which utility to use.
But it is installed, and yet it cannot connect to the array. This is what I get.
Of course my credentials are fine as I can connect to the console. A friend suggests I need to do a few little things. What I don’t hear from him - thanks Eric - is what are you doing - are you crazy - doing this yourself?
I log in on the console and execute two commands:
options ssl.enable on
options tls.enable on
Now I can log in. Where do I start? I just wing it.
I see a vol0 and figure it has the main OS on it so I ignore it.
- Licensing - Config \ System Tools.
- DNS - Config \ Networking
- Networking - I have a lot of ports, but I pick what is connected and IP it. Where it got weird for me is VLANs. You create the VLAN, which attaches it to a physical interface, and then you assign the IP address to the VLAN interface itself, not to the physical port.
- Now connect the array to AD - Protocols \ CIFS
- Check Protocols \ NFS section and confirm the versions you are using.
- Aggregates - I do two, and give roughly half the disks to each. I call them aggr1 and aggr2. RAID-DP, Enable Flash=no, Enable Mirror=no. I have one spare drive left over, and one broken drive.
- Volumes - I do two. vol01 on aggr1 and vol02 on aggr2. I enable storage efficiency with the defaults. BTW, I found it very easy to make volumes that had no space in them. Read the creation screen carefully, and after you create them make sure they have the space you think they should.
- This is where I would do LUNs if I wanted iSCSI. But I don’t - who likes looking after iSCSI anyway?
- Exports - NFS. I use vol01, which is now /vol/vol01. Check the security style and make sure it is UNIX for ESXi - it defaults to NTFS. If you miss this step you get an odd error when trying to mount the datastore in vSphere. Also make sure that you configure client permissions for each ESXi host, and allow root access.
- Shares / CIFS - create the share at /vol/vol02. You cannot browse AD users, but you can type in a name - mwhite, for example - and when you save it will show as domain\mwhite. Disable virus scanning.
- I remove the default Home share. Leaving it there is misleading.
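For my own future reference, most of the steps above can also be done from the console instead of System Manager. This is a sketch from memory for Data ONTAP 7-mode - the interface name, disk counts, sizes, share name, and host/user names (e0a, esx01, esx02, mwhite) are placeholders for my setup, so check the syntax against your ONTAP version before running anything:

```
# Networking: create a VLAN on a physical port, then IP the VLAN interface
vlan create e0a 100
ifconfig e0a-100 192.168.100.10 netmask 255.255.255.0

# Aggregates: RAID-DP, roughly half the disks each
aggr create aggr1 -t raid_dp 10
aggr create aggr2 -t raid_dp 10

# Volumes, with deduplication (storage efficiency) turned on
vol create vol01 aggr1 500g
vol create vol02 aggr2 500g
sis on /vol/vol01
sis on /vol/vol02

# NFS for ESXi: UNIX security style, export with root access for each host
qtree security /vol/vol01 unix
exportfs -p rw=esx01:esx02,root=esx01:esx02 /vol/vol01

# CIFS: join AD (interactive wizard), create the share,
# grant a user access, disable virus scanning
cifs setup
cifs shares -add users /vol/vol02
cifs access users DOMAIN\mwhite "Full Control"
vscan off
```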
Now we have an NFS share for our hosts, and a CIFS share for my users. I know that I have missed many things - not least managing snapshots, HA, and who knows what else. I will deal with that later - especially once I actually know more about what I am missing.
Thanks for reading! And, BTW, this was written with the support of the Black-Eyed Peas - The END and the Beginning. Thanks Duncan for recommending them.
BTW, this article is mostly so that if I have to do it again I have something to help. If this helps anyone else out, and you think I need more info, or more screenshots let me know.
Suggestions, comments are definitely welcome!
Michael