Increasing storage and resiliency in my Nexenta array!

Hello all,

I have a drive issue in my Nexenta storage array, and I want to increase the amount of storage. I originally built it with five 1 TB drives, and I want to improve resiliency so I can handle a drive failure without an issue. When I discussed this with a friend - Michael - he suggested I also add some cache. So that is what I am going to do.

I have six slots to work with, but one holds the 120 GB SSD that I boot from, so I have five slots, each with a 1 TB drive in it. I am going to pull all five out and put in four 2 TB drives - three for data and one as a spare - plus one 480 GB SSD for the cache.

Let's get started!

  • We need a backup of those VMs.
  • We need to vacate the array - all VMs need to move off! Storage vMotion or migration works great for this.
  • Now we need to remove the NFS share from my array.

  • Now we remove our pool. Make sure you are not removing the system pool, which is normally rpool.

  • Now we remove all the drives. In my case I will leave in the 120 GB SSD that holds my OS.
  • Insert the new drives. I have four 2 TB drives - three for data and one as a spare for resiliency - and one 480 GB SSD for the read cache (which is much bigger than it needs to be).

  • We can see SSD 0, which is my boot SSD, and SSD 5, which will be my read cache. One of the 1.8 TiB drives will be my spare, and the other three will hold my data.
  • Now we create our pool. Unlike in the past, when I would use all of the drives, this time I select only three of the 2 TB drives.

  • Select - using the + sign - the first three drives to use for your pool.

  • When prompted for the cache, select the 480 GB SSD.

  • Skip the Log option, as it is a write cache and needs more resources than I have.
  • When prompted for a spare, select the last 2 TB drive.

  • Now use the Create Pool button to create your pool.

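For reference, NexentaStor pools are ZFS pools under the hood, so the wizard steps above correspond roughly to a single `zpool create` on a plain ZFS system. This is only a sketch - the pool name and device names are made up, and I am assuming a single-parity RAIDZ layout, which matches the capacity reported later:

```shell
# Sketch only - device names are hypothetical; list yours first
# with "format" or "zpool status".
zpool create pool1 raidz1 c0t1d0 c0t2d0 c0t3d0 \
  cache c0t5d0 \
  spare c0t4d0
```

The `cache` vdev is the L2ARC read cache the wizard asked about, and `spare` is the hot spare that covers a drive failure.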
  • If you use the gear icon and select Status, followed by selecting Disks, you will see something like below, which is a nice confirmation of what we did.

  • Now we need to change from the Pools tab to the Filesystems tab so we can create our NFS share.
  • We use the gear icon and select Add New Filesystem.

  • In the next screen I only add a name - nfs01 in this case.

  • Now we have a share created, and we need to share it out using NFS.

  • We will be prompted with the export path - so be sure to write it down. This is also where you could add some security, such as allowing only certain hosts to access this share. But in my case I have a dedicated and private - non-routing - storage network, so I do not do anything in this dialog.

So we have less space now, a spare drive, and a read cache - quite an improvement. Wait, why less space? It turns out my three 2 TB data drives are only providing 3.51 TiB, which is a bit low - I expected somewhere between 4 and 5 TiB. But I am asking around about that. Update: it turns out that my 2 TB drives are actually 1.8 TiB, and between that and the RAID overhead I get only the 3.51 TiB.
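
The arithmetic behind that update can be checked quickly. This is a minimal sketch, assuming a single-parity RAIDZ-style layout (three drives in the pool, one drive's worth consumed by parity); the ~3.5% metadata/reservation overhead is my estimate, not a figure the appliance reports:

```python
# Why "2 TB" drives end up as ~1.8 TiB, and why three of them in a
# single-parity pool show roughly 3.51 TiB usable.

TB = 10**12   # marketing terabyte (decimal)
TiB = 2**40   # tebibyte (binary), which is what the UI reports

drive_tib = 2 * TB / TiB                 # one "2 TB" drive
print(f"one drive: {drive_tib:.2f} TiB")            # ~1.82 TiB

pool_drives = 3                           # drives selected for the pool
parity = 1                                # single parity costs one drive
usable = (pool_drives - parity) * drive_tib
print(f"raw usable: {usable:.2f} TiB")              # ~3.64 TiB

# ZFS-style pools also keep space for metadata and a reservation;
# assuming a few percent of loss lines up with the 3.51 TiB seen.
print(f"minus ~3.5% overhead: {usable * 0.965:.2f} TiB")
```

So the "missing" space is mostly the decimal-vs-binary difference plus the parity drive, with a small slice of filesystem overhead on top.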

I have mounted the export to my two clusters and all is good!
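
Mounting the export on the ESXi hosts can be done through the vSphere client, or from the CLI on each host. This is a sketch with a made-up array address and export path - use the path you wrote down from the share wizard:

```shell
# Hypothetical array IP and export path - substitute your own.
esxcli storage nfs add -H 10.0.0.50 -s /volumes/pool1/nfs01 -v nfs01
esxcli storage nfs list   # confirm the datastore is mounted
```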

