Building out my Nexenta Shared Storage for my home lab

Hi there,

While I have a Synology and a QNAP NAS in my home lab – and I like both of them – I also have a FalconStor array providing the real shared storage.  Each of the NAS devices can handle only one or two VMs running on it.  The FalconStor array has been a real champ though – it has not been patched or adjusted in any way for three or maybe four years and I can run a fair number of VMs on it.  But I need more storage, and I need more performance as well – 10 GbE networking – and my FalconStor cannot do that.  So I am going to use a Dell R710 that was recently replaced by a Supermicro server as my new shared storage. Since it was an ESXi host for a number of years it has 40 GB of RAM and multiple 1 GbE network ports, plus a recently added 10 GbE network port.

So this article is about making Nexenta work well as my shared storage.

BTW, why did I choose Nexenta?  It and FreeBSD were the only two choices I knew of in this sort of space – meaning something I could afford that supports a good amount of storage. Nexenta supports my gear, but it also has a kick-ass vSphere Web Client plug-in, and VVols support too.

Also, I know that my hardware – a Dell R710 – is not on the Nexenta HCL.  But it runs vSphere, and Windows, and Linux pretty darn well, so I am taking a chance on it.  And yes, I have paid for that: I have issues around warm restarts and sometimes cold starts. Outside of that it seems to work well.

Let's get started!

Things to get ready

  • The server you are going to use. In my case with lots of memory and network ports.
  • Storage array FQDN and IP for your management network, and an IP for the storage network.
  • Fusion FQDN and IP for the management network.
  • The bits can be found here – be aware you will need to log in and ask for the Community Edition, which gives you 10 TB and is very generous indeed. Make sure you get the ISO as well as the OVA for Fusion.
  • You can find more product type info here, including docs.
  • Burn the bits to a CD and you are ready to go.
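
In case it helps, here is roughly what getting the bits onto media looks like from a Mac. The file name below is just a placeholder for whatever you downloaded, and check with diskutil list before touching any disk device:

# burn the ISO to a blank CD/DVD (macOS)
hdiutil burn ~/Downloads/NexentaStor5.iso

# or, if your server will boot from USB, write it to a stick instead
# (replace diskN with the stick – confirm it first with: diskutil list)
diskutil unmountDisk /dev/diskN
sudo dd if=~/Downloads/NexentaStor5.iso of=/dev/rdiskN bs=1m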

My host had been an ESXi host, so it had the dual SD boot module that I was using.  I had to change the boot order so that the first hard drive was the boot device, with the CD ahead of it for the install.  In addition, I had some smaller disks in this host, so I changed things around so that I had 6x 1 TB disks.  You may need to tweak things after your first reboot – I did, as the hard drive I picked for boot was not the one I thought it should be.

Initial Setup

I do not have any screenshots of this part since I am installing to a physical server – I would have taken pictures if necessary, but it is pretty simple.

  1. Boot up to the keyboard choice.  The default was good for me as I am in Canada.
  2. There was a hostname prompt that I could not fill in before it continued on very quickly.
  3. Accept EULA
  4. Select Manual or Profile – since this is pretty much my first install, I could only choose Manual.
  5. Select the disks you wish to include in the root pool.  Make sure you don’t select any SD / USB choices. Important note – the disk(s) you choose here are for the system pool (rpool), and you will not get to use them in any other pools. If you select one disk it will work; if you select two it will work but they will be mirrored.  In my home lab I will only use one (and do backups).
  6. Now you can do the hostname.
  7. Take care of your networking.
    1. You need to select which interface is your management – or first – network.  My first four choices were all shown as down, which technically they were not, so I am not sure what that means.  My 10 GbE was shown as up, which was correct, so I selected what I thought was right. Update – it turns out that even though they were shown as down, they are working fine once the install is done and the server is restarted.
    2. Take care of your networking – IP, mask, DNS, etc.
  8. Select your optimization profile – which for me was default.
  9. Select your timezone.
  10. Make sure the proper date / time is seen.
  11. Configure your NTP – but you only get one choice.
    1. So for Canada enter 0.ca.pool.ntp.org which is a pool
    2. And for US use 1.us.pool.ntp.org which is also a pool
    3. For everywhere else select what you normally do!
  12. Define your admin password – and make it good enough you cannot remember it and need 1Password to track it for you.
  13. Define your replication password and again make it good enough you cannot remember it and need 1Password to protect it.  Likely I will not be doing replication.
  14. Review the configuration and press F2 to install.
  15. It does take a while!
  16. You can use F4 now to check out the log, or F8 to proceed with the reboot.

I have done this install many times, and at this point the server comes up and spends a long time on a line that starts Loading smf…. Twice I have done a cold boot after getting tired of waiting, and both times it came up fine. Sometimes I get odd messages after that line – when I don’t restart – and one of those messages is Killing contract 61. Sometimes I have restarted here, and sometimes it continues on all OK. BTW, I sometimes press the power button twice quickly; the OS should recognize that and deal with it, and if it doesn’t respond or restart, that is a clue. Which just happened to me, so a cold boot it is. This happens on 5.0.2 (and on most every boot) and I suspect it is somehow my hardware.

Now, if it comes up, that is good.  For me it did not, and I had to tweak the boot order – I went into the boot manager, selected Hard Disk C, and things started fine.  Next time I will need to see whether it starts OK and, if not, tweak things in the BIOS (which I did end up having to do). With that change it always boots – it may have issues, but it does boot.

License, and Updates

Now, I log on as admin via an SSH connection and this is what I see.  BTW I am using SecureCRT on the Mac to access my Nexenta host.

firstlogin

So it looks pretty good. I do see an important alert but will leave that for now. So first thing is to activate my license. BTW, the reason we are doing this via an SSH session and not at the console is so I can copy and paste the key.

You use the following command:

license activate <activation key>

You will need to space-bar through an EULA and hit y but after that you should be good.

BTW, on two of my installs this activation did not work and the word connect was in the error message.  It turns out I had not set the right default gateway.  This also means you have an expired license, and that means you cannot use Fusion. So use this command on the console:

route create default 192.168.9.100

You can confirm the gateway with:

route list default

or

route list

Now do the license activation again and it should work fine.

Once it is done you can use the following command to make sure.

license show

Here is what I see.

license

Another good command here is:

system status

Which again on mine shows the following.

status

Notice how fixing the license also fixed the critical alert?

Another useful command is shown below – it is quite detailed, shows everything, and can help you confirm your settings.

config list

I like to do the updates next. So make sure you are still logged in as admin on the console, or via SSH. BTW, before I do the updates I like to do a system status to confirm my version which is 5.0.1.5.

I also use syslog in my lab, so I configure syslog on my array using the following command:

config set system.loghost="IP_address"

You can confirm this works by using the config list command mentioned above, or by visiting your syslog host. Update – this is sort of problematic for me.  Investigating.
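
If your syslog host happens to be a typical Linux box running rsyslog, here is a minimal way to check whether anything is arriving from the array – this assumes the default UDP port 514 and a Debian/Ubuntu style log path, so adjust for your setup:

# on the syslog host: rsyslog needs its UDP listener enabled (in /etc/rsyslog.conf)
#   module(load="imudp")
#   input(type="imudp" port="514")

# watch the wire for syslog packets coming from the array
sudo tcpdump -n -i any udp port 514

# and watch the log itself (on RHEL/CentOS look at /var/log/messages instead)
tail -f /var/log/syslog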

We can use the following command to do the updates.

software upgrade

But there was nothing.  And I suggest you do not use that command alone, but rather follow the process in this article.

Installing Fusion

This is not installing VMware Fusion – which would be interesting as I am working on a Mac – but rather the management interface; without it you only get the REST API or the CLI.  I am told it cannot do license management, but it does everything else.

I should mention that Fusion provides the management UI, and just as important it makes it possible to manage more than one array.

So let's deploy the OVA that is Fusion.

After it starts up, access the console and configure the networking with the static IP that you had ready.  Be aware there are two networks, which was a little confusing for me. Also check for updates on the same screen – I had none outstanding during the initial setup. If you do have Fusion patches, be sure to check out my article on the update process.

Fusion initial config

Now connect to Fusion – we need to make sure it works.

https://fusion_fqdn:8457

We need to agree to an EULA, and then log in as:

admin / nexenta

But the first time it did not work.  After trying a few things, and complaining a bit, I decided to restart the VM, and after the restart I was able to log in.  The first time you log in you are forced to change the password.  Once that is done, log out and confirm the new password works.
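
If the login page will not come up at all, a quick check from any machine that can reach Fusion tells you whether the web service is even answering before you go as far as restarting the VM – fusion_fqdn here is whatever name or IP you gave it:

# -k skips validation of the self-signed certificate, -I asks for headers only
curl -kI https://fusion_fqdn:8457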

Once you are logged in the first time you will see something like below.

firstloggedin

Now let's connect my storage appliance to Fusion. We use the blue Register Appliance button – see it above in the top right corner – and a dialog pops up that we use to connect to the appliance.

register

You will see a screen next that has a few important things on it.

reg2

You can see above where you need to add your credentials, but also where you check to confirm you trust that cert.  I like the Download certificate option, as the certificate can be imported into my browser so I don’t see the browser security message each time – I confirmed that this works, and I used this information to get it into my browser.

As we continue we need to confirm some appliance details and your mail server info.

reg3

After we confirm, we will see the main display.

reg4

After a few minutes it changed to show a different health status.

reg5

It shows as healthy, and it shows roughly the storage I have in it so things are looking good.

We now need to do some additional configuration, so we need to use the gear icon (which Nexenta seems to call a COG) in the top right corner.

config1

The gear in red / white is the one we need.  The blue one is specific to the appliance.

config2

We need to select Settings.  We will see the list of settings as seen below.

config3

The first one I will look at is Auditing but no changes there for me.

config4

Very nice to see that you can trim logs as necessary but also decide how much to keep.  Next up is Date / Time and we do need to make changes there.

config5

You can see the wrong info here – the date is right, but the time and timezone are wrong.  I am going to use NTP here, and it seems that I end up with the correct time even with the PST timezone seen here. I have heard that you may have some additional flexibility in this area in the future.

config6

You can see good Canadian NTP choices above.  You could also use the following US NTP choices.

  • 1.us.pool.ntp.org
  • 2.us.pool.ntp.org
  • 3.us.pool.ntp.org

I am not sure how to fix the TZ, but we will see. In 1.0.1.7 you can change the TZ, although I believe you will need a restart of Fusion after that. Next is Email setup.

config7

The rest of the options don’t need any attention from us at this time.  Note that these are the Fusion email settings; the previous email settings were for the appliance.  There are no useful changes needed in the Logging, Monitoring, Network, or Session settings.  There is also no place to configure syslog, and no AD connection – even though for many customers AD config is important.

So initial config is done for now.

Network Config

I now need to configure my storage array so that it works on the 10 GbE storage network. This can be done easily in Fusion, so let's log in.

At the top we can point at Appliances List and select our array.

network1

We select my array and we see the appliance home page, which looks pretty good, so I will share it below.

network2

Some pretty nice widgets are seen, and they should be more interesting once this is an actual working storage appliance! But let's change to the Management tab, which is roughly in the middle of the top tab bar in the screenshot above.

newpoolrpoolreplace

We need to change to Networks as seen above in blue on the far right.

network4

Now we select our link – in my case the 10 GbE link – and see what happens. (BTW, the 1 GbE links seen above will turn green if you apply a network address to them, perhaps with a reboot.)

network5

We only need to do Add Address and maybe Assign VLAN, so the gear icon – which is Advanced Settings – is not necessary.

Once we select Add Address and add in our network info we see the following.  In my case the storage network is non-routed and on a different addressing scheme.

network6

I would now do a ping test between the hosts and Nexenta storage array.  Can the hosts ping each other on this new storage network, and can they ping Nexenta, and can Nexenta ping them?  The answer is yes, so we continue.
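
For the host side of that ping test you want vmkping so the traffic actually goes out the storage vmkernel port. The vmk number and addresses below are only examples, so substitute your own:

# on an ESXi host: ping the array out the storage vmkernel interface (vmk2 is an example)
vmkping -I vmk2 10.0.0.50

# if you run jumbo frames on the storage network, also test with a large, un-fragmented packet
vmkping -I vmk2 -d -s 8784 10.0.0.50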

Storage Configuration

Pool

I am only going to provision NFS for my hosts. It is easy and simple and I like that. So if I understand correctly, I need to create a pool, then a filesystem, and then share it out.

So let's start on our appliance Dashboard and select Management. As we see below, we have a pool already.

storage1

What we are looking at is the system pool – for things like OS, logs and similar things.

In the top right, we can see an odd blue button labeled Create Pool.  Let's use that to get started. We fill in a pool name and select the build method – manual.

newpoolupdate

I select the manual method of build from the Build method drop down.

storage3

In our next screen there is a lot to look at.

diskavail

BTW, the one square in the Nexenta Legacy SAS JBOD section is the SD cards I used to have VMware installed on.  I left them in, but now I think I should have pulled them out.

So we need to select a redundancy for our pool.

redun

You can see I have selected Non-Redundant – home lab and not much disk. Now we click on the green drive icons with the + sign.

nowdisk

We use the Next button to continue.

nocache

Home lab and limited disk so I will Skip the cache.  Too bad.

logrednope

I thought this had to do with logging, but it is actually about the ZFS Intent Log (ZIL), which can improve write performance.  More learning is needed before I know whether that is for me or not.

I also skip the Special device creation since I don’t even know what it is!

I also skip the spare as I have no spare disk.

createpool

I have no more disk, so Auto expand is unnecessary but I do select the Create Pool option.

newpool

So we have our pool now.  Next is to create a filesystem on it and share it out over NFS.

Filesystem

We start on the Management / Filesystems page.

newfs

We want to use the gear icon indicated by number 3.

Selecting it gives us two options

choice

Properties gives you the chance to change some of the things you made decisions about during creation of the pool – such as auto expand – and even some options we did not see.

We want the Add New Filesystem choice. Wow – there is a lot of choice if you expand Optional. I am interested in performance and maximum capacity, with maximum capacity being sort of key.

fs01

So I add a name and leave the rest blank.  This means a 128 KB record size (the ZFS default), and no minimum size or quota.  Since this is going to be used for NFS and hosting VMs, I do not need to do a bunch of the configuration I would if it were going to be SMB. After we use the Create button we see something different.

nfs01

We can see there is a filesystem on the pool pool01. When we use the gear on the left we can see a Share with NFS option.

nfs02

There is lots of good info on this page.  Did you note the extra tabs for NFS clients and the Advanced tab?  The first is for Linux or Windows customers using an NFS client, and Advanced helps with security.  I would normally use that to decide which hosts can access this NFS export, but I have a private network for NFS, so that gives me a small amount of security.

Note the Export path – in my case /pool01/nfs01 – as we will need it in VMware if this works. So let's Save.

nfsseen1

We can see a check-mark for NFS, which shows us how the pool / filesystem is being used.

So we took a bunch of disks and made a simple pool, then created a filesystem on it and shared it out using NFS.  We skipped the caching and write-enhancement options, and even a spare, since this is a home lab.  But all of that would be easy to do in a production system with a bunch of drives.
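
For what it is worth, what the GUI just did maps onto a handful of ZFS operations under the covers. This is not the NexentaStor CLI – just plain OpenZFS commands with made-up disk names, shown only to illustrate the pool, filesystem, share sequence:

# non-redundant pool striped across the six disks (home lab – no mirror or raidz)
zpool create pool01 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0

# a filesystem on that pool, default 128 KB recordsize
zfs create pool01/nfs01

# share the filesystem over NFS
zfs set sharenfs=on pool01/nfs01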

vSphere Configuration

Let's configure our hosts to access this new storage. We need to work in the vSphere Web Client. Access the cluster that you wish to anchor this storage to, then Actions \ Storage \ New Datastore, as seen below.

newds

Now you work with a wizard.

  • Type is NFS
  • NFS version is NFS 3.
  • The naming for me will look something like below.

dsnaming

I consider it important to use a datastore name that makes sense and lets me connect it to the appropriate resources.  In this case the name nfs01 ties back to pool01 and the host, which makes for good troubleshooting.

Next you are prompted for the host(s) and I select all of them.

Now, if this works, it works very fast indeed.  If it doesn’t, it is often access security on the NFS server, a wrong IP address for the NFS server in the datastore config, or a wrong export name.  But in my case it works.

complete
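
If you would rather skip the wizard and add the datastore from the command line on each host – or script it across the cluster – the esxcli equivalent looks roughly like this. The IP, export path, and datastore name are mine, so use your own:

# mount the NFS 3 export from the array's storage-network address as a datastore
esxcli storage nfs add --host=10.0.0.50 --share=/pool01/nfs01 --volume-name=nfs01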

Test Time

We have some new storage on our hosts.  Don’t we?  Where would we look if we don’t?

In no specific order, here are the things I would do.

  • I would log in on one of the storage network devices – the Nexenta or a host – and do a vmkping if on a host, or a ping if on the Nexenta, to see if the storage network devices can reach each other.
  • I would do a Storage \ Rescan Storage at the cluster level.
  • Confirm the NFS connection between VMware and Nexenta – see the commands after this list.
  • Confirm pool / filesystem and share in Nexenta.
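
A couple of host-side commands help with the last two checks. Nothing here is Nexenta-specific, just standard ESXi:

# list NFS mounts on this host – the datastore should show as both Accessible and Mounted
esxcli storage nfs list

# a broader view of every datastore the host can see, with its mount state
esxcli storage filesystem list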

BTW, you may not see any statistics on the dashboard for the things that you enable – like NFS in my case.  There is a bug – to be fixed shortly – where probes are not enabled when they are supposed to be.  If you have this issue you can enable them yourself.

probe

Change to the Administration page, then select Data Settings, and scroll down to Active Probes.  You can see what I have selected and you can select what you need and use the Save button.  Very soon after that you can expect to see things in the dashboard.  For me specifically this enabled NFS and CPU related statistical info.

What is next?

So with storage available on your vSphere hosts you are ready to start consuming it.  I am quite excited, as this brings approximately 4 TB of shared storage to my lab, which is in fact going to help my job out a lot!

Before you start to Storage vMotion virtual machines onto the array you should check for updates.  You can use this for updating Fusion, and this to update the array.

If you want to use VVols, and I sure do, you need to do iSCSI as NFS doesn’t support them yet.

I would also like to thank Michael Letschin for the help with this!

BTW, all of my Nexenta articles can be found using this tag. I hope to have additional articles soon – such as getting the Nexenta vCenter plugin working.

Updates:

  • 1/10/17 – I am having issues with syslog working as I expected it would.  Will update as I learn more.

Thanks for reading,

Michael

