Setting up my NUC vSphere host

I have decided to grow my lab a bit.  I think it will help with my work, and let me do more on my own. So I increased the memory in each of my Dell hosts to 40 GB, as that was cheap and easy, and of course memory was the resource I needed most and ran low on most often.  I have two Intel white box servers that I think I will leave at 24 GB and likely add to their own cluster. One cluster can be the protected side, and one can be the recovery side.  But I also decided to buy another server on Amazon Prime Day.  I wanted to do like many others and get a NUC server, so after some research I picked what I would buy.  Be aware that in Canada the Skull Canyon units are still around 1,200 CAD, so I did not get that.  But here is what I did get:

  • Intel NUC6i5SYH
  • Two Crucial 16 GB DDR4 SO-DIMMs
  • One Samsung 850 EVO 500 GB M.2 SATA III internal SSD
  • One StarTech USB32000SPT USB 3.0 dual-port Gigabit Ethernet adapter. Important note: see the updates below, but I cannot recommend these adapters; they consistently break after a host reboot, or after a certain amount of time running.
  • Three j5create single-port USB 3.0 Ethernet adapters (testing these now)
  • One Samsung 850 EVO 500 GB 2.5″ SATA III internal SSD

So for approximately 1,034 CAD I have a pretty good server: 32 GB RAM, 3x network ports, a very fast M.2 SSD, a quite fast 2.5″ SSD, and a decent processor in the form of a Core i5-6260U at 1.8 GHz.  Not as good as the Skull Canyon, but that box would have been 1,102 CAD alone.

The thing to be aware of is that there is only one native 1 GbE network port.  I am adding two more through USB 3.0, but it is something to keep in mind.  There is no easy 10 GbE option, and that sort of sucks.  I have almost completed my 10 GbE VSAN network, and this means this server will not be able to contribute or consume VSAN storage without going against my own best practice of not mixing 10 GbE and 1 GbE in a VSAN network.  All of one, or all of the other. In addition, the 32 GB of RAM I have in it is the max for now. I guess one day maybe two 32 GB modules will give me 64?


There is a foldout card in the box that was more than good enough to get the memory and disks installed.  So I assume you have that done.

Things to have handy / ready

  • You will need a monitor, power, and a keyboard.
  • Interesting to note: I used one of my monitors that is normally plugged into a Mac, connected the NUC via the same Mini DisplayPort I use with the Mac, and it worked fine.
  • I saw that very recently there was a BIOS upgrade so I downloaded it from here.  The place to watch for updates is here. The file I downloaded was SYSKLI35.86A.0045.BL and it came down as a zip.
  • You will need a USB stick to do the BIOS update, and another USB stick to run ESXi from.  I would rather use the SD slot, but from what I have heard it does not work for this.
  • vSphere 6.0 U2 ISO
  • Use this information to prepare your USB to run ESXi and actually install ESXi to it.
  • FQDN and IP info for your vSphere server.

BIOS upgrade

  • You need to extract your BIOS update file and copy the file to a USB.
  • Now start your NUC, and hit F2 to enter the BIOS.
  • You can use the space bar and arrow keys to move around until you select the Update option. It is indicated with the red arrow below in the screenshot.


  • Insert your USB
  • Use space and arrow keys to move around and select the BIOS file.
  • The NUC will reboot and upgrade the BIOS.
  • I went from version 42 to 45.
  • Remove the USB.

BIOS changes

  • Set the time correctly.
  • Disable network boot.

Memory Test

Many people I know do not do this, or some do it with tools in the BIOS.  But I believe in it, and I also think that testing outside the BIOS is the only way to test properly.  Every single server I put into a customer's production or lab environment had a memory test done, generally overnight, before I put vSphere on it.  I still do that in my own lab.  Normally it is easy: you just boot your host from the Memtest86 CD and let it run.

But on my NUC that will not work – even a very high quality external CD drive plugged into the powered USB port (yellow) did not boot.

So we need to make a bootable USB to run the test.  First download the build tool, Rufus, from here. Next, download the Memtest86 Windows image for creating bootable USB drives.  Expand that image file and use Rufus to write the memtest86-usb.img file; Rufus will burn it to a USB with no issue.  Then boot your NUC and let it run for a few hours.  I have several times returned memory for credit with only a screenshot of a failed Memtest86 run as proof.

I do not have a good way to take a Memtest image file and burn it to USB on a Mac.
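If you do want to try it from a Mac, the usual command-line approach is diskutil plus dd.  This is only a sketch – the /dev/disk2 device identifier is an example, so check `diskutil list` carefully first, because dd will overwrite whatever disk you point it at:

```shell
# Identify the USB stick by size and name -- do NOT guess the device number
diskutil list

# Unmount any mounted volumes on the stick, keeping the device node available
diskutil unmountDisk /dev/disk2

# Write the Memtest86 image; /dev/rdisk2 is the raw device, which is much faster
sudo dd if=memtest86-usb.img of=/dev/rdisk2 bs=1m

# Safely eject when the write completes
diskutil eject /dev/disk2
```
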

BTW, when you boot Memtest86 from CD it does two passes, and on my Dell hosts that takes maybe 2 – 3 hours.  When I boot my NUC from USB it does 4 passes, and so far it is at 17 hours and still going.  You can easily change from 4 passes to 1 at boot if you like.

Prepare your USB with ESXi

I use the information from here to do this. The reason we do this is that we are not able to use a CD in the NUC to install ESXi to the USB (or SD, if that ever works). So the process is to boot from one USB and install ESXi to a different USB, and all will be good.

So yes, use the first USB stick and the Rufus tool to make the installer USB.  Use that to boot the NUC and install to the second USB.  I tried but was not able to make this work with only one USB.  I used an ugly and big DataTraveler USB to boot and install from, and then installed to the low-profile SanDisk USB.

You will need your FQDN, IP and whatever your root password is going to be.
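Once the installer finishes and the host is up, the FQDN and IP details can be set from the DCUI, or from the ESXi Shell with esxcli.  A sketch with example values only – the hostname, addresses, netmask, and DNS server below are placeholders, so substitute your own:

```shell
# Set the host's fully qualified domain name (example value)
esxcli system hostname set --fqdn=nuc01.lab.local

# Give the management VMkernel port a static IPv4 address and netmask
esxcli network ip interface ipv4 set -i vmk0 -t static -I 192.168.1.50 -N 255.255.255.0

# Set the default gateway
esxcli network ip route ipv4 add -n default -g 192.168.1.1

# Add a DNS server
esxcli network ip dns server add -s 192.168.1.10
```
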


And now we have vSphere running. And in a tiny, quiet, powerful little box.

BTW, it looks like you can boot with one USB stick and install to that same stick if you use the rear ports rather than the front.  But what I did was use two USB sticks, and that worked fine.


Here are some useful links, or maybe not useful but interesting.


You should now have a new ESXi host.  Make sure you add it to a cluster that has similar resources in it. I should mention I did nothing that was not mentioned in this article.  So I did not disable any ports, nor did I build a special vSphere image.


  • 11/21/16 – I did this article with vSphere 6.0 and all was good.  But now I have updated the host to 6.5, and I needed to use a different USB driver.  Full info and the driver found here.
  • 8/6/16 – since the 20th, my NUC had run fine.  But the issue came back yesterday: with no reboot, two of my ports went down.  So this morning I replaced the two dual-port dongles with three single-port dongles.  I am using this j5create dongle. BTW, I was not able to simply unplug the dual-port dongles, plug in the single-port dongles, and have things work.  The port order was scrambled.  So I had to move the dongles around and do vmkping tests.  It did not take long to work things out.
  • 7/20/16 – I figured out the issue with the dual-port USB network dongles.  I was consuming USB0, USB1, and USB3 and not using USB2.  That was just the way things worked out. It all worked fine until I rebooted the host.  Then only one or two of the ports would work, and that was completely repeatable. Of course the cleanup was hard, as you had to delete the VMkernel ports, remove the VIB, reboot the host, remove the virtual switch constructs, then reboot and try again. The solution is to consume USB0, USB1, and USB2; that way you can restart your host and not lose your network config. I think that single-port adapters might be a better choice.
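To see which USB NICs actually came back after a reboot and confirm each port landed on the right network, a few standard ESXi Shell commands are enough.  The vmk1 interface and target IP below are examples:

```shell
# List all NICs -- the USB adapters show up as vusb0, vusb1, vusb2, ...
esxcli network nic list

# Show each vSwitch and which vusb uplinks it currently has
esxcfg-vswitch -l

# Verify a specific VMkernel port can reach its peer (example interface and IP)
vmkping -I vmk1 192.168.10.1
```
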
  • 7/20/16 – I had to remove the USB VIB before I could remove the virtual switches. So I restarted and installed everything again, confirmed all three ports in use worked fine, then restarted.  Now only 2 ports work.  Actually better than last time I tested, when only 1 worked. Cleaning this up means deleting all I can, restarting, uninstalling the VIB, then deleting what remains – so frustrating. I have ordered three of the j5create adapters that are supposedly tested.  I will update this article when I know more – such as whether they work!
  • 7/19/16 – Important: I can report that I ran 3 or 4 VMs over iSCSI / NFS with no issues, and did a few vMotions with no problems, all over 1 GbE USB networks. Then I had to insert my SSD, which actually took two tries to get right, and afterward I noticed that 2 of the 3 in-use ports were no longer working.  This problem has resisted my attempts to solve it rather well.  I am pulling this server out of production use and will start over with it.  I should mention that only 2 of the 3 ports stopped working; the one still working carries my iSCSI connection.  BTW, when I tried removing the switch I got stuck: anything I do results in an error whose unchanging text is "specified parameter is missing".  Host and vCenter restarts do not help, so my idea of removing the switch and VMkernel ports and redoing them did not work.  Removing the VMkernel ports for the dead ports did work, but nothing after that.  Will update when I figure it out.
  • 7/19/16 – someone asked why I bought this NUC with two very nice SSDs.  I thought I might use it for VSAN, so I have a very nice SSD for cache and a nice SSD for capacity.  But I may not do that, as I want to use 10 GbE for VSAN.
  • 7/18/16 – I tried out a Mac Mini DisplayPort to DVI adapter and did not get a picture.  Next time I will try Mini DisplayPort to VGA and see how that works. In the rack where the NUC lives I have only DVI and VGA for display support.
  • 7/18/16 – I used William's excellent instructions to make the USB networking work. I installed the .VIB with no issue on the NUC host, and the network from the two dual-port adapters was easy to consume.  I do not believe these USB networking adapters will be as good as the built-in NIC.  William does point out some issues, but they should do just fine for me.
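For reference, installing a community USB NIC driver VIB from the ESXi Shell looks roughly like this.  The filename here is illustrative only – use whatever file William's post provides for your ESXi build:

```shell
# Copy the driver VIB to the host (e.g. via scp) before running this;
# the path and filename below are illustrative, not the real download name
esxcli software vib install -v /tmp/vghetto-ax88179-esxi60u2.vib -f

# The vusb NICs will not appear until the host is rebooted
reboot
```
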




