Well, first off, license-wise it looks like you're good for what you want to accomplish:
License Info:
VMware vSphere Essentials Kits
VMware vSphere Essentials Kits are all-in-one solutions for small
environments (up to three hosts with two CPUs each) available in
two editions— Essentials and Essentials Plus (see Figure 3). Both
editions include vSphere processor licenses and vCenter Server
for Essentials for an environment of up to three hosts (up to 2
CPUs each). Scalability limits for the Essentials Kits are product-
enforced and cannot be extended other than by upgrading the
whole kit to an Acceleration Kit (see Paid Edition Upgrades
section below). vSphere Essentials and Essentials Plus Kits are
self-contained solutions and may not be decoupled, or combined
with other vSphere editions
The only thing missing that you might want is Storage vMotion. Let's cover this really quickly. When you have shared storage behind your ESXi hosts, you can vMotion a VM from one host to another. This moves the VM from ESXi01 to ESXi02 live, without the users knowing. It essentially tells the other host to run the VM from the shared storage, copies everything going on in memory over to the new host, and cuts over. It gives you the ability to, say, move all your VMs to another host before doing maintenance on it, do better load balancing, etc.
Storage vMotion does the same thing but with the actual storage. Say you have a LUN/datastore that is almost full and you want to move a VM off it; without Storage vMotion you would have to shut down the VM to move it. The move is fairly quick but would require outage time. With Storage vMotion you can just right-click the VM, choose Migrate, and tell it which new LUN/datastore to move to. This is all done live, with no downtime for your users. Whether that is worth the extra licensing cost is up to you and your company.
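To get a feel for why the live memory copy is usually quick, here is a rough back-of-envelope sketch. All the numbers (VM RAM size, link speed, the 80% efficiency factor) are hypothetical examples, not VMware specifications:

```python
# Back-of-envelope estimate of the vMotion memory-copy phase.
# The efficiency factor is an assumed fudge for protocol overhead.

def vmotion_copy_seconds(vm_ram_gib, link_gbps, efficiency=0.8):
    """Rough time to copy a VM's RAM across the vMotion network."""
    ram_bits = vm_ram_gib * 1024**3 * 8          # RAM in bits
    usable_bps = link_gbps * 10**9 * efficiency  # usable bits/second
    return ram_bits / usable_bps

# A hypothetical 8 GiB VM over a single 1 Gb/s link:
print(round(vmotion_copy_seconds(8, 1)))  # ~86 seconds
```

In practice the VM keeps dirtying memory pages during the copy, so ESXi does multiple copy passes before the final cutover, but the estimate shows the order of magnitude and why a faster vMotion network helps.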
Now, going back to whether or not to use your NAS/SAN for ESXi: that is up to you, however there are some things to consider.
1.) Is your NAS on VMware's hardware compatibility list? If it's not, you might be able to get away with it, but I wouldn't recommend it.
2.) Does your NAS have enough IOPS to supply your VMs? If the disks are too slow or can't keep up, your VMs will feel laggy. Since you're putting a small load on the NAS box this probably won't be the case, but it's something to consider or look into.
3.) Currently there is no way to multipath NFS in ESXi. It's apparently coming in a future revision, but it isn't here yet. This means you have to set up your connections to the NAS a little differently than you would for, say, iSCSI. It also means you're limited to 1 Gb of throughput to your NAS, so you have to consider whether that is enough bandwidth to run all the VMs on your hosts and supply their IO requirements. If you're keeping your environment small and there isn't a lot of IO, it could be. I would feel better with 2 Gb+, but there is no real way to do that with NFS currently.
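Points 2 and 3 boil down to a capacity estimate you can do on a napkin. Here is a rough sketch; the per-disk IOPS figure and RAID write penalties are common rules of thumb, and the workload mix is a hypothetical example, so plug in your own numbers:

```python
# Rough sanity check: can the NAS supply the VMs' IOPS and bandwidth?
# Typical rule-of-thumb write penalties per RAID level (backend IOs per write).
RAID_WRITE_PENALTY = {0: 1, 1: 2, 5: 4, 6: 6, 10: 2}

def usable_iops(disks, iops_per_disk, raid_level, write_fraction):
    """Effective IOPS the array delivers for a given read/write mix.

    Writes cost `penalty` backend IOs each; reads cost one.
    """
    raw = disks * iops_per_disk
    penalty = RAID_WRITE_PENALTY[raid_level]
    return raw / ((1 - write_fraction) + write_fraction * penalty)

# Hypothetical: 8 x 7200 RPM disks (~75 IOPS each) in RAID 5, 30% writes:
print(int(usable_iops(8, 75, 5, 0.30)))  # ~315 effective IOPS

# Throughput ceiling of one 1 Gb/s NFS link, with ~20% assumed overhead:
link_mb_s = 1 * 1000 / 8 * 0.8
print(link_mb_s)  # ~100 MB/s usable
```

If the sum of your VMs' steady-state IOPS and MB/s comes in well under those two numbers, the single-link NAS setup is probably fine for a small environment.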
Now, the big benefit of using shared storage is that you get HA. Without shared storage you can't do HA; you get everything else on the list above, but VMs can't start up on another host when one fails. So if you set up your NAS as shared storage for your ESXi hosts and ESXi01 dies (motherboard pops, a space rock smashes through the roof and turfs just that host, whatever), all the VMs running on ESXi01 will have an unexpected shutdown, but within minutes they will be starting up on ESXi02, because ESXi02 noticed its partner in the cluster went down.
With that said, it also looks like you have 8 disk drives in each ESXi-host-to-be? Is that true? If so, you have made an investment there and may want to leverage that disk if possible. Another thing you could do is run a hybrid solution where all critical systems run on a NAS datastore while all non-critical systems run on the local storage in each box. This is a decision you will have to make according to budget, etc.
When it comes to vCenter, you can install it as a VM; it doesn't require another physical server to run. You can even install the vCenter Appliance, which is a pre-packaged VM with vCenter installed and ready to go, running on a small Linux build. You just deploy the OVF, configure it, and you're off to the races with vCenter.
Another thing to consider: if you use the NAS for VMware, then your backup repository has to find a new home. This could mean buying new hardware, etc.
As far as virtualizing systems goes, VMware can virtualize pretty well anything that doesn't require a unique device or card. For instance, if your backup server has a tape drive connected to it, you won't be able to virtualize it, as you can't pass the tape drive through to a VM or install it in one of the ESXi hosts. You might find people who have done some hackery and gotten a setup like this to work, but it's not best practice. So systems like your backup server, camera system, etc. typically stay physical because they have a hardware requirement. For instance, many camera systems I see are proprietary and need special PCI cards to link in, so we don't virtualize those.
Also, a good source of self-paced training material is:
http://www.pluralsight.com/training/trainsignal
They have hours and hours of recorded training courses for VMware. I went through the DCA exam prep one by Jason Nash before sitting my VCAP-DCA and found it very insightful. That is more of an advanced course, but they have many entry-level ones, including some to prep you for your VCP if you want to write it. They have a free trial you can use to see how it goes.
I hope this helps point you in the right direction.