The main consequence will be poor performance, due to the disk load generated by swapping and the scheduling delays caused by the 4-vCPU-per-guest allocation.
But certainly try it. I'd suggest measuring, for example, file-transfer throughput from a physical machine to one of the VMs, say by copying up a DVD ISO or similar, with just one VM running. Then start all the others and, once things settle down, repeat the test.
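A quick local variant of that test: measure sequential disk throughput inside the guest rather than over the network. This is a hypothetical sketch (filenames and sizes are placeholders); run it once with only one VM powered on, then again with all four running, and compare the MB/s figures `dd` reports.

```shell
# Write a ~256 MB test file, forcing data to disk so the cache doesn't
# hide the real write speed, then read it back and clean up.
dd if=/dev/zero of=testfile bs=1M count=256 conv=fdatasync   # write test
dd if=testfile of=/dev/null bs=1M                            # read test
rm testfile
```

If the host is swapping guest memory to disk, the second run's numbers will typically drop noticeably, since the VMs' swap traffic competes with the test for the same spindles.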
Overcommitting would make things very slow; you might be surprised that a more modest configuration, such as 1 vCPU and 2 GB per VM, might even run better.
If the machines are powered on, they will be using CPU and will have to be scheduled. You could see some scheduling delays with 4-vCPU machines. I would simply start with a lower configuration and monitor performance in both the guest and on the ESX host.
I have an ESXi 4.0 white box at home and it has a six-core CPU and 8 GB of RAM.
The server is mostly idle but sometimes I am transcoding video and want the resources available to the virtual machine when it needs them.
If I have four virtual machines each having 4 CPUs and 4 GB of RAM assigned, what are the consequences? Usually only one of them will be actively using the CPUs but all of the machines will be powered on.
In a nutshell, you are going to start swapping memory, and read/write speed from disk is not as fast as from RAM. Your CPU is going to become very slow due to vCPU-to-pCPU (core) scheduling. You'll be overwhelming the host server's physical resources.
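A back-of-the-envelope check using the numbers from the question shows why, sketched here in Python (the ratios are simplifications; ESXi's scheduler and memory management are more sophisticated than a flat ratio suggests):

```python
# Setup from the question: 4 VMs, each with 4 vCPUs and 4 GB RAM,
# on a host with 6 physical cores and 8 GB RAM.
HOST_CORES = 6
HOST_RAM_GB = 8
VMS = 4
VCPUS_PER_VM = 4
RAM_PER_VM_GB = 4

total_vcpus = VMS * VCPUS_PER_VM      # 16 vCPUs competing for 6 cores
total_vm_ram_gb = VMS * RAM_PER_VM_GB # 16 GB assigned against 8 GB physical

cpu_ratio = total_vcpus / HOST_CORES      # ~2.7 vCPUs per core
ram_ratio = total_vm_ram_gb / HOST_RAM_GB # 2x RAM overcommit

print(f"vCPU:core ratio: {cpu_ratio:.1f}:1")
print(f"RAM overcommit:  {ram_ratio:.1f}:1")
```

With 2x the physical RAM assigned, the host is guaranteed to swap if all four guests actually use their allocation, and each 4-vCPU guest has to wait for scheduling slots on the 6 cores.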
You can have all four VMs powered on; if three are idle, you may not be hitting overcommitment during that time. Monitor the performance to see what your CPU and memory usage are during your video transcoding sessions. If you are hitting overcommitment, power down the VMs you are not using.
I also agree with the suggestions to always start with 1 vCPU and add to it IF needed.