It's unclear exactly what your situation is regarding the virtualization platform (Hyper-V? VMware?). You mention Virtual Server, but that product was replaced by Hyper-V about 3.5 years ago, so I'm assuming that's not it.
Regarding the rest of it, you need to understand that a virtualized infrastructure does not behave the same way as a physical one. Your virtualization administrator will typically know better than you how to maximize the performance of your VMs, and can assist with performance tuning to ensure your requirements are actually met.
The biggest glaring problem I see so far is the suggestion to use a locally installed disk mapped to the VM rather than storing data on the SAN. That configuration precludes you from leveraging the high-availability features of a virtualized infrastructure (live migration, failover, and so on). On top of that, it may also perform worse. A SAN typically consists of dozens (or even hundreds) of high-performance disks, as opposed to the one (or handful) of disks available with direct-attached storage. Spreading the data across more spindles can dramatically improve both throughput and latency. This obviously varies with your SAN configuration, but I think you'd be hard-pressed to match the SAN's performance with direct-attached storage.
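If you want actual numbers instead of assumptions, measure both volumes. Here's a minimal sketch, assuming Python is available on the guest and that the SAN-backed and direct-attached volumes are mounted at D:\ and E:\ (both paths are hypothetical placeholders). It writes a scratch file and times random 8 KB reads against each:

```python
import os
import random
import statistics
import time

def measure_random_reads(path, file_size=256 * 1024 * 1024,
                         block=8 * 1024, iterations=2000):
    """Write a scratch file on the target volume, then time random
    8 KB reads against it. Returns (mean_ms, worst_ms)."""
    testfile = os.path.join(path, "iotest.dat")
    chunk = 1024 * 1024
    with open(testfile, "wb") as f:
        for _ in range(file_size // chunk):
            f.write(os.urandom(chunk))            # incompressible data
    latencies = []
    with open(testfile, "rb", buffering=0) as f:  # skip Python's own buffering
        for _ in range(iterations):
            f.seek(random.randrange(0, file_size - block))
            start = time.perf_counter()
            f.read(block)
            latencies.append((time.perf_counter() - start) * 1000)
    os.remove(testfile)
    return statistics.mean(latencies), max(latencies)

# Hypothetical mount points -- adjust to wherever the SAN-backed and
# direct-attached volumes are actually presented to your VM.
for label, path in [("SAN-backed", "D:\\"), ("direct-attached", "E:\\")]:
    mean_ms, worst_ms = measure_random_reads(path)
    print(f"{label}: mean {mean_ms:.2f} ms, worst {worst_ms:.2f} ms per 8 KB read")
```

Take the results with a grain of salt: the OS cache will flatter a freshly written 256 MB file, so use a file size well beyond the guest's RAM (or a dedicated tool like Iometer) if you want a serious benchmark. The point is simply to compare the two volumes under the same workload rather than assuming local disk wins.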
The second glaring error is resource allocation. If your application requires 8GB of RAM, why insist on 16GB? By allocating more resources than the application can use, you're wasting resources that could be allocated to other VMs (unless a feature like Dynamic Memory is in play). Similarly, you assume that with 4 quad-core CPUs each of the 4 VMs gets 4 dedicated CPU cores. That's not the case: the hypervisor reserves some compute cycles for itself, and unless your virtualization administrators are doing a lot of seemingly unnecessary babysitting of the VMs, there are no CPU affinity rules in place. Your program may execute on one physical CPU for one instruction and on another physical CPU for the next, depending on the overall workload of the server. With features like Core Parking, you may find that under a low workload half of your CPU cores are idled in a low-power state while all of your VMs execute on the other half. There's a rough back-of-the-envelope illustration below.
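To make the waste concrete, here's a toy capacity model. Every figure in it (host RAM, hypervisor reserve) is an illustrative assumption, not a measurement from your environment:

```python
# Toy capacity model -- all figures are illustrative assumptions.
HOST_RAM_GB = 64          # assumed physical RAM in the host
HOST_CORES = 16           # 4 quad-core CPUs
HYPERVISOR_RAM_GB = 2     # rough reserve for the hypervisor itself

def vms_that_fit(ram_per_vm_gb):
    """How many identically sized VMs fit in the remaining host RAM."""
    return (HOST_RAM_GB - HYPERVISOR_RAM_GB) // ram_per_vm_gb

needed, requested = 8, 16
print(f"VMs that fit at the {needed} GB the app needs:      {vms_that_fit(needed)}")
print(f"VMs that fit at the {requested} GB you insisted on: {vms_that_fit(requested)}")

# CPU side: 4 VMs x 4 vCPUs = 16 vCPUs scheduled onto 16 cores,
# minus whatever the hypervisor reserves for itself -- so under load
# no VM owns 4 dedicated cores; the scheduler time-slices them.
vcpus = 4 * 4
print(f"vCPU-to-core ratio: {vcpus}:{HOST_CORES} before hypervisor overhead")
```

With these assumed numbers, doubling the allocation from 8GB to 16GB cuts the number of VMs the host can carry from 7 to 3. The exact figures don't matter; the point is that every gigabyte you demand beyond what the application uses is a gigabyte the virtualization team can't give to anyone else.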
Generally speaking, I recommend that you simply provide the ACTUAL requirements to your virtualization staff and let them worry about the details of how disk/CPU/memory are allocated to meet them. In my experience, when application owners who don't understand the virtualization platform get involved in those decisions, they do things they THINK will help performance that ACTUALLY end up hurting it. Then they complain that virtualized solutions can't meet their needs.
CompTIA A+, Network+, Server+, Security+
MCITP: Virtualization Administrator 2008 R2
Certified Quest vWorkspace Administrator