Hi,
I have a VMware ESXi 5.1 Update 2 infrastructure. It consists of five hosts in a cluster.
I have attached a Fibre Channel storage array to the hosts and presented a LUN to them.
I have created two Windows 2012 R2 VMs and placed them on separate hosts.
I have attached the LUN to the virtual machines as a shared physical-mode raw device mapping (RDM). I have created an MSCS file server cluster with those two Windows 2012 R2 servers and the shared storage.
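For reference, this is roughly the shape of the .vmx configuration that VMware's MSCS-on-RDM guidance calls for in a cluster-across-boxes setup: the shared RDM sits on its own SCSI controller (LSI Logic SAS for 2012 R2) with bus sharing set to physical. This is only a sketch; the controller slot and the RDM pointer path below are placeholders, not values taken from your environment.

```
scsi1.present = "TRUE"
scsi1.virtualDev = "lsisas1068"
scsi1.sharedBus = "physical"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "/vmfs/volumes/datastore1/cluster/shared-rdm.vmdk"
scsi1:0.mode = "independent-persistent"
```

It may be worth comparing these lines against your VMs' actual .vmx files, since a shared disk accidentally left on the same controller as the OS disk, or with the wrong bus-sharing mode, can behave very differently under load.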
I have installed IOmeter on one of the virtual machines.
I also have two Windows 2003 servers. They are physical machines with a file server cluster installed on them. Another Fibre Channel storage array is connected to them to provide the shared space needed for the cluster.
I have installed IOmeter on one of the Windows 2003 servers.
The array used in the ESXi scenario should be, at a minimum, twice as fast as the one used in the physical Windows 2003 scenario.
I ran the standard tests on both Windows 2003 and Windows 2012: 4 KB, 100% read, 100% sequential; 4 KB, 75% read, 100% sequential; 4 KB, 50% read, 100% sequential; 4 KB, 25% read, 100% sequential; 4 KB, 0% read, 100% sequential.
The results are surprising, at least to me: all the measurements (I/Os per second, total MB per second, average I/O response time, maximum I/O response time, and % CPU utilization) are between two and four times better in the Windows 2003 scenario. For example, in any of the tests, if the Windows 2012 machine gets 1200 I/Os per second, the Windows 2003 machine gets 4000.
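To put those IOPS figures in context, here is a quick sanity-check converting IOPS at the 4 KB test block size into MiB/s (the 1200 and 4000 values are just the example numbers above):

```python
# Convert an IOmeter IOPS figure at a fixed block size into MiB/s.
BLOCK_BYTES = 4096  # 4 KB test block size used in all runs

def iops_to_mibps(iops: float, block_bytes: int = BLOCK_BYTES) -> float:
    """Throughput in MiB/s implied by a given IOPS figure."""
    return iops * block_bytes / (1024 * 1024)

# Example figures from the post:
print(iops_to_mibps(1200))  # 4.6875 MiB/s on the 2012 R2 VM
print(iops_to_mibps(4000))  # 15.625 MiB/s on the 2003 physical box
```

If the "Total MB per second" column in your IOmeter results does not line up with this arithmetic for each run, that would point at a reporting issue rather than a real throughput gap.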
Since the array attached to the ESXi hosts is supposed to deliver twice the performance of the one attached to the physical Windows 2003 machines, I guess something is misconfigured or not optimized in the ESXi host or virtual machine configuration. But I can't imagine what it could be.
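A couple of host-side checks that might narrow it down, assuming you have SSH access to the ESXi hosts (these are the standard esxcli/esxtop tools, nothing specific to this setup):

```
# 1. Check which multipathing policy (PSP) the RDM LUN is using:
esxcli storage nmp device list

# 2. Watch live latency in esxtop: press 'u' for the disk-device view and
#    compare DAVG/cmd (array-side latency) with KAVG/cmd (hypervisor overhead).
esxtop
```

A high DAVG/cmd would suggest the path or array side, while a high KAVG/cmd would point back at the host configuration.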
I need help improving the disk performance of these VMs.