After upgrading from ESXi 5.0U1 to 5.1 (and also after a fresh, vanilla install of 5.1), I've noticed a significant slowdown on Windows VMs (I didn't test Linux) running on NFS datastores, using either a Standard vSwitch or a DVS. When I boot a VM, I get about 5MB/s of throughput over NFS, which causes Windows 2008 to take about 4 minutes to present the login screen. Disk I/O in the running VM remains slow afterwards as well.
What makes this interesting is that when I perform a Storage vMotion or a cold migration on this 5.1 VM (or any NFS-based VM), I get ~100MB/sec throughput for those operations on the same NFS datastores. If I migrate the 5.1 VM to local storage, I get the expected VM disk performance, which rules out a guest OS or driver issue for me. So it appears that a running VM's disk I/O over NFS is slow, but when ESXi itself performs a host-based NFS disk operation, the I/O is unaffected and performs well.
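For anyone who wants to reproduce the comparison, this is roughly how I checked the host-side NFS path from the ESXi shell; the datastore, VM, and file names below are just placeholders, not my actual layout:

    # Host-level sequential write straight to the NFS datastore
    time dd if=/dev/zero of=/vmfs/volumes/NFS-DS01/testfile bs=1M count=1024

    # Host-level VMDK clone exercises the same path Storage vMotion / cold migration uses
    vmkfstools -i /vmfs/volumes/NFS-DS01/VM01/VM01.vmdk /vmfs/volumes/NFS-DS01/VM01-copy.vmdk

    # Watch live throughput while a VM boots: press 'v' for VM disk or 'n' for network
    esxtop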
Lastly, if I re-install ESXi 5.0U1, all NFS performance (host and VM) works as expected (~100MB/sec), just as it did prior to upgrading to 5.1.
I'm not sure if anyone has seen a similar issue, but I thought I would post my findings in case someone else runs into this.
Notes:
- No jumbo frames; Active/Passive 1Gbit NFS uplinks on vmk0.
- VMware Tools upgraded to 5.1.
- The 1Gbit NFS SAN works fine with other non-5.1 hosts.
- ESXi boots off a 16GB USB2 thumbdrive.
- Intel® Xeon® Processor E3-1200 v2.
- Synology DS1812+ NAS w/ DSM 4.1-2636 GA.
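In case it helps with comparing setups, these are the sort of commands I used on the 5.1 host to sanity-check the NFS mounts and vmkernel networking (the NAS IP is a placeholder):

    # Confirm the NFS datastores are mounted and accessible
    esxcli storage nfs list

    # Confirm the exact ESXi build
    esxcli system version get

    # Check the vmkernel interfaces and MTU (no jumbo frames, so 1500 expected)
    esxcli network ip interface list

    # Basic connectivity/MTU check from the vmkernel port to the NAS
    vmkping -d -s 1472 <NAS-IP>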