I have been racking my brain over this for the better part of two weeks. I seem to be getting really poor throughput between my Windows 2008 R2 VMs. I have read several articles mentioning various tweaks to the VMXNET3 driver and the TCP stack (disabling autotuning, TSO, RSS, and so on). I normally run a customized 2008 R2 image, but I decided some testing was in order, so I grabbed an eval copy of 2008 R2 from M$ and deployed some machines, both physical and virtual. Let me run down my environment.
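For context, the TCP stack tweaks those articles describe are along these lines (a sketch of what I applied; TSO itself gets turned off in the VMXNET3 adapter's advanced properties, e.g. Large Send Offload V2, rather than through netsh):

netsh int tcp set global autotuninglevel=disabled
netsh int tcp set global chimney=disabled
netsh int tcp set global rss=disabled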
Physical Boxes:
Cisco UCS B200 x2 (both in the same chassis)
Vanilla install of 2008 R2
2x 10Gb Cisco VIC interfaces
2x 300GB 15K SAS drives in RAID 1
Throughput tests
Test file: 10GB junk file created with fsutil.
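For anyone wanting to reproduce this, the creation command looks roughly like the following (the path and filename here are placeholders, not the ones I actually used):

fsutil file createnew C:\temp\10GB.junk 10737418240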
Between the 2 physical B200s
Iperf test
Ran several tests with parallel streams; saw a consistent ~10Gb summarized throughput.
iperf -c xx.xx.xx.xx -P 10 -i 1 -p 5001 -w 9000.0K -f m -t 10
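On the receiving box I run the matching listener, something like this (assuming the window size mirrors the client side):

iperf -s -p 5001 -w 9000.0K -f m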
(screenshot: iperf results between the physical hosts)
SMBv2 transfer
(screenshot: SMBv2 transfer between the physical hosts)
Virtual Machines:
ESXi 5.5
2 vCPU
4GB Ram
Updated VMXNET3 driver
Same 10GB test file created with fsutil
Between 2 VMs.
Iperf test (same command as above)
(screenshots: transfer results between the VMs)
There is a huge difference in transfer rates. The VMXNET3 adapter has been "optimized" based on what VMware claims are the best settings for Windows 2008 R2.
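For what it's worth, the guest-side TCP state can be double-checked with the following, which lists the current autotuning, chimney offload, and RSS settings:

netsh int tcp show global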
On the physical side, both servers are deployed from the same service profile, so the settings are identical.
Really beating my head against the wall on this one. Can anyone maybe provide a little insight or a new direction to go down?