I am looking at the performance of a heavily used guest VM through Perfmon. It is Windows 2008 R2 with 4 vCPUs; RAM is not an issue.
The standard % Processor Time from Windows is easy to interpret.
My reading around VM Processor Time was that it displays the amount of CPU utilised as seen by ESXi. By comparing the two, I was hoping to work out the best way to increase performance.
Can anyone help me interpret a VM Processor Time reading of 500%?
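(In case it matters which counter I mean: it is the one VMware Tools registers in the guest, which I believe can be sampled with something like typeperf "\VM Processor(*)\% Processor Time" -sc 5. The (*) instance and the exact path are from memory, so apologies if the name differs slightly.)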
First, please forgive my English; I'm French. I know this is a long post (and maybe difficult to read), but I'll do my best to keep it readable without giving you a headache.
I'm here today because I have a problem I haven't managed to solve myself, in spite of a lot of reading (forums, blogs, whitepapers, VMware documentation...).
Here's the problem: after all this reading, I thought I had understood how the ESXi scheduler works, and particularly how it handles NUMA and vNUMA. I was about to explain to my boss that on our VMware infrastructure (vSphere 5.0), built on 6-core-per-socket processors, our 8-vCPU VMs were not sized the right way, because ESXi will not use vNUMA for VMs with fewer than 9 vCPUs and such VMs are therefore handled by standard NUMA scheduling. This means that for wide VMs like these, ESXi has to span the 8 vCPUs over multiple NUMA clients with the RAM interleaved between the physical NUMA nodes. This should normally produce a local memory rate (N%L) of around 50%, with bad memory latencies for those big production VMs.
(Or at least that is what I understood after many hours of reading. If I'm wrong, please explain what I didn't get right; that's why I'm here.)
To back up my recommendation, I SSH-ed to the ESXi host to check the remote memory rate via esxtop, but... nope! ESXi surprised me once again by showing my big VM with 100% local memory. Not 99, not 98, but 100%! You can have a look for yourself in the attached file.
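(For anyone who wants to reproduce the check: run esxtop over SSH, press m for the memory view, then f to toggle the field list and enable the NUMA statistics group, which adds the NHN, NMIG, NRMEM, NLMEM and N%L columns visible in the screenshot.)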
The line framed in green is the interesting one. We can see that the VM is indeed spanned across 2 pNodes, and that the RAM is also spread across those 2 nodes. (I'm also surprised by this 50/50 distribution, as I thought ESXi would first fill one NUMA client before creating a second one; I was expecting something like 75/25 instead of 50/50, but I did not find any clear paragraph about this in any documentation...)
This result ((un)lucky me? coincidence?), which is exactly the same for all three of my 8-vCPU VMs, seems especially remarkable given that a majority of the other VMs on this ESXi host (which fit in a single pNode) don't reach 100% (red frame).
PowerShell helped me confirm that no advanced setting is set on these VMs to activate vNUMA and/or force vCPU execution across different pNodes with parameters like numa.vcpu.maxPerMachineNode = 4. I'm now trying to explain these results with the theory, but nothing I have read seems to fit this precise scenario...
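(For reference, the PowerShell check was along these lines, using PowerCLI and a placeholder VM name:
Get-VM "MyBigVM" | Get-AdvancedSetting -Name "numa*" | Select-Object Name, Value
which returns nothing for these VMs, i.e. no numa.vcpu.* overrides are set.)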
At first, I thought that because these VMs have not migrated to/from another ESXi host in a long time (DRS tries not to move these big VMs), the scheduler had migrated the memory pages one by one between the pNodes to reach this 100% memory locality. But if that were the case, wouldn't it do the same for every other VM as well? The VM with an N%L of 70%, for example, has not moved to/from another host since last November at least!
So, to summarize, my question is: why is this VM (ill-suited to the physical NUMA architecture of its host, and which should be the most affected by remote memory problems) ultimately one of the only VMs showing a wonderful N%L of 100%, while the vast majority of other VMs on this host (which easily fit on a single pNode) can show up to 30% remote memory, on an ESXi host that is not RAM overcommitted?
Something escapes me, but what?
Many thanks in advance to the expert who can give me the answer, because I'm really stuck on this!
Does anyone know if there is any impact, or anything special I need to do, when moving two hosts from a cluster into a new cluster? Due to our Oracle licensing we have to remove two hosts from the cluster and put them in a separate cluster. These hosts run strictly development and production servers/Oracle DBs.
I'm using VMDirectPath/PCI passthrough with two cards in an ESXi 5 host environment:
- The first card is a standard PCI card (non-Express), an "AVM B1 ISDN-Controller"
- The second card is a PCIe card, a "Renesas USB 3.0 Controller"
Both passed-through cards have been working fine with ESXi 5.0 for a long time...
After an update to ESXi 5.5, passthrough of the standard PCI card doesn't work anymore:
- Both cards are listed fine as passthrough devices in the vSphere Client
- Both cards are configured with "msiEnabled=FALSE" in the VM, and no errors occur when the VM powers on
- Both cards are listed without errors in Device Manager of the Windows Server 2003 VM; no exclamation mark appears
- The USB 3.0 controller is working fine; devices are recognized and can be accessed
- But the driver of the PCI ISDN card logs an error in the System log of Event Viewer in the Windows VM ("the card cannot be found"), and the card is not working
After reinstalling the "old" ESXi 5.0 and restoring a configuration backup, everything works fine again.
A 'cat vmware.log | grep -i pcip' of the VM log file shows no relevant differences relating to PCI passthrough between ESXi 5.0 and 5.5.
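Are there any other host-side checks worth comparing between the two builds, e.g. lspci | grep -i avm or esxcli hardware pci list from the ESXi shell (assuming the AVM vendor string shows up there), that could reveal what 5.5 does differently with a conventional PCI device?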
Any idea what has changed in ESXi 5.5 compared to ESXi 5.0 regarding PCI card passthrough/VMDirectPath?
So, as I understand it, due to OS X licensing, ESXi generally needs to be installed on Mac hardware (if you want to run OS X guests). Now that the new Mac Pro tower (is that the right way to describe it, since it isn't very towering...) is about to drop, will there be a new ESXi build to support the new hardware, or is the latest ESXi 5.5 with patches already compatible?
Someone removed a virtual machine from the vCenter Server. That VM is very critical; is there any way to retrieve it? At the very least, is there any way to find where that VM is located (on a local datastore or SAN datastore) and its path?
For reference, I am attaching a screenshot. Please help me out.
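Would something like this from an SSH session on the host be a reasonable way to locate it, assuming the files were only removed from the inventory and not deleted from disk?
find /vmfs/volumes/ -name "*.vmx"
And once the right .vmx is found, can it simply be re-registered with vim-cmd solo/registervm /vmfs/volumes/<datastore>/<vm folder>/<vm>.vmx ?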
I'm new to virtualization, and to IT for that matter. I'm trying to set up a lab on Windows Azure so that I can play around with vSphere and maybe the Web Client. Can I install ESXi 5.5 on a virtual instance of Windows Server 2012 R2, or am I being stupid?
Retrieving IPMI sensor data through esxcli works for several hours and then suddenly stops working. Other namespaces like esxcli hardware cpu look fine. During this "not working" period, IPMI on the iLO works fine (I've checked it with ipmitool using the lanplus protocol).
After another several hours it starts working again, or after a reboot (but that isn't a solution). Any ideas what's going on?
The problem occurs on an HP ProLiant DL360 G7.
VMware ESXi 5.1.0 build-1065491
I'm not able to identify any activity that could trigger this behavior.
(Setting/unsetting the "623" option on the iLO makes no difference.)
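(For reference, the out-of-band check mentioned above is roughly: ipmitool -I lanplus -H <iLO IP> -U <user> sdr elist, and that keeps returning sensor data even while the host-side query is failing.)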
In logs (hostd.log) I've found:
2013-06-07T09:51:39.585Z [FFE40B90 verbose 'Vimsvc.Ticket 52 b2 85 df 89 a4 7c 3f-6e ad 08 7d ac 2b 36 95'] Ticket issued for root
2013-06-07T09:51:39.587Z [6E443B90 verbose 'Vimsvc.Ticket 52 b2 85 df 89 a4 7c 3f-6e ad 08 7d ac 2b 36 95'] Ticket used
Accepted password for user root from 127.0.0.1
2013-06-07T09:51:39.587Z [6E443B90 info 'Vimsvc'] [Auth]: User root
2013-06-07T09:51:39.734Z [6E360B90 info 'Solo.Vmomi'] Throw vim.EsxCLI.CLIFault
2013-06-07T09:51:39.734Z [6E360B90 info 'Solo.Vmomi'] Result:
--> (vim.EsxCLI.CLIFault) {
--> dynamicType = <unset>,
--> faultCause = (vmodl.MethodFault) null,
--> errMsg = (string) [
--> "Unable to get IPMI Sensor data : No records or incompatible version or read failed :"
--> ],
--> msg = "",
--> }
2013-06-07T09:51:39.734Z [6E360B90 warning 'Locale'] Resource module 'EsxCLI' not found.
2013-06-07T09:51:39.734Z [6E360B90 warning 'Locale'] Resource module 'EsxCLI' not found.
2013-06-07T09:51:39.736Z [6E360B90 verbose 'Default'] CloseSession called for session id=544d4418-2796-ec54-8b17-990027e18f3b
2013-06-07T09:51:39.737Z [6E360B90 info 'Vimsvc.ha-eventmgr'] Event 1841 : User root@127.0.0.1 logged out (login time: Thursday, 01 January, 1970 20:27:46, number of API invocations: 0, user agent: )
I've installed ESXi 5.5 onto an E3-1275v3 with 6 NICs. For testing purposes, I've installed two VMs: one Linux Mint 16 (based on Ubuntu 13.10) and one Windows 7 Professional. After installing VMware Tools on both, the Windows 7 Professional VM is able to access the NAS over a network share (AFP or SMB) with sequential throughput reaching up to 107 MB/s via CrystalDiskMark. Unfortunately, the Linux Mint 16 VM is only able to reach 6-10 MB/s via dd and around 20-25 MB/s via rsync.
I'm using VMXNET3 interfaces, although I've also tried E1000 for testing purposes. I have also tinkered with disabling LRO and TSO. Typically I allocate 2 GiB of memory for Linux VMs and 4 GiB for Windows VMs. If there is any additional information I'm missing and should provide to better troubleshoot this issue, please let me know.
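(For what it's worth, the LRO/TSO tinkering inside the guest was done roughly like this, interface name aside: ethtool -K eth0 tso off gso off lro off, so far without any improvement.)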
I have a simple question. As of now I have VMware set up on two different physical machines. Each machine runs 3 VMs (servers). I usually use the vSphere Client to access the host and then get into the VM consoles, or I can just RDP into the VMs. My question is that when I hook these two machines up to my KVM switch and monitor, I only see the configuration menu (the DCUI). Is there a way to set it up so I see the same thing as if I were using the vSphere Client software? All I see are configuration menus. Thank you for your help.
I'm looking for any information to explain why Windows servers (2003 and 2008) running as ESXi 5.1 guests are showing zero processors in the System Properties dialog. Performance is normal, and the Task Manager shows the correct number of vCPUs. But System Properties looks like this:
The machine has two vCPUs. Is it normal for a zero to be listed here? I haven't been able to find anyone mentioning a similar observation, but I see it across more than one guest on more than one OS on more than one host.
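(If it helps to compare, would a WMI query inside the guest, e.g. wmic computersystem get NumberOfProcessors, be a sensible cross-check, or is the zero in this dialog purely cosmetic?)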
Are there currently any limitations when running vCenter Server Essentials 5.0 in terms of vCPUs? For instance, I manage 3 hosts per vCenter Server, each with 2 physical processors (20 cores per host), giving a single vCenter Server roughly 60 cores to manage.
How about vCenter Server Essentials 5.1? I have browsed the Configuration Maximums documents for each version, but neither states a limit on vCPUs managed by a single vCenter Server, and they do not take licensing into account.
I am new to VMware and wanted to look at the OpenFlow support I have heard about. I downloaded the standard "free" edition of vSphere ESXi and do not see anything about OpenFlow in the networking configuration. Is that only supported in the Enterprise version of vSphere? I was hoping to understand it more before requesting funds to purchase something. Please let me know.
I am running a relatively fresh installation of ESXi 5.1.0 on a Lenovo T430 ThinkServer. The installation and general performance seem to be completely normal aside from a strange glitch I can't seem to shake.
I have a StarTech 2-port PCI Express USB 3.0 card installed in the server that I am hoping to use within a guest OS (Windows Server 2012 R2). In the vSphere Client (5.1.0), I open the "Configuration" tab and select "Advanced Settings" to configure the passthrough. My card is listed there, so I assume the hardware itself is being detected by the motherboard and by ESXi. I am able to successfully check my card, an NEC Corporation uPD720200 USB 3.0 Host Controller, to add it to the list of pending hardware that will be made available for passthrough (see attached images).
I am prompted to restart the host in order to finish the process. However, once I do this and log back into the vSphere Client, I find that the passthrough list is completely empty. The entry that showed as pending completely disappears, and I am right back to square one.
Has anyone experienced this issue? If not, does anyone have any sort of intuitive guess as to why the card would appear, attempt to add, and then disappear without any notice of failure? I really appreciate any sort of advice or suggestions.
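If nothing else, my next step is to check from the ESXi shell whether the selection is actually being written out before the reboot; my (possibly wrong) understanding is that it should end up in /etc/vmware/esx.conf, so something like grep -i passthru /etc/vmware/esx.conf before and after clicking OK should show whether the device owner ever changes.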
We were trialling vCOps in our environments and have now removed both versions of vCOps. We unregistered vCenter inside the vCOps admin console and removed the vCenter extensions; vCOps has disappeared from the vSphere desktop client but not from the vSphere Web Client.
Please refer to the screenshots below.
Can you please advise how we can remove the obsolete links from the vSphere Web Client, highlighted in the screenshot above?
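(As mentioned, we have already unregistered vCenter from the vCOps admin console; is there anything further that should be cleaned up through the vCenter Managed Object Browser, i.e. https://<vCenter>/mob, content > ExtensionManager, where UnregisterExtension can be invoked against any leftover vCOps extension keys?)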
I need some help. I have an environment with 2 hosts, each running vSphere 5.1.0 build 799733. They are clustered and we are using VSA 5.1. There is a vCenter Server running on a VM, and it is also at version 5.1.
In this environment, we are having problems in many of the VMs related to VMware Tools. If we upgrade our 2 hosts from 5.1 to 5.5, the VMware Tools errors will go away. So, what is the best way to approach this?
What is the proper update sequence? From my research, before upgrading the hosts to 5.5, vCenter Server must be upgraded first. But I'd like to get your input and find out exactly what the right approach is.
Easy question here for you guys. I am installing ESXi 5.1 U2 (the HP custom ISO) on an HP ProLiant ML310 Gen8 v2 server. We will eventually have a RAID 1 array for data later on, but I want to install ESXi on the USB flash drive that is plugged into the board. Should I just install ESXi from the CD-ROM onto that flash drive? Or should I use Intelligent Provisioning, which has an option under OS for a VMware custom image? When I get to that point, I am unable to see the USB drive as an option, only the RAID array. What do I need to do to make that USB drive available as an install target? Or should I just install ESXi from the CD directly to the USB drive and set up the RAID arrays later? Thanks.
We migrated a VM running Windows Server 2003 R2 64-bit from Hyper-V to ESXi 5.1 (build 1117900) using VMware Converter for the V2V migration, which configured an Intel E1000 as the NIC (with the Microsoft driver inside the VM).
The migration went smoothly and the machine ran fine for more than a month, but under light load.
As soon as the machine started being stressed a bit, with some network transfers, we started getting PSODs like the following on the ESXi node:
After some reading in forums, where this issue seemed more related to VMXNET3 than to E1000 (in fact many suggested switching to E1000...), we did the following based on various findings:
1) Upgraded ESXi 5.1 to the latest build (1157734; this was done without any specific indication it would sort the issue out)
2) Installed the Intel drivers (Intel PROSet) in the guest instead of the Microsoft ones
3) Disabled TCP offloading in the Intel driver settings (see the note just below for the command-line equivalent).
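(For step 3, in case anyone wants the command-line equivalent on Server 2003, OS-level TCP Chimney offload can also be disabled with something like: netsh int ip set chimney DISABLED. I mention it only for completeness; the change above was made through the Intel PROSet driver settings.)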
I will let you know if the node runs more stably, but in the meantime I am asking for suggestions, if any.
Unfortunately there was no ESXi kernel dump, so it is pointless to open a call with VMware at present.