Hello, I have an ESXi 5 license that I bought for 3 years back when licensing was based on memory and CPU. Now that VMware has made memory unlimited, how can I get my license to recognize more memory?
Regards
Hi,
I'm trying to download the Dell customized esxi 5.1u1 iso from http://www.dell.com/support/drivers/us/en/19/driverdetails?driverid=RD5JN but the link is broken. Is there an alternate location I can get this from?
Thanks.
***Newbie alert***
Hi all.
I'm learning to use ESXi 5.1 run from a USB key on a home "server" (a Lenovo Xeon tower with 18 GB RAM).
I've created a Debian Wheezy (7.0) guest on Fusion 5.0.3. As I read in the documentation, you should be able to move VMs between VMware products.
I created the VM (Debian 7 64-bit) with the Debian 6 64-bit profile, as there isn't a Debian 7 profile.
The new vm runs fine in Fusion 5.0.3.
I opened the vSphere Client and copied the "Debian\ 6\ 64-bit.vmwarevm" folder over from the Mac (OS X 10.8.x) using the copy-folder function in vSphere.
No problem on the move.
I add it to the inventory by selecting the ".vmx" file in the "Debian\ 6\ 64-bit.vmwarevm" folder. It adds OK.
I check the virtual machine's properties and they look OK. I want to add/change the network adapter to point at the ESXi network switch.
But I can't add/change networking or start the VM. The file seems to be locked.
I ssh into the ESXi host, check the permissions on the files in the folder, and compare them to another Debian VM that runs fine on my ESXi.
I see that the copy created incorrect permissions. This is probably a bug in vSphere, as with these permissions ESXi can't run the guest anyway.
-rw------- 1 root root 2553 May 30 23:09 Debian 6 64-bit.vmx
-rw------- 1 root root 0 May 30 23:09 Debian 6 64-bit.vmsd
-rw------- 1 root root 354143 May 30 23:13 vmware-0.log
-rw------- 1 root root 360004 May 30 23:13 vmware-1.log
-rw------- 1 root root 352495 May 30 23:13 vmware-2.log
-rw------- 1 root root 355371 May 30 23:13 vmware.log
The other running Debian VM (downloaded from TurnkeyLinux) has the following permissions:
700 for the .vmx file
644 for the .vmsd file
644 for all the .log files.
So I change the permissions.
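For reference, these are roughly the commands I run over SSH to match the working VM's permissions (the path is the one from the error message below; adjust to your datastore):
cd "/vmfs/volumes/hd/Debian 6 64-bit.vmwarevm"
chmod 700 "Debian 6 64-bit.vmx"                 # match the working VM's .vmx permissions
chmod 644 "Debian 6 64-bit.vmsd" vmware*.log    # .vmsd and the logs get 644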
After changing the permissions, I try again to change network or start. I get the following error.
"Cannot open the configuration file /vmfs/volumes/hd/Debian 6 64-bit.vmwarevm/Debian 6 64-bit.vmx.
An error occurred while opening configuration file "/vmfs/volumes/hd/Debian 6 64-bit.vmwarevm/Debian 6 64-bit.vmx": Error. "
I can't get the vm to run on ESXi.
Additional note:
I checked the original guest on the Mac. In Fusion I see the following permissions:
-rw-r--r-- 1 user user 0 27 May 14:13 Debian 6 64-bit.vmsd
-rwxr-xr-x 1 user user 2555 31 May 08:57 Debian 6 64-bit.vmx
Now I'm totally confused.
What have I done wrong?
What should I do to get this to work?
I'm trying to get ESXi installed on a system with a SuperMicro X9SBAA-F motherboard.
I've overcome the "no keyboard" issue (due to all ports being USB 3.0) by installing over IPMI (also tried PXE) and using ks.cfg.
However, I can never get the ESXi installer to see my drive (WD Red 1.0 TB) via the onboard 88SE9230 controller. I found a Japanese website with a VIB that supposedly supports that and similar Marvell controllers; I rebuilt the ISO with it, but still no luck.
The chipset is over a year old, so I'm surprised it's not yet supported... either that or I am missing something.
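In case it's useful, this is what I check from the installer's shell (Alt+F1) after booting the rebuilt ISO, to see whether the controller and disk are detected at all (just a sanity check, not a fix):
lspci | grep -i marvell              # is the 88SE9230 visible on the PCI bus?
esxcli storage core adapter list     # did a driver claim it as a storage adapter?
esxcli storage core device list      # does the WD Red show up as a device?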
Any suggestions to get the installer to see the drive are appreciated!
I need to migrate a storage server from win2k3 to win2k8r2. If I could just disconnect that drive from the old server and attach it to the new one, that would be an ideal solution.
Is that possible?
I'm in the process of upgrading to ESXi 5.1. As part of the upgrade I'm also trying to create some new datacenters, clusters, and folders.
Is there any way to move an ESXi server to a different datacenter without removing it from vCenter? I know I can move the ESXi server between clusters within the same datacenter simply by putting it in maintenance mode, but when I attempt to move it between datacenters I get an error indicating that the operation is not supported on the object.
I also know that I can disconnect and remove the ESXi server from vCenter and then reconnect it where I want, but removing the host from vCenter will delete all its historical data from the database, and I'd rather not do this if it can be helped.
Thanks,
jd
I need to move an iSCSI volume that is currently attached to a standalone Windows box to a virtual Windows box. Can this be done without formatting? When I attach a new volume I get a format warning, and I can't lose the data.
First off, I am not having any problems right now; I'm just deploying a new SAN and taking baselines. I'm noticing a bit of a discrepancy between what my SAN says and what VMware says, primarily around latency. I see somewhere in the neighborhood of 1-4 ms of latency per LUN using my SAN monitoring tools. However, on the vCenter side, I have occasional spikes into the 15-30 ms range that do not appear in the SAN tools. I can only assume this latency is on the VMware side. Also, IOPS are often not equal, but that's probably not a big deal.
Is this normal? Again, it's not really a problem, as my average latency is about 1 ms; it is just the occasional spikes I see from vCenter that do not appear on the SAN that concern me.
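For what it's worth, this is how I've been capturing the VMware-side numbers to line up against the SAN tools (the interval and sample count are arbitrary choices of mine):
esxtop                                            # interactive: 'd' = adapters, 'u' = devices, 'v' = per-VM disk
                                                  # compare DAVG/cmd (array latency) against KAVG/cmd (time spent in the VMkernel)
esxtop -b -d 5 -n 60 > /tmp/disk-baseline.csv     # batch mode: 60 samples at 5-second intervals for later comparison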
Hmm... with the release of 5.1, they seem to be encouraging everybody to move to the web client. But it seems that Mac OS X is still not supported. What's the plan to support Mac OS X? I really don't want to have to run a VM (in Fusion) to manage my environment....
Will I lose HA with a VM that has an RDM?
Hi,
Our VM environment (ESXi 5.0) is backed up by vRanger 6.2. The vRanger box is not a VM; it's a proxy server to which all the LUNs are presented. Just recently I carved out 2 new LUNs on the 3PAR and presented them to one of the ESXi clusters; later, when I remoted into the vRanger box, the server administration window popped up a screen asking me to initialize the 2 new disks. I unknowingly initialized these 2 disks on the vRanger box. But this crossed my mind:
The above is from the vRanger deployment doc. Please suggest what needs to be done.
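In the meantime, this is how I'm checking from one of the ESXi hosts whether a signature actually got written to those LUNs (the naa ID below is a placeholder for the real device ID):
esxcli storage core device list                                # find the naa IDs of the two new 3PAR LUNs
partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx    # show the partition table type and partitions on the LUN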
Thanks!!
Hi Guys,
I am trying to create a VM with an RDM (virtual compatibility mode) disk, and when I am done creating the VM and click Finish, it shows me the error message "The virtual disk is either corrupted or not a supported format" (screenshot attached).
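To rule out the wizard, I'm considering creating the mapping file by hand over SSH, roughly like this (device ID, datastore, and file names are placeholders):
vmkfstools -r /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/myvm/myvm-rdm.vmdk   # -r = virtual compatibility; -z would be physical
and then attaching the resulting .vmdk to the VM as an existing disk.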
Hi Experts
We have both ESX 4 and ESX 5 in our environment.
I observed something strange with ESX 5: we cannot create a thick disk for a VM on NFS datastores,
while the same works with ESX 4. There, I am still able to create a thick disk using the same NFS volume.
Has anything changed in ESX 5.x that doesn't allow users to create thick disks on NFS?
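To check whether it's just the client UI, I also tried creating a thick disk from the command line on the ESX 5 host (size and datastore name are placeholders):
vmkfstools -c 20G -d zeroedthick /vmfs/volumes/my-nfs-datastore/test/test.vmdk   # -d thin / zeroedthick / eagerzeroedthick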
-Srinivas
Greetings,
I am looking to utilize RDMs for a SQL 2012 cluster, and in going through the process of adding shared storage to the second node I am a bit confused.
I am referencing VMware's document, "Setup for Failover Clustering and Microsoft Cluster Service", and following the steps under "Clustering Virtual Machines Across Physical Hosts". I went ahead and added my LUNs, which are on shared storage, as RDMs to the first node. The next step is to add the hard disks to the second node; however, this process requires me to select "Use an Existing Disk" and select the disks that represent the RDMs stored in the datastore where host1 exists. The document reads, "Select Physical as the compatibility mode and click Next." The only option I get is to select the mode: Persistent, Nonpersistent, or None. I do not have an option to specify the disk as being physical or not. I did when I created the RDMs for the first host.
Also, when adding the disks as RDMs, should I store them with the VM (which is on shared storage), or should I create a separate datastore on the same shared storage and specify that? Since each of the RDMs is a separate LUN on the shared storage, I am curious whether it's better to choose the option to store them in a different datastore than with node 1.
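For reference, this is how I'm verifying what each mapping file actually points to before presenting it to the second node (the path is a placeholder):
vmkfstools -q /vmfs/volumes/shared-ds/node1/rdm1.vmdk   # reports whether the descriptor is an RDM and which LUN it maps to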
Any clarification on this is greatly appreciated.
Cheers
Hi Everyone,
I just installed a PCI Express 4-port NIC into my DL380 G8 server, but since then my ESXi server only finds the add-on card, and the onboard NICs have disappeared.
Is there something I need to activate?
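For reference, this is what I run from the host's shell to see which NICs ESXi actually detects; right now only the add-on ports are listed:
esxcli network nic list   # vmnics that have a loaded driver
lspci                     # full PCI listing; the onboard adapters should still appear here even without a driver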
Any suggestions?
Thank you
Why is a vSwitch not required to run STP (Spanning Tree Protocol)? Any thoughts are always welcome.
Hi. I currently have an ESXi 5.1 server with a 1 TB SATA hard drive that's using VMFS5 as a datastore. I wanted to test 5.0, so I installed it on a fresh flash drive.
The problem is that I cannot find any storage device to create a datastore on, even though the drive is physically connected the same way as when I'm running 5.1; even after swapping hard drives, it never appears. I have tried listing the devices with no success. I would like to add it as an existing datastore to 5.0.
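This is roughly what I've been running over SSH on the 5.0 install to look for the disk and the existing VMFS volume (no luck so far):
esxcli storage core device list     # does the 1 TB SATA disk show up as a device at all?
esxcli storage vmfs snapshot list   # existing VMFS volumes the host sees but has not mounted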
What could be the problem?
Hi All,
I followed this guide: http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-50-mscs-guide.pdf
to set up the cluster. I added my first quorum disk as an RDM (virtual), SCSI 1:0. When I add the same disk to my second node, I get this error message when powering on.
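For context, these are the controller lines I'm comparing in both nodes' .vmx files, based on my reading of the guide (the values are just an example for a virtual-compatibility setup; the file name is a placeholder):
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "quorum-rdm.vmdk"
The key line is scsi1.sharedBus; both nodes need the same setting, and scsi1:0.fileName has to point at the same mapping file on each node.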
Any ideas?
I know there are many factors involved in answering this, but is there an average GB/hour rate to expect? Assume we have no bottlenecks in the storage and we are using screaming-fast 15K disks on both source and target. I also plan to clone it off-hours and while it is powered off.
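As a back-of-the-envelope check of my own (pure arithmetic, assuming a sustained copy rate of about 100 MB/s): 100 MB/s x 3600 s comes to roughly 350 GB per hour, so a 500 GB VM would take around an hour and a half under ideal conditions. I'd just like to know whether that kind of figure is in the right ballpark.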
Thanks!!
Does anyone have any experience with using Qlogic QLE7340 40 Gbps Infiniband HCA PCI-e Cards?
Specifically, in using them with a direct-cable-connection to link two or more ESXi servers?
I.e., to simply move files between Datastores, or for vMotion, Dynamic Resource Allocation, etc., etc?