jasonboche's Accepted Solutions

The time.synchronize.resume.disk = "FALSE" setting did the trick. Thanks again. Perhaps you could consider awarding points for correct/helpful answers. Thank you, Jas [i]Jason Boche[/i] [boche.net - VMware Virtualization Evangelist|http://www.boche.net/blog/] [VMware Communities User Moderator|http://www.vmware.com/communities/content/community_terms/] [Minneapolis Area VMware User Group Leader|http://communities.vmware.com/community/vmug/us-central/minneapolis]
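For reference, the change is just a one-line addition to the VM's configuration file, made while the VM is powered off. A minimal sketch (the .vmx path here is a stand-in; on ESX it would live under /vmfs/volumes/&lt;datastore&gt;/&lt;vm&gt;/):

```shell
# Sketch: disable VMware Tools time sync on resume-from-disk by appending the
# option to the VM's .vmx while the VM is powered off.
VMX="myvm.vmx"        # stand-in for /vmfs/volumes/<datastore>/<vm>/<vm>.vmx
touch "$VMX"

cat >> "$VMX" <<'EOF'
time.synchronize.resume.disk = "FALSE"
EOF

# Confirm the option landed:
grep time.synchronize "$VMX"
```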
Here you go: http://www.petri.co.il/how-to-deal-with-vmware-esx-server-vcb-multipath-issues-consolidated-backup-windows.htm
No. VMkernel/COS network connections are not shown in the maps, and there is no way to make them appear in the current versions. Jas
VirtualCenter 2.5 Update 3 was designed to resolve those issues. There are no new features in this build; it's all bug fixes.
Why do you want your VMFS partition to be primary? In a nutshell, it can't be unless it's on a separate disk. The reason is that there can only be four primary partitions per drive. The first three will be taken up by /, /boot, and swap (per best practices). The fourth will be consumed by an extended partition, which will contain the logical partitions for the remaining mount points. As for your custom sizes, they'll work. / is a little overboard. I generally go 4096 for /tmp, but the size is up to you. With all the disk space you have, I'm not sure why you're bothering with the "grow" component of your partitioning. I don't use it; fixed sizes help guard against fragmentation.
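To make the scheme concrete, here is a sketch of what that layout might look like in an ESX 3.x-style scripted-install (kickstart) file. The sizes and the local VMFS partition are illustrative assumptions, not values from the thread:

```text
# Illustrative ESX 3.x-style kickstart partitioning -- sizes are examples only.
# Three primary partitions first:
part /boot --fstype ext3 --size 250  --asprimary
part /     --fstype ext3 --size 5120 --asprimary
part swap  --size 1600               --asprimary
# The remaining mount points become logical partitions inside the extended
# partition (the fourth and final primary slot):
part /var/log --fstype ext3  --size 2048
part /tmp     --fstype ext3  --size 4096
part None     --fstype vmfs3 --size 20480
```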
Removing/re-adding the VM is fairly non-disruptive. It doesn't impact power operations of the VM or the VM's availability on the network. About the only things you have to do are re-add it to whatever VM folder structure you previously had it in, and check whether the VM's historical performance data gets nuked (you may or may not care about that one).
Per the 2.5 installation docs, your DSN user needs SQL Server sysadmin permissions. In SQL Enterprise Manager, drill down through Security > Logins, open the properties of your database user account, then go to the Server Roles tab and check the box for "System Administrators". After the upgrade is complete, I'd remove the sysadmin role from your VirtualCenter DSN database user.
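The same grant/revoke can be done in T-SQL rather than clicking through Enterprise Manager. The server and login names below are hypothetical; the script is generated locally and the commented osql line shows how it would be run against SQL Server 2000:

```shell
# Sketch: grant the DSN login the sysadmin server role via T-SQL instead of
# the Enterprise Manager GUI. DOMAIN\vc_dsn_user and SQLSERVER01 are made-up
# names -- substitute your own.
cat > grant_sysadmin.sql <<'EOF'
EXEC sp_addsrvrolemember 'DOMAIN\vc_dsn_user', 'sysadmin'
-- After the VirtualCenter upgrade completes, revoke it again:
-- EXEC sp_dropsrvrolemember 'DOMAIN\vc_dsn_user', 'sysadmin'
EOF

# On a box with the SQL client tools, using a trusted connection:
# osql -S SQLSERVER01 -E -i grant_sysadmin.sql
```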
VMotion across a router to a different subnet completed successfully. It ran slowly at first, though, because I forgot that I was packet shaping on the router to simulate a T1 WAN link. So when I started, I was VMotioning over a 1.544 Mbps T1. Jas
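A back-of-the-envelope calculation shows why that felt slow. Assuming a VM with 1 GB of RAM, and ignoring VMotion's iterative precopy passes and protocol overhead (so these are rough lower bounds):

```shell
# Rough lower bound on memory-copy time for a VMotion over two link speeds.
# 1 GB of guest RAM is an assumed size, not a figure from the thread.
ram_mb=1024
t1_mbps=1.544
gige_mbps=1000

summary=$(awk -v mb="$ram_mb" -v t1="$t1_mbps" -v ge="$gige_mbps" 'BEGIN {
    bits = mb * 8                                  # total megabits to move
    printf "T1:   %.0f s (~%.0f min)\n", bits / t1, bits / t1 / 60
    printf "GigE: %.1f s\n", bits / ge
}')
echo "$summary"
```

Even before counting memory dirtied during the copy, a 1 GB guest needs on the order of an hour and a half over a T1, versus seconds over GigE.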
Actually, I was going to suggest that the 600% network utilization may be traffic isolated to VMs on the same virtual switch on an ESX host. Remember that two VMs connected to the same virtual switch communicate with each other as fast as the ESX host bus speed allows. Intra-virtual-switch traffic is not limited to 1 Gbps for the sake of emulating a physical NIC. On a powerful ESX server, network utilization on a virtual switch can exceed 1 Gbps by quite a bit, which could account for the 600% numbers you are seeing.
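As a quick sanity check on the numbers, assuming the chart normalizes utilization against a single 1 Gbps physical NIC:

```shell
# If 100% on the chart means one 1 Gbps pNIC, a 600% reading corresponds to
# about 6 Gbps of traffic flowing between VMs on the same vSwitch.
pct=600
nic_mbps=1000
traffic_mbps=$(( pct * nic_mbps / 100 ))
echo "${traffic_mbps} Mbps of intra-vSwitch traffic"
```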
> So, doing various searches, I've seen many highly respected posters advise NOT to upgrade from VMFS2 to VMFS3 in place, due to possible performance issues. Can any of you elaborate?

Here's the performance reasoning:
http://www.vmware.com/vmtn/resources/608
http://www.vmware.com/pdf/esx3_partition_align.pdf
There are many threads in these forums regarding this problem, and a few end-user-documented workarounds. The basic workaround is to install the VC 2.x agent on your ESX hosts manually, which will resolve the problem. Another workaround involves creating a particular directory under /var/, if I remember correctly. Perform a forum search on your error message and you're bound to find one quickly.
If you have redundant controllers in your MSA1500, or if you want multipathing, of course you'd need a switch. I'm fairly certain I read in the VMware SAN guide that direct connect from HBA to SAN controller is not supported, although I've never tried it myself to see if it works. My thought is that if you're going to spend the money on the hosts, the SAN, and the VMware Enterprise licensing, it's a no-brainer to spend a little extra for the redundancy that multipathing and fabric switches will provide for your environment. You're putting a lot of eggs in one basket to allow this kind of single point of failure.
I consider the MSA1500 entry level in the HP family of SANs. I have a handful of them, a few of them back-ending the ESX LAB/DEV environment. I think you'll find they will handle many more VMs than the MSA1000 figure you are looking at. Of course, some of that will depend on the number of disk shelves you attach to the MSA, the spindle count, and the LUN configuration.

> I will be running at most 12 VM's using 2-3 ESX Servers with the usual MS packages

MSA1500 is plenty of horsepower for that, considering average loads.

> Exchange Sql etc

Well, that makes things a little more interesting. You're going to have to elaborate on the Exchange and SQL configurations a little more before we can give the seal of approval here. SQL performance can vary widely depending on database size/configuration, front-end application coding, number of mailboxes per Exchange server, etc. I'm thinking a healthy number of striped spindles for SQL and Exchange, and multiple disk shelves (minimum 2). Multiple shelves also open up the possibility of RAID10, which costs a bundle but may be required if this is a critical environment. You have to be open to the scenario that losing a disk shelf is going to 86 your entire cluster and all the VMs on it. I've lost backplanes on Compaq RA4100 disk shelves in the past and it's not pretty.
It's just a text file. Even in the worst of botch jobs, you can copy an /etc/hosts file from a different ESX server and customize it for the new host.

> On that subject, if I remove the ESX server from VirtualCenter and then re-add it, the VMs on that ESX server will continue to run, right?

Yes, the VMs will continue to run.

> Why do you call it destructive?

Because you will likely lose your historical host and VM data and, depending on your configuration, you may have to reconfigure for DRS. You might also garf up any special VM resource reservations, limits, and shares, because in the process of pulling the host out of a cluster and a resource pool, you will also yank the VMs out too.

> I promise that is my last question; I will mark your reply as correct after this. I understand, I am just thinking in case I have a problem editing the /etc/hosts file for some dumb reason.
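The copy-and-customize step can be sketched as below. The hostnames and IPs are hypothetical, and the scp line is shown commented out since this sketch fakes the copied file locally:

```shell
# Sketch: seed a new host's /etc/hosts from a known-good ESX server, then swap
# in the new host's own name and address. esx01/esx02 and the 10.0.0.x
# addresses are made-up examples.

# On a real host you would pull the file over, e.g.:
# scp esx01:/etc/hosts hosts.copy
cat > hosts.copy <<'EOF'
127.0.0.1   localhost.localdomain localhost
10.0.0.11   esx01.example.com esx01
EOF

# Customize for the new host (new IP, new hostname everywhere it appears):
sed -i -e 's/10\.0\.0\.11/10.0.0.12/' -e 's/esx01/esx02/g' hosts.copy
cat hosts.copy
```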
Yes, we use BMC Remedy Action Request (AR) Server. As with most of our applications, we have 3 Remedy environments: DEV, QA, and Production. Each environment consists of a pair of Remedy servers: the application server, which is the "brains" running the Remedy application as well as the email daemon, and the "mid tier" server, which is just a fancy name for the web server running MS IIS and Remedy Mid Tier. All end users go through the Remedy mid tier server to submit their tickets and check ticket status; that is to say, all our end users are using the Remedy web interface. A few Remedy admins run the Remedy Admin client. The mid tier server talks to the application server, and the application server data is back-ended by SQL 2000.

So in summary, we have 6 Remedy servers. All are VMs, and we haven't had any issues running Remedy inside VMs (although I did test somewhat extensively at first, because Remedy is Java driven and I was concerned about the context switching). About 2,500 end users hit our Remedy servers. Going from memory, each Remedy server has about 1GB of RAM allocated to it (if even that; I can check tomorrow if you want to verify). Each Remedy server is single vCPU running on HP DL585 G1 AMD Opteron dual-core hosts, but since each VM only has 1 vCPU, dual core or vSMP has no bearing on current performance, which has been absolutely fine. If it would help you at all, I can get you some historical screen prints of our Remedy server performance to show you the CPU/memory/disk trending.

One thing I really like about running Remedy on VMware is that come patch time, I can throw the Remedy servers into snapshot mode. Remedy patches can be hairy, and a snapshot is my ace in the hole for recovery. There's nothing simple about a Remedy installation. Make sure you get good install documentation. It's not so much the installation as the massive configuration afterwards (which thankfully I have no responsibilities for).
Also note that each time you patch Remedy, it basically runs like a reinstall, so the install documentation is very handy at that time too. A Remedy install is anything but clicking Next and accepting all defaults. I do understand your engineer's concern, and he should be complimented on his due diligence. Perhaps in a massive environment with tens of thousands of Remedy users, running virtual might be a concern, but not at our user count of 2,500. As I said before, context switching was a concern of mine, but it's been running just groovy. The improvements that came with VI3 helped in the kernel mode call area, but I will say that we ran the same Remedy environments on ESX 2.x before we upgraded to VI3 and things were still fine then. Let me know how I can be of more help.
> I have VC 2.0, but how do I migrate to the same host, and what files need to move to the SAN?

When you migrate the VM, you're going to keep it on the same blade, but after you put the HBA in, the migration wizard will ask you what storage you want it to go to. That's when you'll pick the SAN storage. Essentially it will keep your VM registered on the existing blade host but move the storage from local to SAN.
> I just noticed this today. Whenever I VMotion a guest, it resets the uptime clock. This occurs whether I manually move a guest or put a host into maintenance mode. I understand it is calculating the uptime only for the host it is currently running on. It would be nice to be able to see how long it has been up across a cluster, like it seems to work in VC 1.3. Just posting this to verify that this is what others see and to let everyone know about this change between versions.

Noticed this tonight and I hate it. Uptime should only be reset by a VM hardware power-off operation (which can be performed by a power off or a guest shutdown, which in turn powers off the VM). I'm submitting the feature/bug request on this one. Jas