virtual_dave1's Posts

Still no joy, same error. Opened SR1594856101, will update.
OK, so I installed SRM on the second site's vCenter and it works fine from the client. The notable difference is that the DB in site 2 is local on the vCenter machine - SQL Express 2005, named instance, SQL auth - whereas in site 1 the DB is a remote SQL Standard 2005, default instance, SQL auth. Now that I know the client side is good I will re-install SRM at site 1 and report back.
Hi, yes I restarted the VC\SRM server and the client. I haven't installed the 2nd server yet - thought I would crack this problem first... do you think that is relevant?
Don't think so - the host firewall on both the client and the server is off (service set to disabled), and there are no other firewalls in our network infrastructure between me and the SRM server. If I point IE at https://<vcenter-hostname>:8095/dr I get the usual certificate warning (auto-generated certs, not from our PKI) and then, after accepting the cert warning, a 404 Not Found.
Sorry, forgot to add a crucial piece of information - if I RDP onto the VC\SRM server, run the VIC from there and install the SRM plugin, it works fine and I can access SRM.
Hi all,

Installed SRM 4.1 onto the vCenter server (also 4.1). The install completes successfully and the service is running. I'm using SQL auth as the SQL 2005 server is remote to the vCenter server. The database seems to be set up OK - the documentation was a bit lacking, but after some trial and error the SRM server installed fine, the DB is populated with tables etc. and no errors pop up during the install.

I download the plugin to my vSphere client and install it fine. I restart the VIC, log into vCenter as usual, click on Site Recovery and get a popup saying: "Connection to local Site Recovery Manager https://<vcenter-hostname>:8095/dr failed". That's it - no more info.

The VC\SRM server is 2008 R2. On the remote SQL 2005 server I created a new DB, created a new schema under database\Security, created a new user (a SQL user, not a Windows user), mapped the user to the DB using the new schema, set the user as owner of the schema, and gave the schema permissions on the DB. The username matches the schema name, which also matches the DB name. The DSN is 32-bit (created from c:\windows\syswow64). As it happens, in an effort to troubleshoot I have given the user all server roles excluding sysadmin, so in particular it is a member of the bulkadmin server role.

So SRM seems to have installed OK - no errors shown in the event log or when the service starts - but I just can't connect the plugin to it. The client is Win7 x64, with UAC enabled, running as a normal user - this is how I work day-to-day, and all other VMware stuff works fine like this (VC, VUM etc.). I've tried running the VIC as local and domain admins but the error remains the same.

Any ideas or pointers? Cheers
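For anyone else hitting this, one quick way to rule out basic TCP reachability to the SRM listener (port 8095 here) from the client machine is a small socket check - a generic connectivity sketch, nothing SRM-specific, and the hostname/port are whatever applies in your environment:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("vcenter-hostname", 8095)
```

If this returns True from the client but the plugin still fails, the problem is above the TCP layer (certificates, proxy settings, the plugin itself) rather than the network path.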
OK, after the luxury of time sat on a Symantec support call, I have found this thread: http://communities.vmware.com/thread/279965?tstart=30 Sorry to duplicate.
Hi all,

ESX 4.0u2 (soon to be ESXi 4.1), ProCurve 3500yl.

I have a vSwitch for VMs with several port groups for different VLANs. This vSwitch has 4 pNICs going to a single ProCurve 3500yl. On the pSwitch, the four ports are trunked so VLANs can be tagged into the vSwitch, and on ESX the vSwitch load balancing policy is set to IP hash (required with ProCurve trunks). This is set at the vSwitch level.

In order to improve redundancy I have a second ProCurve 3500yl. I would like to take two of the active NICs and place them into standby, cable them into the second pSwitch and reconfigure the trunking etc. They would then sit unused until called upon in a failover scenario. Bandwidth-wise, 2 active pNICs is more than enough according to the performance metrics.

When I go to create this configuration I get a warning\error message - when moving two of the four active NICs down to standby (again at the vSwitch level) it says: "The IP Hash based Load Balancing does not support Standby uplink physical adapters. Change all standby uplinks to active status". As far as I understand it, I need the IP-hash policy to work with the ProCurve trunks.

pSwitch1 and pSwitch2 are connected via a 2-port (1Gb ports) LACP trunk and also carry traffic\applications other than VMware. The inter-switch trunks are not busy at all during normal operations. The switches are not 'stacked' as in a single management address. I am not sure if I can have 4 active NICs going to two different switches - they would obviously be two separate trunks at the pSwitch end, but would that matter to ESX and the IP-hash load balancing policy?

Can anybody explain a bit about why VMware won't allow this configuration, and what I might be able to do to achieve the desired result? My goal is to be able to withstand an outage of pSwitch1 (recent power issues at the datacenter have brought this on).
Outside of ESX, pSwitch1 has the primary IPVPN router connected to it, and pSwitch2 has the secondary router, with HSRP running between the two routers across the inter-switch trunk. A complete failure of either pSwitch will still allow connectivity to devices not solely connected to that switch - hence my desire for this setup. Thanks.
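To illustrate why standby uplinks clash with IP hash: under that policy the host derives the outgoing uplink from the source/destination IP pair, so every uplink must be an active member of the same aggregation group on the physical side. A toy model (VMware's actual hash is internal to ESX; this is only an illustration of the idea, with made-up vmnic names):

```python
import ipaddress

def pick_uplink(src: str, dst: str, active_uplinks: list) -> str:
    """Toy model of IP-hash uplink selection: XOR the (src, dst) addresses
    and take the result modulo the number of ACTIVE uplinks. A standby
    uplink sits outside this set, so the hash can never select it - and if
    it were promoted to active, the flow-to-port mapping would change,
    which is why the policy requires all uplinks to be active members of
    one trunk."""
    h = int(ipaddress.ip_address(src)) ^ int(ipaddress.ip_address(dst))
    return active_uplinks[h % len(active_uplinks)]

# e.g. pick_uplink("10.0.0.1", "10.0.0.2", ["vmnic0", "vmnic1", "vmnic2", "vmnic3"])
```

The modulo over the active set is the crux: the switch end hashes the same flows across the same trunk ports, and the two sides stay consistent only if the uplink membership matches the trunk membership exactly.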
Sadly it is the 'that's so cool' factor that the decision maker has running through his head that i'm trying to see past! On a GBP 250K project, would you pay an extra 50-60K for the "tha... See more...
Sadly it is the 'that's so cool' factor that the decision maker has running through his head that I'm trying to see past!

On a GBP 250K project, would you pay an extra 50-60K for the "that's so cool" factor? Hey, it's not my money, I just think that with the gap in quotes so large it seems silly to pay all that extra for the privilege of seeing a Sun sticker on your box when you make your monthly trip to the data center to change the autoloader... At this rate, if we outgrow an EqualLogic iSCSI SAN and need fibre, we can use that 60K we saved to buy a brand new FC SAN from another vendor!

Re training: good point. We're making a big leap here in terms of technology in the context of the company I work for, so we're all pretty much at square one together. Part of the tender was a knowledge transfer period after install, plus x months\years of hand-holding until we get up to speed ourselves.

I'm a techy too, so I'm not immune to the 'coolness' factor, believe me! But at what cost? Ideally somebody will comment on how the extra 60K is not for the 'cool' factor and will list 100 reasons why this Sun SAN is worth the extra money... Like I said before, it's win-win for me and the techies. I just need convincing that it is worth the extra money so I can get behind it. Cheers
Guys,

Wow, thanks for the quick responses. There are two EqualLogic models in the proposal, depending on the performance (e.g. SAS vs SATA) and size we're ultimately after: the PS5000X (16x400GB) giving 4.2TB in RAID 50, and the PS5000E (16x500GB) giving 5.3TB in RAID 50. I think (but may be wrong) that the 5000X is SAS and the 5000E is SATA.

The Sun proposal is 1x Sun clustered 7410 storage controllers (presumably 2x heads) with a Sun J4400 SAS array: 20x1TB SATA with 4x18GB 'Logzilla Flash Accelerator' (presumably these are the SSDs?).

Sun presents CIFS and NFS to the LAN natively, whereas EqualLogic doesn't. I appreciate the potential performance hit involved in setting up a Windows\Samba VM to then share out CIFS\NFS, but I'm not sure how significant that is at the moment...

Sun is upgradeable to FC, EqualLogic isn't. Not much to argue about there, except whether we think we'll outgrow iSCSI. We'd be fools to discount that possibility...

The SSDs sound great - a clever way of getting higher-capacity, cheaper SATA drives to perform like SAS. But it is screaming 'complexity' to me; surely it is just more to go wrong? And it must surely be more expensive to replace in the event of a failure?

The other main point I picked up about the Sun solution was that it is 'open source' and non-proprietary. In truth I don't really care about that - I'm not out to change the world, I just want value for money. Proprietary or non-proprietary, we still have to be pragmatic!

Appreciate any thoughts anyone might care to share. Thanks! dave
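As a sanity check on the quoted capacities, a rough back-of-envelope calc - the spare and span counts are my guesses, not vendor figures, and I'm assuming drives are marketed in decimal GB while the quote reports binary TiB:

```python
def usable_tib(n_drives: int, drive_gb: int, spares: int = 2, spans: int = 2) -> float:
    """Rough usable capacity of a RAID-50 set.

    Assumptions (guesses, not from the quote): `spares` hot spares are held
    out of the set, each of the `spans` RAID-5 spans costs one drive of
    parity, drives hold drive_gb * 10**9 bytes, result is in binary TiB.
    """
    data_drives = n_drives - spares - spans
    return data_drives * drive_gb * 10**9 / 2**40

print(round(usable_tib(16, 400), 2))  # PS5000X, 16x400GB
print(round(usable_tib(16, 500), 2))  # PS5000E, 16x500GB
```

This lands a little above the quoted 4.2TB and 5.3TB, with the remainder plausibly array metadata/reserve - so the quoted figures look consistent with two hot spares per member rather than with all 16 drives holding data.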
Hey guys,

About to embark on a consolidation\virtualisation project using VI3. Exciting times. We are in discussion with various 3rd parties who are each putting forward their tender, if you like, encompassing both storage and VMware. In terms of storage, we seem to be down to two vendors: Sun Microsystems and Dell\EqualLogic. Sun are proposing one of their new 7000 series 'Amber Road' SAN\NAS servers, and we would be using iSCSI.

Can I ask if anybody out there has experience of running VMware on the Sun 7000 SAN? Anybody got any tips or pointers, any gotchas to watch out for when speccing up this SAN?

I personally favour the EqualLogic approach in terms of simplicity and modularity\expandability, but can't ignore certain features of Sun like the ability to add an FC HBA at a later date should we outgrow iSCSI. We're hearing lots from Sun about ZFS and how it's the best thing since sliced bread, blah blah. But looking at the features we would be interested in (as it stands at the beginning of our virtualisation journey), EqualLogic appears to have it all covered too - if not already implemented, then on the roadmap. But then we keep hearing 'yeah, it's on the roadmap' from Sun too, e.g. de-dupe on-box in ZFS.

Basically, I'm wondering if anybody has any advice and, more interestingly, hands-on experience with these new Sun 7000 series boxes. I'm a little dubious that we'll be paying over the odds for a bit of iron because it has a Sun sticker on the front of the box... but would love to be corrected. For me as a techy it is a win-win situation! Sun or EqualLogic - I can live with either! Just want to make sure we spend the boss's money on the right product for us. Thanks, dave