
Criteria for redundancy


tuliphead

IS-IT--Management
Aug 27, 2004
I am about to configure a new farm.

Nothing big, but redundancy is extremely important: users must be able to reach a server in the farm even if one of the terminal servers goes down for some reason.

So my question is:
What do I need to have extra in mind here?

- The datastore will reside on a single MS SQL server with its own redundancy and backup/restore routines
- Do I need to configure every server in the farm as a data collector? And what is the easiest way of doing this during install (or after)?
- Are there any other roles that I should spread among multiple servers to maintain redundancy?

Some info about the farm:
- Less than 5 servers
- All servers in the same farm
- All servers in the same zone (geographically located at the same spot)
- All servers will distribute the exact same set of applications at any given time
- If this environment is offline, all hell breaks loose ... so I need to maximize my effort to make this environment as redundant as possible
- If any number of servers go down, the remaining servers must offer full functionality to the users

Any suggestions or tips to take into consideration in my planning?
 
My take.

Have them all as data collectors. That should do it for you.


Cheers
Scott
 
So there are no other roles to take into consideration besides the data collector role?
 
You've already got the data store sorted - SQL.
Your data collector traffic is not going to be that great.
I assume your data is going to live on file and print servers, so you should be fine.

Other things you could add for a few extra pounds are teamed NICs. If you have HPs, you could use their Rapid Deployment: image the servers and you can replace one quickly. Again, all a function of cash.

Five servers is probably too few to be looking at the Enterprise edition of Presentation Server, though.



Cheers
Scott
 
One quick follow up:
Let us conclude that the data collector role is the only server role to distribute among my Citrix servers to avoid any single-point-of-failure situations.

How do I define that every server should operate as a data collector? As far as I can remember from my test installations so far, I never get asked about the data collector role. I guess the first server in the farm/zone will be operating in this role, so my question is how to define the other servers as data collectors for the data store?

 
Right-click at the farm level on the left-hand side of the CMC, then click Zones. You can then set preferences.


Cheers
Scott
 
Um, not sure you can have more than one Data Collector.

According to Brian Madden: "The “chosen” MetaFrame XP server that communicates with other zones is known as the Zone Data Collector (ZDC). There is only one ZDC per zone, and every zone must have one."

As you've one zone, you can ONLY have one DC. However, you don't need to worry too much about this issue, as, in the event of the Data Collector being unavailable, an election is forced amongst the other servers within the Farm and a new one will be assigned. You have options within the Zones setting to set most preferred, preferred, no preference and not preferred, if memory serves me correctly. For the Farms I manage, I have one Most preferred (which IS the DC) and one other server set to preferred. Everything else is set to no preference.


For resilience in the environment you've mentioned, other than the dual NICs (always a good idea), your other consideration should be how you set up your clients.

Basically, each Citrix server's IP address should be set up in DNS under the name ICA. Each client should be set up to use ICA as the name it uses to find a Citrix server. As long as you configure your servers to respond to ICA requests, and your Farm so that the DC responds too (this helps legacy apps and older systems to connect), you've avoided a single point of failure. I've got a situation in one of the farms I've just taken over where every client is configured to point to the IP address of a single server! That server goes down and, effectively, the entire Farm is down. Not ideal! I'm making the changes mentioned here to try and remove this point of failure.
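To make that concrete, here is roughly what the records could look like in a BIND-style zone file. The server names and 10.0.0.x addresses here are made up; substitute your own:

[code]
; One A record per Citrix server, all under the shared name "ica".
; The DNS server hands the addresses back in rotating order (round robin).
ica    IN  A    10.0.0.11    ; CTX01
ica    IN  A    10.0.0.12    ; CTX02
[/code]

In Windows DNS the equivalent is simply adding several A records with the same name, one per server in the farm.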

Other than that, make sure you don't have any apps on just one Citrix server, and make sure you control printer drivers, so that you decide what's installed and what isn't rather than having the server install whatever a user connects with. Then you should be well on your way to a highly available server farm.

hope this helps a bit.

Chris
 
edlcsre:
Thanks for the input. I found out that I can't have more than one DC at a time in a Citrix zone, but I can define preferences between servers so that another server takes over the DC role if the main server goes down.

Redundancy:
I will have my users connect to this farm through the Citrix web interface (the Neighbourhood Agent will not be used at all). I will define load balancing between my servers, so any new logons will (hopefully) be directed to the server with the best capacity at that moment. The farm will not consist of that many servers, but the applications that will run on them are "mission-critical" for a bunch of people. I have the app/printer thing under control, but I need to experiment with the redundancy of this farm and the load balancing before I put anything into the production environment.

Your point about the DNS sounds interesting. I will try it out asap.

:)
 
Yes, the ica entry in DNS for each Citrix server is a must. It takes advantage of Microsoft's alleged load balancing, which is really just a by-product of how DNS round robin works.
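If you want to see that by-product for yourself, a quick sketch like this (Python, using the hypothetical "ica" record name from above) shows the address order rotating between lookups. Bear in mind that client-side resolver caching can hide the rotation:

[code]
import socket

# Resolve the shared "ica" record a few times. With round robin
# enabled, the DNS server rotates the order of the returned
# addresses on each query.
for attempt in range(3):
    _, _, addresses = socket.gethostbyname_ex("ica")
    print(f"lookup {attempt + 1}: {addresses}")
[/code]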

To be honest, I don't see too much to mess with in your load balancing; it pretty much works out of the box if your machines are similar.


Cheers
Scott
 
Just one last question:

If I configure the DNS as you guys suggested, this "ICA" or "Citrix" or whatever I choose to name the record ... is this what I define as the "server" in the web interface config? Usually you would list all the servers in the farm there ... but should I just refer to the DNS record I created instead?
 
No, it is the client that does the lookup for "ica", and the record should be named "ica". You can list the servers as normal in Web Interface.


Cheers
Scott
 
The "ICA" name is related to the ICA browser service? Am I right? Well, I have been playing around with this for a while now ... but I can´t say I am 100% impressed.

With just two servers at the moment ... the DNS ICA record works in a round-robin kind of way, returning a mix of addresses on each logon attempt. So if I expect complete redundancy without any user-visible unavailability, it does not seem to be a full solution to the problem: with one server down it works in only about 50% of the logons tried ... and users must be encouraged to try another logon (and possibly another one) if the first one fails.
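To illustrate what I mean: to get around that 50%, the connecting side would have to walk the whole list of addresses behind the record itself instead of giving up on the first dead one. A rough Python sketch of the idea (using the "ica" name from above and the default ICA port 1494), which plain round robin obviously does not do for me:

[code]
import socket

ICA_PORT = 1494  # default ICA listener port

def find_live_server(name="ica", port=ICA_PORT, timeout=3.0):
    """Try every address behind the round-robin record in turn.

    A client that walks the whole address list only fails when every
    server is down, instead of failing whenever its first pick is dead.
    """
    _, _, addresses = socket.gethostbyname_ex(name)
    for addr in addresses:
        try:
            with socket.create_connection((addr, port), timeout=timeout):
                return addr  # this server accepted the connection
        except OSError:
            continue  # dead or unreachable, try the next address
    raise ConnectionError(f"no server behind '{name}' is reachable")
[/code]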

Isn't the web interface itself smart enough to detect when a server is unavailable and route users down the list to the next server? It does not seem to work that way. Am I forgetting something?


 
Note: I will *only* be using the web interface for this farm. No manual configuration will be done on the clients, and Citrix Neighbourhood will not be used either. Just a plain login to the web interface to reach N published applications.
 
I guess the only way to make a published application really redundant is to set up a cluster on the farm.

The DNS tip seems relevant only when using Citrix applications via the ordinary Citrix Neighbourhood (the old-fashioned desktop angle). When using the web interface, it will try the servers in its list and you get the same effect. Even if you set up a DNS entry for every server (round-robin style functionality) ... it will only work in about 50% of the logons in an environment with just two nodes. In an environment with many nodes the failure percentage will drop ... but it will always fail for some users eventually (that is, if *one* node is down).

Comments?
 
Hi tuliphead.

I thought there was a way to configure Web Interface so that when it finds a server down, it stops trying to connect to that server for a certain period of time. I have vague memories of this: when I was doing some testing and we took a server down, we couldn't connect to it after bringing it back up, because Web Interface didn't try that 'down' machine again until that amount of time had passed. I could, of course, be getting confused with something else, but I'm pretty sure there is a mechanism within Web Interface for dealing with a downed server, so that not every nth user (where n is the number of servers in your farm) gets an error.

Hopefully someone else will know exactly what I am talking about!

Hope this at least points in the right direction.

Chris
 
edlcsre:
You are correct! There is a setting called "Bypass any failed server for N minutes", which seems to skip any server that can't be reached for N minutes.

I am still in a test phase with this ... but all my testing so far has uncovered rather unstable logons when one of my servers is down. Whether that is related to the "Bypass any failed server for N minutes" setting or not, I do not know ... but users won't get logged on the first time they try ... it usually goes okay on try two or three.
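For anyone testing the same thing, here is a rough Python sketch of how I understand the bypass behaviour (the CTX01/CTX02 server names are made up):

[code]
import time

BYPASS_MINUTES = 10   # mirrors "Bypass any failed server for N minutes"
_failed_at = {}       # server name -> time of the last failed attempt

def candidates(servers):
    """Return the servers not currently inside the bypass window."""
    now = time.time()
    live = [s for s in servers
            if now - _failed_at.get(s, 0) >= BYPASS_MINUTES * 60]
    return live or servers  # if everything is bypassed, try them all anyway

def record_failure(server):
    _failed_at[server] = time.time()

# CTX01 fails, so new logons go to CTX02 for the next ten minutes.
farm = ["CTX01", "CTX02"]
record_failure("CTX01")
print(candidates(farm))  # -> ['CTX02']
[/code]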
 