Guest_imported
New member
Our 4,700 Windows 95 users run ccMail v8 clients, and our DB8 post offices sit on NetWare 4.11 and 5.1 servers with both traditional and NSS volumes. We of course have the usual ccMail routers and hubs to carry and deliver mail between POs and hubs. Since around May we have had an overwhelming number of PO locks: locks held open by a user that prevent others from writing, both permanent and slow (meaning the lock never goes away unless we clear the user, or takes an unusually long time to clear). This has caused havoc company-wide.

We have Lotus on site and have reviewed everything from the stove to the kitchen sink: the PC-level layers (ccMail client setup, NetWare Client, other apps, network cards), the Ethernet cabling, routers and hubs, NetWare file server patches, etc. In the last 3 weeks we have seen a tremendous reduction in help calls, down 72%, and far fewer PO lock issues. Many situations require manual intervention, and some clear themselves in time. Usually about 29 of 40 POs are affected per week; not always the same ones, though many repeat. We've started doing purges and maintenance more often, at least every two days, with 30-day aged purges to reduce PO sizes, which run from around 50MB to 500MB, some at 1GB or 2GB, but none ever reaching the 4GB limit. We have about 50-200 users per PO.
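In case it helps anyone compare notes: below is a minimal sketch of the kind of stale-lock sweep we run against each PO directory before deciding to clear a user. The "*.LCK" file pattern, the server paths, and the 15-minute threshold are all my own placeholders for illustration, not the actual ccMail DB8 internals, so treat it as a rough outline only.

    import glob
    import os
    import time

    def stale_locks(po_dir, max_age_minutes=15):
        """Return (path, age_seconds) for lock files in po_dir held
        longer than max_age_minutes. Pattern is a placeholder."""
        cutoff = time.time() - max_age_minutes * 60
        stale = []
        for path in glob.glob(os.path.join(po_dir, "*.LCK")):  # assumed name
            mtime = os.path.getmtime(path)
            if mtime < cutoff:
                stale.append((path, time.time() - mtime))
        return stale

    if __name__ == "__main__":
        # Example PO paths; ours are NetWare volumes mapped on the admin PC.
        for po in ["//NW411SRV/MAIL/PO1", "//NW51SRV/MAIL/PO2"]:
            for path, age in stale_locks(po):
                print("%s held for %d min" % (path, age // 60))

We run a sweep like this on a schedule; anything past the threshold gets flagged for a human to look at rather than cleared automatically.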
While we are statistically monitoring the CCNFT logs and have enabled LAP to catch culprit users or routers that may be locking POs, we cannot statistically tie this to anything specific so far. Back in May we updated from ccMail Client 6 to 8 and the NetWare Client to v3.21; our NJ headquarters remained on NW 4.11 servers, while our NY HQ upgraded to NW 5.1 with Cluster Services and NSS volumes. I don't know of any significant network topology changes, but we use ELANs and have a fairly complex network: Ethernet 10/100, some TR/16 left in a few locations, and beyond this NY/NJ LAN a worldwide WAN to the major continents. We've even changed PC-based client settings to try to keep the OS and related layers from interfering with ccMail client connectivity to the POs.
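For the statistical side, our tallying amounts to something like the following sketch, assuming the LAP/CCNFT output can be reduced to lines carrying a PO name and a user name. I don't have the exact log layout in front of me, so the line format and regex here are assumptions, not the real LAP format.

    import collections
    import re
    import sys

    # Assumed line shape: "10/24 07:15 LOCK PO=NYPO03 USER=JSMITH"
    LINE = re.compile(r"(\d+/\d+)\s+(\d+:\d+)\s+LOCK\s+PO=(\S+)\s+USER=(\S+)")

    def tally(lines):
        """Count lock events per user and per PO."""
        by_user = collections.Counter()
        by_po = collections.Counter()
        for line in lines:
            m = LINE.search(line)
            if m:
                _, _, po, user = m.groups()
                by_user[user] += 1
                by_po[po] += 1
        return by_user, by_po

    if __name__ == "__main__":
        with open(sys.argv[1]) as f:
            by_user, by_po = tally(f)
        for name, n in by_user.most_common(10):
            print("user %-12s %d locks" % (name, n))
        for name, n in by_po.most_common(10):
            print("PO   %-12s %d locks" % (name, n))

So far the counts are spread too evenly across users and POs to point at a single culprit, which is exactly the problem.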
In summary, we still have some problems, usually 1 or 2 lockups or slowdowns a day, but this AM we had 5 POs lock in a 12-hour period, and we suspect network bandwidth spikes during nightly backups and the early-AM login rush as a possible cause. Since 10/24 at 12:30 PM we've had LAP turned on across all POs to capture the locking information, while watching LAP itself for adverse effects.
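To test the backup-window and login-rush theory, the plan is to bucket lock events by hour of day and see whether the counts pile up overnight and in the early AM. A rough sketch of that, again assuming an hour can be pulled out of each LAP log line and that "LOCK" marks the events of interest:

    import collections
    import re
    import sys

    HOUR = re.compile(r"\b(\d{1,2}):\d{2}\b")

    def locks_per_hour(lines):
        """Histogram of lock events keyed by hour of day (0-23)."""
        hist = collections.Counter()
        for line in lines:
            if "LOCK" in line:  # assumed event keyword
                m = HOUR.search(line)
                if m:
                    hist[int(m.group(1))] += 1
        return hist

    if __name__ == "__main__":
        with open(sys.argv[1]) as f:
            hist = locks_per_hour(f)
        for hour in range(24):
            print("%02d:00  %s" % (hour, "#" * hist[hour]))

If the bars cluster around the backup window and the 8-9 AM login rush, that would support the bandwidth theory; a flat spread would point us back at the client or server layers.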
If anyone has had these kinds of ccMail lock and delay headaches and been able to solve them, would you please let me know?
Many thanks,
Peter