Navin,
To see which clients are NFS-mounting filesystems from your server, execute:
# showmount -a
This will list all remote mounts. One caveat: the list is only up to date if the clients unmount their filesystems properly, so if a client is simply rebooted, its entry in /etc/rmtab will still exist. It is also worth looking in /etc/sm, which lists all hosts that have viewed or edited NFS-mounted filesystems.
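For example, to look at the underlying data directly (/etc/sm is typically a directory maintained by rpc.statd, hence the ls):

# cat /etc/rmtab
# ls /etc/sm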
Here is some information about changing the number of nfsd and biod daemons:
Choosing Initial Numbers of nfsd and biod daemons
Determining the best numbers of nfsd and biod daemons is an iterative process. Guidelines can give you no more than a reasonable starting point.
By default there are six biod daemons on a client and eight nfsd daemons on a server. The defaults are a good starting point for small systems, but should probably be increased for client systems with more than two users or servers with more than two clients. A few guidelines are as follows:
In each client, estimate the maximum number of files that will be written simultaneously. Configure at least two biod daemons per file. If the files are large (more than 32 KB), you may want to start with four biod daemons per file to support read-ahead or write-behind activity. It is common for up to five biod daemons to be busy writing to a single large file.
In each server, start by configuring as many nfsd daemons as the sum of the numbers of biod daemons that you have configured on the clients to handle files from that server. Add 20 percent to allow for non-read/write NFS requests (see the worked example after these guidelines).
If you have fast client workstations connected to a slower server, you may have to constrain the rate at which the clients generate NFS requests. The best solution is to reduce the number of biod daemons on the clients, with due attention to the relative importance of each client's workload and response time.
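As a worked example of the sizing rule above (the numbers are purely illustrative): suppose a server has three clients, each configured with 8 biod daemons for files on that server. The sum is 24 biod daemons; adding 20 percent gives 28.8, so starting with about 29 nfsd daemons on the server is reasonable.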
Tuning the Numbers of nfsd and biod daemons
After you have arrived at an initial number of biod and nfsd daemons, or have changed one or the other, do the following:
First, recheck the affected systems for CPU or I/O saturation with the vmstat and iostat commands. If the server is now saturated, you must reduce its load or increase its power, or both.
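For example, to take five samples at five-second intervals (the interval and count are arbitrary choices):

# vmstat 5 5
# iostat 5 5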
Use the command netstat -s to determine if any system is experiencing UDP socket buffer overflows. If there are overflows, use the command no -a to verify that the recommendations in Tuning Other Layers to Improve NFS Performance have been implemented. If they have been, and the system is not saturated, increase the number of biod or nfsd daemons.
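For example (the exact counter wording can vary between AIX levels, so the grep patterns are assumptions):

# netstat -s | grep -i "socket buffer overflows"
# no -a | grep udp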
Examine the nullrecv column in the nfsstat -s output. If the number starts to grow, it may mean there are too many nfsd daemons. However, this is less likely on this operating system's NFS servers than on other platforms, because not all nfsd daemons are awakened at the same time when an NFS request comes into the server. Instead, the first nfsd daemon wakes up, and if there is more work to do, it wakes the second nfsd daemon, and so on.
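For example, run the report twice a few minutes apart and compare the nullrecv figures:

# nfsstat -s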
To change the numbers of nfsd and biod daemons, use the chnfs command.
To change the number of nfsd daemons on a server to 10, both immediately and at each subsequent system boot, use the following:
# chnfs -n 10
To change the number of biod daemons on a client to 8 temporarily, with no permanent change (that is, the change happens now but is lost at the next system boot), use the following:
# chnfs -N -b 8
To change both the number of biod daemons and the number of nfsd daemons on a system to 9, with the change delayed until the next system boot, run the following command:
# chnfs -I -b 9 -n 9
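After any of these changes, you can confirm what is actually running with the SRC status command and a process count (the bracketed grep pattern keeps grep itself out of the count):

# lssrc -g nfs
# ps -ef | grep '[n]fsd' | wc -l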
Increasing the number of biod daemons on the client worsens server performance because it allows the client to send more requests at once, further loading the network and the server. In extreme cases of a client overrunning the server, it may be necessary to reduce the client to one biod daemon, as follows:
# stopsrc -s biod
This leaves the client with the kernel process biod still running.
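If you later want to restore the full complement, restarting the subsystem through the SRC should bring back the configured number of biod daemons:

# startsrc -s biod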
PSD
IBM Certified Specialist - AIX V4.3 Systems Support
IBM Certified Specialist - AIX V4 HACMP