I do this on HACMP clusters (storage provided by ESS). It covers the rotating-cluster case where the standby server is an NFS client of the primary during normal operation but has to take over those filesystems and become the NFS server when a failover occurs. The method I use is:
1) On the primary server create the filesystems with unique names (replace filesystem1 & 2 with whatever you prefer):
/NFS/filesystem1
/NFS/filesystem2
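For reference, a minimal sketch of what the filesystem creation might look like; the shared volume group name (sharedvg) and the sizes are assumptions, not part of the method:

crfs -v jfs -g sharedvg -m /NFS/filesystem1 -a size=2097152   # size in 512-byte blocks (1 GB here)
crfs -v jfs -g sharedvg -m /NFS/filesystem2 -a size=2097152
mount /NFS/filesystem1
mount /NFS/filesystem2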
2) Import the volume group definitions onto the standby server.
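In sketch form, assuming the volume group is called sharedvg and shows up as hdisk4 on the standby (both assumptions; check lspv output on your own systems):

varyoffvg sharedvg            # on the primary, so the standby can read the VGDA
importvg -y sharedvg hdisk4   # on the standby
chvg -a n sharedvg            # on the standby: no auto-varyon at boot, HACMP controls that
varyoffvg sharedvg            # on the standby
varyonvg sharedvg             # back on the primary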
3) On the primary server create an /etc/exports file (replace the host names below with your real hostnames):
/NFS/filesystem1 -root=primary:standby:other_systems,access=primary:standby:other_systems
/NFS/filesystem2 -root=primary:standby:other_systems,access=primary:standby:other_systems
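To sanity-check the file on the primary you can export it and list the result (this is just for verification during normal operation; at failover HACMP does the exporting via the resource group):

exportfs -a            # export everything listed in /etc/exports
showmount -e primary   # confirm what is being exported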
4) Create mount points for NFS mounts on all systems:
mkdir -p /filesystem1
mkdir -p /filesystem2
5) Set up the HACMP resource group and add the /NFS/filesystem names to the 'Filesystems/Directories to Export' field.
Add the filesystems and the mount points to the 'Filesystems/Directories to NFS Mount' field in the format mountpoint;source, i.e.,
/filesystem1;/NFS/filesystem1 /filesystem2;/NFS/filesystem2
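So, pulling together the values from the examples above, the two resource group fields end up as:

Filesystems/Directories to Export      /NFS/filesystem1 /NFS/filesystem2
Filesystems/Directories to NFS Mount   /filesystem1;/NFS/filesystem1 /filesystem2;/NFS/filesystem2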
6) Synchronise the resource group to the cluster.
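If you do this through smit the entry point is smitty hacmp; the exact menu wording varies between HACMP releases, so take this only as a rough pointer:

smitty hacmp
  -> Cluster Configuration
    -> Cluster Resources
      -> Synchronize Cluster Resources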
On the standby node you have to perform the mounts/unmounts yourself as part of the application scripts, because HACMP won't do any NFS work until run level 'a' is reached, and in this configuration run level 'a' is only reached on the standby node when a failover takes place. You also need to make sure /etc/exports is copied to a location on shared disk so that the standby can use it after the failover; a sketch of both pieces follows.
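As an illustration only (the hostname 'primary' and the shared-disk copy location are assumptions), the application start script on the standby would do the NFS mounts:

mount primary:/NFS/filesystem1 /filesystem1
mount primary:/NFS/filesystem2 /filesystem2

the stop script run at takeover would undo them:

umount /filesystem1
umount /filesystem2

and you'd keep a copy of the exports file on shared disk, restoring it on the standby once HACMP has mounted the shared filesystems after takeover:

cp /etc/exports /NFS/filesystem1/exports.copy    # on the primary, whenever exports change
cp /NFS/filesystem1/exports.copy /etc/exports    # on the standby, after takeover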
If your situation is simpler (i.e., your standby node is a true idle standby with nothing running on it until a failover occurs), then the setup is easier: you don't need the mount point to have a different name from the source, and if it's a cascading cluster you don't have to worry about scripting the NFS mounts/unmounts at all; just do it through HA.