At the company I work for, Linux has a problem with filesystems that go read-only. Suffice it to say, we opened a ticket with Red Hat and they have never given us a solution, so it continues to happen.

We also have RHEL NFS clients running on Xen virtual machines, and when the NFS server goes down those clients end up with stale NFS handles, even though our physical RHEL servers don't exhibit the problem. Again, Red Hat has never provided a solution.

And if you use SRDF to fail over to another data center and make the devices there read-write, then return to your primary data center, flip the primary devices back to read-write and the secondary back to read-only, you may get errors like we do. Yet again, in over two years Red Hat has never provided a solution, or even an answer as to why it occurs.
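For what it's worth, the workarounds we fall back on look roughly like this (the mount points are placeholders, not our real paths):

    # filesystem flipped to read-only: remount read-write once the
    # underlying device error clears (run fsck first if in doubt)
    mount -o remount,rw /data

    # stale NFS handle on a Xen guest: lazy-unmount the export, then remount
    umount -l /mnt/nfs
    mount /mnt/nfs

None of which addresses the root cause; it just buys time until the next occurrence.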
You may have better luck with xSeries hardware, but at my company we use two other vendors and have frequent hardware problems. Some of the server crashes we see could have been caught if the hardware and OS were as tightly coupled as AIX is to POWER or Solaris is to SPARC.
I have been trying to get SystemTap (the Linux answer to DTrace) running on RHEL to troubleshoot a problem. It is a manual affair: the packages have to match the running kernel exactly, so every time the server gets a kernel upgrade you have to install matching files again by hand. To make matters worse, the procedure is different for RHEL 4, 5 and 6!
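The dance after every kernel upgrade looks roughly like this (RHEL 5 package names; the debuginfo channel has to be enabled first, which is its own hassle):

    # SystemTap needs devel and debuginfo packages that match the
    # running kernel exactly, so this must be redone per upgrade
    yum install systemtap kernel-devel-$(uname -r) \
        kernel-debuginfo-$(uname -r) kernel-debuginfo-common-$(uname -r)

    # sanity check that probes compile and fire
    stap -v -e 'probe vfs.read { printf("vfs.read hit\n"); exit() }'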
Linux is just a kernel, not a complete operating system where userland and kernel are engineered together. Consequently, when Red Hat or Novell decide to add some new feature, versioning problems between releases are almost guaranteed. Red Hat also likes to change paths, like the nonsense where they want to move everything from /usr/sbin and /usr/bin into one location, which of course will break all of our in-house utilities. Or the scheme Red Hat is cooking up to replace syslog with a new utility that writes binary log files you can't grep or awk.
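Everything we do against plain-text logs today is a one-liner; with binary logs every one of these habits breaks:

    # the kind of thing we run daily against /var/log/messages
    grep 'Remounting filesystem read-only' /var/log/messages
    awk '/kernel:/ && /I\/O error/' /var/log/messages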
Oracle is adding Ksplice to their Unbreakable Linux in an upcoming version, from what I read. Yawn. AIX has had multibos since 2006 or 2007. There is nothing for upgrading Red Hat or SLES the way you can with Solaris Live Upgrade or AIX's alternate disk install. The Linux LVM is nothing like the AIX LVM, and in fact, as unwieldy as the Solaris Volume Manager is, it is still better than the Linux LVM.
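To make the gap concrete, here is roughly what a rollback-friendly upgrade looks like with AIX's alternate disk install (hdisk1 is a placeholder for a spare disk):

    # clone the running rootvg to a spare disk; updates can then be
    # applied to the clone while the live system keeps running
    alt_disk_copy -d hdisk1

    # boot from the clone; if it misbehaves, point the bootlist back
    # at the original disk and reboot to roll back
    bootlist -m normal hdisk1
    shutdown -Fr

There is simply no equivalent on Red Hat or SLES.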
There are no disk management utilities in Red Hat like rmdev or rmpath on AIX, or luxadm, cfgadm and devfsadm on Solaris. If you need to remove multiple disks from a server, or add multiple disks to one, on Red Hat it can take an hour or longer depending on how many LUNs you have to configure.
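Compare cfgadm or luxadm to the sysfs incantations you end up scripting on RHEL (the host numbers and device names below are examples):

    # make new LUNs visible: force a wildcard rescan on every HBA
    for host in /sys/class/scsi_host/host*; do
        echo '- - -' > "$host/scan"
    done

    # remove a disk cleanly before unmapping the LUN
    echo 1 > /sys/block/sdc/device/delete

Multiply that by dozens of LUNs and their multipath entries, and the hour-plus estimate is generous.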
AIX and Solaris are years ahead of Red Hat and SLES in features. Sometimes Red Hat likes to mention they are adding this or that, and as I alluded to earlier, it has usually been in AIX and Solaris for up to a decade already. Red Hat's Cluster Suite, which they position as their answer to PowerHA, is a joke. The AIX kernel has absorbed some of the HACMP/XD components and still offers more functionality than Red Hat Cluster Suite. GFS on Red Hat is broken and we have been removing it wherever we can. Again, Red Hat has provided no solutions to the GFS problems. We have two- and three-node clusters that are constantly rebooting because of cman or some other cluster problem, and like a broken record there are no answers from Red Hat.
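When the reboots start, the ritual is always the same: poke at what little state RHCS exposes (RHEL 5 Cluster Suite commands):

    # about all the visibility you get while nodes fence each other
    clustat            # member and service state
    cman_tool status   # quorum and membership details
    cman_tool nodes    # per-node join state

and then open another ticket that goes nowhere.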
Linux is technically inferior to AIX and Solaris. For one thing, AIX and Solaris were engineered, and are still developed, by professional engineers. Some will point out that HP and IBM, among others, commit to the Linux kernel, and while this is true, they all have competing interests; do you really think that leads to a stable kernel?
There isn't one area of Linux that I can point to and say it is better than AIX or Solaris.
You may not have any influence on the decision, much as I have none at my company, but what I have given you above is truthful and accurate and should be considered in any decision.