
Calculating overhead for filesystems

Status
Not open for further replies.
Apr 13, 2004
I need to know how to figure the overhead for filesystems in Solaris 9 using DiskSuite. For example, if someone says they need a 1GB filesystem and I create a 1GB slice on an EMC disk, they may get only 750MB of usable space. Or if someone wants a 1GB filesystem and I create a 1.2GB slice, 1.15GB may end up available.

Either way, the filesystem ends up too small, or I overshoot and waste disk space. In AIX, you work in 512-byte blocks, and I have a table that lets me figure the filesystem size in gig, meg, or kilo, use different partition sizes (4, 8, 16, etc.) to calculate the correct size, and it also shows the number of 512-byte blocks.

I also have a table for creating filesystems that need to be X in size, which gives the NBPI and fragment sizes to use. Is there a table for Solaris/DiskSuite that I can use for this purpose?
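As a rough rule of thumb, the two big overheads (the minfree reserve plus the inode table) can be estimated like this. The figures below are assumptions based on newfs defaults and 128-byte inodes, not exact UFS accounting:

```shell
# Rough usable-space estimate for a UFS slice. Assumed inputs:
# a 2.2GB (2200MB) slice, newfs defaults of -m 10 (minfree) and
# -i 2048 (bytes per inode), and 128-byte inodes. This ignores
# superblock/cylinder-group metadata, so treat it as a ballpark.
SLICE_MB=2200
MINFREE=10          # percent reserved by newfs -m
NBPI=2048           # bytes of data space per inode (newfs -i)
INODE_PCT=$(( 128 * 100 / NBPI ))   # inode table as a percent
USABLE_MB=$(( SLICE_MB * (100 - MINFREE - INODE_PCT) / 100 ))
echo "${USABLE_MB}MB usable of ${SLICE_MB}MB"
```

With those defaults this comes out to about 1848MB usable from a 2200MB slice, which matches the "ask for 2GB, get less" symptom described above.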

Thanks for answering these 2 questions.
 
By default, Solaris sets aside 10% of a partition for disk maintenance. I am pretty sure this number can be modified.
 
Hmm, a UFS partition is a tricky thing, as the newfs defaults were chosen for ancient, small hard drives.

newfs -m 3 -i 2048 /dev/rdsk/<whatever>

The key in that is the -m 3, which is the percentage of the disk reserved for maintenance (minfree).

A good guide is to take the largest single file size you expect, then round up.

Default is 10%, which on a 73GB drive is a whopping 7.3GB.

However, there is another problem: inode density. By default, newfs allocates one inode for roughly every 2KB of data space (that's the -i 2048 switch). These days, on large disks, this wastes a lot of space by creating loads of inodes that aren't really necessary.

You can go for much larger numbers, but don't forget this caps the NUMBER of files the filesystem can hold: if the disk is for lots of small files, a SMALLER -i (more inodes) could be the safer choice. Your call really.
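To put numbers on that, here is a small sketch assuming 128-byte UFS inodes and the 73GB disk mentioned above; the figures are illustrative, not exact newfs output:

```shell
# Space consumed by the inode table on a 73GB slice at two
# inode densities (newfs -i values). Assumes 128-byte inodes.
DISK_BYTES=$(( 73 * 1024 * 1024 * 1024 ))
for NBPI in 2048 8192; do
  INODES=$(( DISK_BYTES / NBPI ))
  TABLE_MB=$(( INODES * 128 / 1024 / 1024 ))
  echo "-i ${NBPI}: ${INODES} inodes, ~${TABLE_MB}MB of inode table"
done
```

Going from -i 2048 to -i 8192 on a disk that size reclaims roughly 3.5GB that would otherwise sit in unused inodes.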

Sorry for not being able to supply the exact formulas,

Matthew



The Universe: God's novelty screensaver?
 
Thanks for answering.

Ten percent seems awfully high; on AIX the filesystem overhead is 4%. Is it advisable to use -m to restrict how much is reserved? How do you decide between 3%, say 7%, or just leaving it at 10%?

Is the size of the disk taken into account? Say a 73GB disk vs. a 36GB disk vs. an 8GB disk, and so forth?
 
Well, the -m parameter controls that overhead, and it's set at 10% by default. The thing is, you need it to be bigger than the largest file you are likely to store.

So if you are storing newsgroup posts, it should be really small: -m 2 or even -m 1 (2% and 1%).

If you are likely to put an ISO file (600MB) on the disk, calculate what percentage of the disk that is, then round up.

And the size is taken into account.
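That round-up rule can be written down directly; the 600MB ISO and 73GB disk are just the example figures from above:

```shell
# Pick a minfree percent that covers one largest-expected file,
# rounded up to the next whole percent (ceiling division).
FILE_MB=600        # largest single file expected (e.g., an ISO)
DISK_MB=73000      # disk size, using 1GB = 1000MB for simplicity
M=$(( (FILE_MB * 100 + DISK_MB - 1) / DISK_MB ))
echo "use: newfs -m ${M} ..."
```

For these inputs that rounds 0.8% up to -m 1.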

Matthew

The Universe: God's novelty screensaver?
 
I suggest looking at the manpage of tunefs:
Code:
     -m minfree
           Specifies the percentage of space held back from normal
           users; the minimum free space threshold. This value can
           be set to 0; however, up to a factor of three in
           throughput will be lost over the performance obtained
           at a 10% threshold. Note: If the value is raised above
           the current usage level, users will be unable to
           allocate files until enough files have been deleted to
           get under the higher threshold.

     -o [space|time]
           Changes optimization strategy for the file system.

           space: conserves space

           time: attempts to organize file layout to minimize
           access time

tunefs -m 1 -o time ...

AFAIK, Solaris 9 generates filesystems with a minfree of 1% and time optimization by default (depending on the filesystem size).

Best Regards, [url=http://www.daFranze.de]Franz[/url]
--
Solaris System Manager from Munich, Germany 
I used to work for Sun Microsystems Support (EMEA) for 5 years
 
Thanks for the answers. My problem, as I mentioned, is that of too much or too little space when creating a filesystem.

There are times when I add 10% (e.g., for a 2GB filesystem I partition a 2.2GB slice) expecting to have 2GB available, and sometimes I end up with 2.1GB, other times with far less usable disk space.

I am surmising this is due to the size of the disks I start with, which range from 8GB to 72GB. In AIX, the disk size doesn't matter, because JFS uses 4% every time; but this isn't the case with DiskSuite on Solaris. Also, I know the way the company has decided to implement the SAN disk is probably incorrect: they take an 18GB disk and slice it up into 8 partitions. Or is there a better way to allocate disk?
 
Considerably. If you are using Solaris 9, you have full access to the Sun LVM subsystem. In that case, create single partitions as described above.

There is not a bad chapter on it in:


Free download of the book as PDF.

The LVM system takes you from having individual disks to creating larger logical volumes from groups of disks. It also allows RAID configurations.
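A minimal command sketch of that workflow (the device names are placeholders; this assumes DiskSuite/Solaris Volume Manager is installed and state database replicas already exist):

```shell
# Concatenate two hypothetical slices into one logical volume d10,
# then build a UFS filesystem on the metadevice with a 1% minfree.
metainit d10 2 1 c1t0d0s0 1 c1t1d0s0
newfs -m 1 /dev/md/rdsk/d10
mount /dev/md/dsk/d10 /export/data
```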

Matthew

The Universe: God's novelty screensaver?
 