
Max # of files allowed in directory under UFS & under VxFS (Veritas File System) in Solaris 10

Hello,

What is the maximum number of files allowed in a directory under UFS in Solaris 10? What is the maximum number of files allowed in a directory under the Veritas File System 5.0 in Solaris 10? I'd also like to know where this maximum is defined under UFS and under Veritas File System 5.0 in Solaris 10, and whether there is any way we could modify it.

Thanks,
Bill

0 Reply underh20 5/18/2010 10:10:18 PM

On 05/19/10 10:10 AM, underh20 wrote:
> What is the maximum number of files allowed in a directory under UFS
> in Solaris 10 ?

It depends.

> What is the maximum number of files allowed in a directory under
> Veritas File System 5.0 in Solaris 10 ?

Pass.

> I'd like to know where to locate this maximum number under UFS and
> Veritas File System 5.0 in Solaris 10. Is there any way that we could
> modify this maximum number ?

"df -o i" reports the inode data for a UFS filesystem. To increase the number, you have to recreate the filesystem. See df_ufs(1M) and newfs(1M).

-Ian Collins

0 Reply Ian 5/18/2010 10:34:05 PM
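In practice Ian's suggestion looks something like the sketch below; the mount point, device name and nbpi value are examples only, not taken from this thread, and newfs recreates the filesystem from scratch, so this is a backup-and-restore exercise:

    # How many inodes are in use / free right now?
    df -F ufs -o i /export/data

    # More inodes means rebuilding with a smaller nbpi (bytes per inode).
    # Back up the data first -- newfs wipes the existing filesystem.
    umount /export/data
    newfs -i 2048 /dev/rdsk/c0t0d0s6      # example device and nbpi
    mount /dev/dsk/c0t0d0s6 /export/data
    # ...then restore the data from the backup.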

On 18-May-2010, underh20 <underh20.scubadiving@gmail.com> wrote:
> What is the maximum number of files allowed in a directory under UFS
> in Solaris 10 ?

It depends on the size of the file system, as UFS uses inodes: the bigger the file system, the larger the number of inodes available. You can check on the inode usage by using "df -F ufs -o i". I know I've had 20k files in a single directory, so the limit for UFS is *quite* large; it's in *at least* the tens of thousands. I don't think ZFS has a limit on the number of files in a directory, but ZFS is definitely good to go :D Not sure about the others, I'm afraid.

Cya

0 Reply Hugo 5/18/2010 10:47:32 PM

underh20 wrote:
> What is the maximum number of files allowed in a directory under UFS
> in Solaris 10 ?
>
> What is the maximum number of files allowed in a directory under
> Veritas File System 5.0 in Solaris 10 ?
>
> I'd like to know where to locate this maximum number under UFS and
> Veritas File System 5.0 in Solaris 10. Is there any way that we could
> modify this maximum number ?
>
> Thanks,
> Bill

If there is a limit, it's huge! It's far more files than it would be reasonable to catalog in a single directory.

What problem are you trying to solve??

0 Reply Richard 5/19/2010 12:37:36 AM

In article <_dydncBiFoRGr27WnZ2dnUVZ_h2dnZ2d@giganews.com>,
"Richard B. Gilbert" <rgilbert88@comcast.net> wrote:

> If there is a limit, it's huge! It's far more files than it would be
> reasonable to catalog in a single directory.
>
> What problem are you trying to solve??

No one in this thread has mentioned that it's a singularly Bad Idea(tm) to put a large number of files in a single directory. A simple rm can take for freak'in ever to go linearly through the directory to find the file's inode in the directory file (it uses a linear search). On Solaris, there is an internal directory cache that gets filled, and name lookups get progressively longer (I may have the details of this wrong). AFAIK, sendmail has to read /var/mail every time it delivers mail to a user's inbox. You can do performance measurements with mail using a /var/mail with 10,000 mboxes and see if things get better or worse.

As a sysadmin, I had to clean up many lazy developers' "dirty" implementations of a project. These are the ones that leave lots of files in an application's defined temporary directory or, much worse, the system's /tmp. On Solaris, that sucks up swap space and memory for systems that stay up long periods of time, using swap rather than real file system space. Netscape used to do this with the local cache. Usually by the time I get involved the application has been running in production for a while and all of a sudden it starts slowing down, sucking up memory or taking forever to do certain things. The memory loss is from /tmp filling up. But once the damage is cleaned up, a cron job can be implemented to clean up after the lazy, evil developers.

Rather than put lots of files in a single directory, name them in a specific way such that you can put them in at least 2 levels of hashed subdirectories. At some point, find works better.

What's the magic number where things fall to crap? Dunno. More than 10,000 but probably less than 100,000. All this is pretty old, based on Solaris 7. If things have improved in Solaris 10, I'm sure someone will jump in here and correct me. I don't know if VxFS has this problem since, AFAIK, it doesn't use inodes and linearly searched directory files.

-DeeDee, don't press that button! DeeDee! NO! Dee...

[I filter all Goggle Groups posts, so any reply may be automatically ignored]

0 Reply Michael 5/19/2010 3:09:52 AM
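As a concrete illustration of the two-level hashed layout Michael suggests, something along these lines in Bourne shell would do it; the base path, file name and hash width here are invented for the example, not anything from this thread:

    # Derive two directory levels from a checksum of the file name, so no
    # single directory ever has to hold the whole collection.
    BASE=/export/appdata              # example location only
    f=report-20100519.dat             # example file name only
    h=`echo "$f" | cksum | awk '{ printf("%04x", $1 % 65536) }'`
    d1=`echo "$h" | cut -c1-2`        # first level, e.g. "3f"
    d2=`echo "$h" | cut -c3-4`        # second level, e.g. "a7"
    mkdir -p "$BASE/$d1/$d2"
    mv "$f" "$BASE/$d1/$d2/$f"

With 256 x 256 buckets, even a few million files works out to well under a hundred entries per directory, which keeps UFS's linear directory searches short.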

On Tuesday 18 May 2010 21:09, Michael Vilain (vilain@NOspamcop.net) opined:

> No one in this thread has mentioned that it's a singularly Bad Idea(tm)
> to put a large number of files in a single directory.
[...]
> Rather than put lots of files in a single directory, name them in a
> specific way such that you can put them in at least 2 levels of hashed
> subdirectories. At some point, find works better.

Yeah. Conventional wisdom is for multiple directories, each containing a limited number of related files (once upon a time this was 4096 files max, but today? Who knows?). Granted, we have terabyte+ hard drives even on our home boxes these days; that doesn't mean we necessarily should store giga- or tera-bytes of whatever in a single directory. This applies not just to Solaris but to any reasonably administered system. Just because you CAN do something doesn't mean you SHOULD do it.

Bob Melson

--
Robert G. Melson | Rio Grande MicroSolutions | El Paso, Texas

Nothing astonishes men so much as common sense and plain dealing.
                                                 Ralph Waldo Emerson

0 Reply Bob 5/19/2010 6:09:37 AM

Michael Vilain wrote:

> No one in this thread has mentioned that it's a singularly Bad Idea(tm)
> to put a large number of files in a single directory. A simple rm can
> take for freak'in ever to go linearly through the directory to find the
> file's inode in the directory file (it uses a linear search).
[...]
> All this is pretty old, based on Solaris 7. If things have improved in
> Solaris 10, I'm sure someone will jump in here and correct me.

On UFS the directory structure started out as a linked list, so it could grow without limits until it took up all of the inodes on the device. I think in some release it was switched to a self-balancing b-tree, but I never tracked down for sure when or if. On VxFS it started out as a self-balancing b-tree. In either case it's possible to fill the device's inode table before hitting a limit on files in a directory.

But it remains a bad idea if there's any way out of it. I once had to deal with ~70,000 little files on a disk, all in one directory. Some idiot developer left something running while she went on vacation. It took about three days to delete those files one at a time. It wasn't Solaris, but NO O/S that I know of would handle the situation very well. If I had to do it over again, I'd have the developer killed and then initialize the disk and restore from backup.

0 Reply Richard 5/19/2010 11:15:35 AM

Michael Vilain wrote:

> No one in this thread has mentioned that it's a singularly Bad Idea(tm)
> to put a large number of files in a single directory.
[...]
> Rather than put lots of files in a single directory, name them in a
> specific way such that you can put them in at least 2 levels of hashed
> subdirectories. At some point, find works better.

Right. It's *far* better to talk the developers into hashing into a directory tree any time there are 1000+ files in any one dir. There are libraries available to do that with just a library call, and it reduces the overhead considerably. Sometimes it's not possible to dictate to the developers, but at the very least open up a feature request ticket.

I had one case of a mount point with about 100K files in various dirs and one particular dir with 1.5 million plus files, growing by 1000+ per day by the time I was called in. I negotiated an aging policy, but the initial "find ... | xargs ..." expression to do the initial clean up ran for hours. I set it in cron daily and it ran for 15 minutes; I eventually switched it to run weekly.

There's another fun aspect of directories in UFS, VxFS and, for that matter, HFS and so on: they do not track depth because they are just implemented as a tree. Even with only one "file" per directory things can get bad. Once I got called in because an installation process had run all night on a developer's system and it was complaining about a full drive when he got back in the next morning. It turns out it was in a loop creating directories and then cd-ing into them. It ran until the mount point hit 100%. It was so deep that "rm -rf *" did a core dump before it got to the bottom. I ended up doing a loop like

    while true ; do cd * ; done

and waited until the shell ran out of stack space and it failed out of the loop! At that point I moved * to lost+found, returned to the top, deleted the older of the two trees and ran the loop again. I put the nested loop in a script and it ran for most of the day clearing out the directory chain.

0 Reply Doug 5/19/2010 4:18:09 PM
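The aging job Doug describes usually reduces to a single find invocation run from cron; the path, age threshold and schedule below are invented for illustration rather than taken from his system:

    # Delete files older than 30 days under the runaway spool directory.
    # Root crontab entry, weekly on Sunday at 03:00:
    #   0 3 * * 0 /usr/bin/find /export/app/spool -type f -mtime +30 | /usr/bin/xargs rm -f
    find /export/app/spool -type f -mtime +30 | xargs rm -f

The first run over a badly overgrown directory will still take hours, as he notes; once the backlog is gone, each subsequent run only has a few thousand new files to consider.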

On 2010-05-18, underh20 <underh20.scubadiving@gmail.com> wrote:
> Hello,
>
> What is the maximum number of files allowed in a directory under UFS
> in Solaris 10 ?

32767. See MAXLINK in sys/param.h.

> What is the maximum number of files allowed in a directory under
> Veritas File System 5.0 in Solaris 10 ?

32767. See MAXLINK in sys/param.h.

> I'd like to know where to locate this maximum number under UFS and
> Veritas File System 5.0 in Solaris 10. Is there any way that we could
> modify this maximum number ?

For vxfs, add "set vxfs:vx_maxlink=65534" to /etc/system and reboot. For UFS, I don't know that you can.

Ceri

--
That must be wonderful! I don't understand it at all.
                                              -- Moliere

0 Reply Ceri 5/19/2010 7:21:36 PM
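Applying Ceri's VxFS suggestion is just an edit to /etc/system followed by a reboot; the backup step and file name below are my own precaution, not something from the thread:

    # Keep a copy of /etc/system before touching it; a bad entry can leave
    # the machine unbootable (boot -a lets you point at a good copy).
    cp /etc/system /etc/system.pre_vxfs_tune
    echo 'set vxfs:vx_maxlink=65534' >> /etc/system
    grep vx_maxlink /etc/system      # confirm the line is in place, then reboot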

On 2010-05-19, Ceri Davies <ceri_usenet@submonkey.net> wrote:
> On 2010-05-18, underh20 <underh20.scubadiving@gmail.com> wrote:
>>
>> What is the maximum number of files allowed in a directory under UFS
>> in Solaris 10 ?
>
> 32767. See MAXLINK in sys/param.h.

MAXLINK seems to be the maximum number of (hard?) links to a file and also the limit on subdirectories, see Solaris Internals, Second Edition, Page 740-741, "ic_nlink". I'm still trying to figure out the max. number of files in a directory though.

Anyway, some people here wrote they've seen 70k+ files in a single directory, so maybe someone else can shed some light on this :)

0 Reply Stefan 5/19/2010 9:00:46 PM

On 2010-05-19, Stefan Krueger <stadtkind2@gmx.de> wrote:
> MAXLINK seems to be the maximum number of (hard?) links to a file and
> also the limit on subdirectories, see Solaris Internals, Second
> Edition, Page 740-741, "ic_nlink". I'm still trying to figure out the
> max. number of files in a directory though.

Not on UFS.

> Anyway, some people here wrote they've seen 70k+ files in a single
> directory, so maybe someone else can shed some light on this :)

Or I could, as I'd already looked before replying. From usr/src/uts/common/fs/ufs/ufs_dir.c:

    804  * Write a new directory entry for DE_LINK, DE_SYMLINK or DE_RENAME operations.
    805  * If tvpp is non-null, return with the pointer to the target vnode.
    806  */
    807 int
    808 ufs_direnter_lr(
    ...
    877         if (sip->i_nlink == MAXLINK) {
    878                 rw_exit(&sip->i_contents);
    879                 return (EMLINK);
    880         }

MAXLINK is defined in (and included from) sys/param.h as:

    #define MAXLINK 32767 /* max links */

Ceri

--
That must be wonderful! I don't understand it at all.
                                              -- Moliere

0 Reply Ceri 5/19/2010 9:14:48 PM

On 2010-05-19, Ceri Davies <ceri_usenet@submonkey.net> wrote:
> Or I could, as I'd already looked before replying. From
> usr/src/uts/common/fs/ufs/ufs_dir.c:
>
>     804  * Write a new directory entry for DE_LINK, DE_SYMLINK or DE_RENAME operations.
> [...]
>     877         if (sip->i_nlink == MAXLINK) {
>     878                 rw_exit(&sip->i_contents);
>     879                 return (EMLINK);
>     880         }
>
> MAXLINK is defined in (and included from) sys/param.h as:
>
>     #define MAXLINK 32767 /* max links */

"Write a new directory entry" -- directory != file, and this basically proves what I wrote, so thanks for that :-)

Anyway, to stop guessing I wrote a small shell script which just touches files (on Solaris 10, UFS). I made it stop at 50,000, I hope that's ok:

    $ ls | wc -l
    50001

So, I think the max. number of files in a directory is only limited by the number of free inodes.

HTH

0 Reply Stefan 5/19/2010 9:44:52 PM
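Stefan doesn't show the script itself, so here is a guess at what it might have looked like; the scratch directory is an example, and the 50,000 cut-off matches the count he reports:

    # Create files until we reach 50,000 or the filesystem refuses.
    cd /var/tmp/ufs-test || exit 1
    i=1
    while [ $i -le 50000 ]
    do
        touch file$i || break       # stop early on any error (e.g. out of inodes)
        i=`expr $i + 1`
    done
    ls | wc -l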

On 19-May-2010, Stefan Krueger <stadtkind2@gmx.de> wrote:
>     $ ls | wc -l
>     50001
>
> So, I think the max. number of files in a directory is only limited by
> the number of free inodes

Agreed, which is inadvertently determined by the size of the file system 8]

0 Reply Hugo 5/19/2010 11:14:09 PM

Stefan Krueger wrote:
> So, I think the max. number of files in a directory is only limited by
> the number of free inodes

I don't know what the maximum number of files that can be cataloged in a directory is. I do know that it's a very poor idea to put thousands or tens of thousands of files in one directory. Performance, to put it bluntly, will suck!

0 Reply Richard 5/19/2010 11:54:56 PM

Ceri Davies <ceri_usenet@submonkey.net> writes:
> On 2010-05-18, underh20 <underh20.scubadiving@gmail.com> wrote:
>>
>> What is the maximum number of files allowed in a directory under UFS
>> in Solaris 10 ?
>
> 32767. See MAXLINK in sys/param.h.

Not correct; that is the limit on the number of subdirectories inside a single directory. You can create millions of files inside a directory, but the performance (esp. for UFS) will be poor.

Casper

--
Expressed in this posting are my opinions. They are in no way related
to opinions held by my employer, Sun Microsystems.
Statements on Sun products included here are not gospel and may
be fiction rather than truth.

0 Reply Casper 5/20/2010 8:45:36 AM
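Casper's distinction is easy to demonstrate on a scratch UFS filesystem: it is the parent directory's link count that caps subdirectories, not plain files. A rough (and slow) sketch, with an example path:

    cd /var/tmp/linktest || exit 1
    n=1
    while mkdir d$n 2>/dev/null
    do
        n=`expr $n + 1`
    done
    echo "mkdir failed at d$n"     # on UFS this stops just short of MAXLINK (32767)
    ls -ld .                       # the link count of "." is now at its ceiling

A plain touch loop in the same directory, by contrast, keeps going until the filesystem runs out of inodes, which is what Stefan's 50,000-file test showed.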

Ceri Davies <ceri_usenet@submonkey.net> writes:
> On 2010-05-19, Stefan Krueger <stadtkind2@gmx.de> wrote:
>> MAXLINK seems to be the maximum number of (hard?) links to a file and
>> also the limit on subdirectories.
>
> Not on UFS. [...]
>
>     877         if (sip->i_nlink == MAXLINK) {
>     878                 rw_exit(&sip->i_contents);
>     879                 return (EMLINK);
>     880         }
>
> MAXLINK is defined in (and included from) sys/param.h as:
>
>     #define MAXLINK 32767 /* max links */

That's only for directories, not for files.

Casper

--
Expressed in this posting are my opinions. They are in no way related
to opinions held by my employer, Sun Microsystems.
Statements on Sun products included here are not gospel and may
be fiction rather than truth.

0 Reply Casper 5/20/2010 8:46:33 AM

In article <4bf4f6b0$0$22938$e4fe514c@news.xs4all.nl>,
Casper H.S. Dik <Casper.Dik@Sun.COM> writes:
> Not correct; that is the limit on the number of subdirectories inside
> a single directory. You can create millions of files inside a
> directory, but the performance (esp. for UFS) will be poor.

You just don't want to go there. Even if the filesystem and application handle it efficiently, waiting for ls(1) to sort a million files into alphabetic order still makes it an admin's nightmare when they have to dive in to see what's gone wrong, and things like rm {some shell expression} will blow up with too many arguments.

--
Andrew Gabriel
[email address is not usable -- followup in the newsgroup]
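Two coping tricks for a directory that has already grown huge, of the kind Andrew alludes to: skip the alphabetical sort when listing, and feed deletions through xargs rather than a single shell glob. The path is an example, and the pipeline assumes simple file names without embedded whitespace:

    ls -f /export/bigdir | wc -l        # count entries without sorting them
    ls -f /export/bigdir | head -20     # peek at a few names quickly

    # "rm /export/bigdir/*" can fail with "arg list too long"; this doesn't.
    cd /export/bigdir && ls -f | grep -v '^\.\.*$' | xargs rm -f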