If you run this command,
Code:
grep ARG_MAX /usr/include/linux/limits.h
you'll see the line that defines the constant ARG_MAX. The Linux kernel is built against a similar header that defines the same constant; on my system it's 131072. The definition is accompanied by the comment:
Code:
# bytes of args + environ for exec()
ARG_MAX defines the amount of memory the kernel sets aside for the arguments and environment of an exec call. This amount can't be changed at runtime (not on a stock Linux kernel, anyway; I think some other Unixes can tune it at runtime).
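You don't have to grep header files to find the value in effect on your system; getconf will report it (this is POSIX, so it should work on other Unixes too):
Code:
getconf ARG_MAX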
As the header comment says, this limit applies to the total size in bytes of all the arguments, and it also counts the environment strings (e.g. "HOME=/home/chip" and "TERM=xterm").
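One way to see the environment counting against the limit: export a variable nearly as big as ARG_MAX, and a command that worked a moment ago should fail. This is only a sketch for a kernel with the fixed 131072-byte limit like mine; the exact threshold depends on what else is in your environment:
Code:
export BIG=$(head -c 130000 /dev/zero | tr '\0' 'x')
ls /*        # should now fail: Argument list too long
unset BIG    # and succeed again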
A command with a wildcard, like "ls *", when run in a directory containing a large number of files, can expand to an argument list longer than ARG_MAX. The shell can allocate as much memory as it needs to hold the expansion, but that's useless if the kernel won't accept an argument list that long.
So such a command will indeed fail. On my Debian Etch:
Code:
chip@horus:~$ ls /* /*/* /*/*/* /*/*/*/*
bash: /bin/ls: Argument list too long
Notice that bash itself gives that error message about /bin/ls: it couldn't exec the ls process (or, more likely, it didn't even try) because the argument list was too long.
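If you're curious whether bash actually attempts the exec, you can watch for the execve() call with strace: if it appears, it fails with E2BIG; if it doesn't, bash gave up before calling the kernel at all.
Code:
strace -f -e trace=execve bash -c 'ls /* /*/* /*/*/* /*/*/*/*'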
So when an argument list would be too long, xargs splits its input into multiple invocations of the command, each with an argument list small enough to exec.
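For example, the failing ls above can be rescued this way, because echo is a shell builtin (no exec happens for the huge expansion) and xargs batches what it reads into suitably sized ls invocations. A sketch; it assumes none of the paths contain whitespace:
Code:
echo /* /*/* /*/*/* /*/*/*/* | xargs ls -d
The -d flag keeps ls from descending into the directories, so the output matches what the direct invocation would have printed.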