Device & Language (105)
2010-09-05 01:06:18

If you use a smartphone long enough,
you may come across advice like this at the service center:
unlike ordinary phones, smartphones run into problems fairly often,
and you are told to turn the phone off and back on.

But~!!!
This app seems to solve most of those problems~

Advanced Task Killer

If you add this app as a widget and run it,
the currently running programs are killed,
with an effect similar to rebooting the phone.

(00 Apps killed, 000M Memory Available)
You will see a message like the one above.

Afterwards, anything that is actually needed starts up again~
So there is no need to worry: tap it once in a while
and you will find the phone gets fast again.

2010-09-05 00:42:08

NAME

    pfsh, clist - profile shell

SYNOPSIS

    pfsh [-acefhiknprstuvx] [argument...]

DESCRIPTION

    The profile shell is a modified version of the Bourne shell, sh(1) . Based on the user's profiles, pfsh restricts the commands that can be executed. Based on the profile definitions, pfsh determines which privileges, user ID (UID), and group ID (GID) to use in executing commands.

    Usage

      Refer to the sh(1) man page for a complete usage description. pfsh adds the clist command.

    Commands

      clist [ --hpniu ]

      Displays a list of the commands that are permitted for the user.

      -h

      Includes a hexadecimal list of the privileges assigned to each command in the command list.

      -p

      Includes a list of the privileges assigned to each command in the command list. The list is in text form.

      -n

      Includes a comma-separated decimal list of the privileges assigned to each command in the command list.

      -i

      Includes the UID and GID assigned to each command in the command list.

      -u

      Lists only those commands that are unusable because the profile assigned privileges that pfsh did not inherit. (See WARNINGS .)

ATTRIBUTES

    See attributes(5) for descriptions of the following attributes:

    ATTRIBUTE TYPE ATTRIBUTE VALUE
    Availability SUNWtsu

SEE ALSO

    sh(1), attributes(5)

WARNINGS

    pfsh must inherit privileges in order to run commands with those privileges. Privileges for a command that are defined in a profile may not be inherited when pfsh runs that command. If such a command is executed, a warning message is printed and the command is run with no privileges.

    Profiles are searched in the order specified in the user's tsoluser entry. If the same command appears in more than one profile, pfsh uses the first entry whose label range includes the sensitivity label of the process.

    When it is executed, pfsh builds the list of allowable commands by reading the user's profiles. If any changes are made to the profiles while pfsh is running, the changes will not take effect until the shell is restarted.

NOTES

    These interfaces are uncommitted. Although they are not expected to change between minor releases of the Trusted Solaris environment, they may.


2010-09-05 00:41:05

NAME

    sh, jsh - standard and job control shell and command interpreter

SYNOPSIS

    /usr/bin/sh [-acefhiknprstuvx] [argument...]
    /usr/xpg4/bin/sh [+- abCefhikmnoprstuvx] [+- o option...] [-c   string] [arg...]
    /usr/bin/jsh [-acefhiknprstuvx] [argument...]

DESCRIPTION

    The /usr/bin/sh utility is a command programming language that executes commands read from a terminal or a file.

    The /usr/xpg4/bin/sh utility is identical to /usr/bin/ksh . See ksh(1) .

    The jsh utility is an interface to the shell that provides all of the functionality of sh and enables job control (see Job Control section below).

    Arguments to the shell are listed in the Invocation section below.

    Definitions

      A blank is a tab or a space. A name is a sequence of ASCII letters, digits, or underscores, beginning with a letter or an underscore. A parameter is a name, a digit, or any of the characters * , @ , # , ? , - , $ , and ! .

USAGE

    Commands

      A simple-command is a sequence of non-blank word s separated by blank s. The first word specifies the name of the command to be executed. Except as specified below, the remaining word s are passed as arguments to the invoked command. The command name is passed as argument 0 (see exec(2) ). The value of a simple-command is its exit status if it terminates normally, or (octal) 200 + status if it terminates abnormally; see signal(5) for a list of status values.

      A pipeline is a sequence of one or more command s separated by | . The standard output of each command but the last is connected by a pipe(2) to the standard input of the next command . Each command is run as a separate process; the shell waits for the last command to terminate. The exit status of a pipeline is the exit status of the last command in the pipeline .

      A list is a sequence of one or more pipeline s separated by ; , & , && , or || , and optionally terminated by ; or & . Of these four symbols, ; and & have equal precedence, which is lower than that of && and || . The symbols && and || also have equal precedence. A semicolon ( ; ) causes sequential execution of the preceding pipeline (that is, the shell waits for the pipeline to finish before executing any commands following the semicolon); an ampersand ( & ) causes asynchronous execution of the preceding pipeline (that is, the shell does not wait for that pipeline to finish). The symbol && ( || ) causes the list following it to be executed only if the preceding pipeline returns a zero (non-zero) exit status. An arbitrary number of newlines may appear in a list , instead of semicolons, to delimit commands.
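
As a small sketch of these list operators (variable names here are made up for illustration):

```shell
# ; runs pipelines sequentially; && and || run the next pipeline
# conditionally, based on the preceding pipeline's exit status.
and_ran=no; or_ran=no
true  && and_ran=yes   # runs: the preceding pipeline returned zero
false || or_ran=yes    # runs: the preceding pipeline returned non-zero
echo "and=$and_ran or=$or_ran"
```

Because && and || bind tighter than ; and & , a list like `a && b; c` runs c unconditionally.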

      A command is either a simple-command or one of the following. Unless otherwise stated, the value returned by a command is that of the last simple-command executed in the command.

      for name [ in word ... ] do list done

      Each time a for command is executed, name is set to the next word taken from the in word list. If in word ... is omitted, then the for command executes the do list once for each positional parameter that is set (see Parameter Substitution section below). Execution ends when there are no more words in the list.

      case word in [ pattern [ | pattern ] ) list ;; ] ... esac A case command executes the list associated with the first pattern that matches word . The form of the patterns is the same as that used for file-name generation (see File Name Generation section) except that a slash, a leading dot, or a dot immediately following a slash need not be matched explicitly.

      if list ; then list ; [ elif list ; then list ; ] ... [ else list ; ] fi


      The list following if is executed and, if it returns a zero exit status, the list following the first then is executed. Otherwise, the list following elif is executed and, if its value is zero, the list following the next then is executed. Failing that, the else list is executed. If no else list or then list is executed, then the if command returns a zero exit status.


      while list do list done

      A while command repeatedly executes the while list and, if the exit status of the last command in the list is zero, executes the do list ; otherwise the loop terminates. If no commands in the do list are executed, then the while command returns a zero exit status; until may be used in place of while to negate the loop termination test.
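
The for , case , if , and while forms above can be combined in a short sketch (the variable names are illustrative):

```shell
# for: iterate over the in-word list
total=0
for n in 1 2 3; do
  total=`expr $total + $n`   # old-style arithmetic via expr(1)
done

# case: execute the list for the first matching pattern
case $total in
  6) label=six ;;
  *) label=other ;;
esac

# while: repeat the do-list while the while-list exits zero
i=0
while [ "$i" -lt 3 ]; do
  i=`expr $i + 1`
done

# if: branch on the exit status of a list
if [ "$label" = six ]; then
  echo "total=$total i=$i"
fi
```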

      ( list )

      Execute list in a sub-shell.

      { list ;}

      list is executed in the current (that is, parent) shell. The { must be followed by a space.

      name () { list ;}

      Define a function which is referenced by name . The body of the function is the list of commands between { and } . The { must be followed by a space. Execution of functions is described below (see Execution section). The { and } are unnecessary if the body of the function is a command as defined above, under Commands .
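
A minimal sketch of a function definition (the name greet is hypothetical):

```shell
# The body between { and } runs in the current shell process;
# the function's arguments become $1, $2, ... during execution.
greet () {
  echo "hello, $1"
}
greet world
```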

      The following words are only recognized as the first word of a command and when not quoted:


      if then else elif fi case esac for while until do done { }


    Comment Lines

      A word beginning with # causes that word and all the following characters up to a newline to be ignored.

    Command Substitution

      The shell reads commands from the string between two grave accents ( `` ) and the standard output from these commands may be used as all or part of a word. Trailing newlines from the standard output are removed.

      No interpretation is done on the string before the string is read, except to remove backslashes ( \ ) used to escape other characters. Backslashes may be used to escape a grave accent ( ` ) or another backslash ( \ ) and are removed before the command string is read. Escaping grave accents allows nested command substitution. If the command substitution lies within a pair of double quotes ( " ...` ...` ... " ), a backslash used to escape a double quote ( \" ) will be removed; otherwise, it will be left intact.

      If a backslash is used to escape a newline character ( \newline ), both the backslash and the newline are removed (see the later section on Quoting ). In addition, backslashes used to escape dollar signs ( \$ ) are removed. Since no parameter substitution is done on the command string before it is read, inserting a backslash to escape a dollar sign has no effect. Backslashes that precede characters other than \ , ` , " , newline , and $ are left intact when the command string is read.
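
For example (a sketch; the echoed strings are arbitrary):

```shell
# The standard output of the command between grave accents
# becomes part of the word; trailing newlines are removed.
words=`echo one two three`
echo "got: $words"

# Escaping the inner grave accents allows one level of nesting:
nested=`echo outer-\`echo inner\``
echo "$nested"
```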

    Parameter Substitution

      The character $ is used to introduce substitutable parameters . There are two types of parameters, positional and keyword. If parameter is a digit, it is a positional parameter. Positional parameters may be assigned values by set . Keyword parameters (also known as variables) may be assigned values by writing:


      name = value [ name = value ] ...


      Pattern-matching is not performed on value . There cannot be a function and a variable with the same name .

      ${ parameter }

      The value, if any, of the parameter is substituted. The braces are required only when parameter is followed by a letter, digit, or underscore that is not to be interpreted as part of its name. If parameter is * or @ , all the positional parameters, starting with $1 , are substituted (separated by spaces). Parameter $0 is set from argument zero when the shell is invoked.

      ${ parameter :- word }

      If parameter is set and is non-null, substitute its value; otherwise substitute word .

      ${ parameter := word }

      If parameter is not set or is null set it to word ; the value of the parameter is substituted. Positional parameters may not be assigned in this way.

      ${ parameter :? word }

      If parameter is set and is non-null, substitute its value; otherwise, print word and exit from the shell. If word is omitted, the message "parameter null or not set" is printed.

      ${ parameter :+ word }

      If parameter is set and is non-null, substitute word ; otherwise substitute nothing.

      In the above, word is not evaluated unless it is to be used as the substituted string, so that, in the following example, pwd is executed only if d is not set or is null:


      echo ${d:-`pwd`}


      If the colon ( : ) is omitted from the above expressions, the shell only checks whether parameter is set or not.
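
The four forms above can be seen in a short sketch ( d is an arbitrary variable name):

```shell
unset d
echo "${d:-/tmp}"   # substitute /tmp, but leave d unset
echo "${d:=/tmp}"   # assign /tmp to d, then substitute it
echo "${d:+yes}"    # d is now non-null, so substitute yes
echo "$d"           # d now holds /tmp
```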

      The following parameters are automatically set by the shell.

      #

      The number of positional parameters in decimal.

      -

      Flags supplied to the shell on invocation or by the set command.

      ?

      The decimal value returned by the last synchronously executed command.

      $

      The process number of this shell.

      !

      The process number of the last background command invoked.
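
A sketch of a few of these automatically set parameters in use:

```shell
set -- a b c                # set three positional parameters
echo "count=$#"             # number of positional parameters: 3
false || echo "status=$?"   # $? holds the status of false: 1
echo "pid=$$"               # process number of this shell (varies per run)
```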

      The following parameters are used by the shell. The parameters in this section are also referred to as environment variables.

      HOME

      The default argument (home directory) for the cd command, set to the user's login directory by login(1) from the password file (see passwd(4) ).

      PATH

      The search path for commands (see Execution section below).

      CDPATH

      The search path for the cd command.

      MAIL

      If this parameter is set to the name of a mail file and the MAILPATH parameter is not set, the shell informs the user of the arrival of mail in the specified file.

      MAILCHECK

      This parameter specifies how often (in seconds) the shell will check for the arrival of mail in the files specified by the MAILPATH or MAIL parameters. The default value is 600 seconds (10 minutes). If set to 0, the shell will check before each prompt.

      MAILPATH

      A colon ( : ) separated list of file names. If this parameter is set, the shell informs the user of the arrival of mail in any of the specified files. Each file name can be followed by % and a message that will be printed when the modification time changes. The default message is you have mail .

      PS1

      Primary prompt string, by default " $ ".

      PS2

      Secondary prompt string, by default " > ".

      IFS

      Internal field separators, normally space , tab , and newline (see Blank Interpretation section).

      SHACCT

      If this parameter is set to the name of a file writable by the user, the shell will write an accounting record in the file for each shell procedure executed.

      SHELL

      When the shell is invoked, it scans the environment (see Environment section below) for this name.

      See environ(5) for descriptions of the following environment variables that affect the execution of sh : LC_CTYPE and LC_MESSAGES .

      The shell gives default values to PATH , PS1 , PS2 , MAILCHECK , and IFS . HOME and MAIL are set by login(1) .

    Blank Interpretation

      After parameter and command substitution, the results of substitution are scanned for internal field separator characters (those found in IFS ) and split into distinct arguments where such characters are found. Explicit null arguments ( "" or '' ) are retained. Implicit null arguments (those resulting from parameters that have no values) are removed.
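
A sketch of field splitting under a modified IFS (the value is arbitrary):

```shell
# Splitting an unquoted substitution on ':' instead of whitespace.
path_like=usr:local:bin
save_ifs=$IFS
IFS=:
set -- $path_like      # unquoted, so the value splits at each ':'
IFS=$save_ifs
echo "first=$1 parts=$#"
```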

    Input/Output Redirection

      A command's input and output may be redirected using a special notation interpreted by the shell. The following may appear anywhere in a simple-command or may precede or follow a command and are not passed on as arguments to the invoked command. Note: Parameter and command substitution occurs before word or digit is used.

      < word

      Use file word as standard input (file descriptor 0).

      > word

      Use file word as standard output (file descriptor 1). If the file does not exist, it is created; otherwise, it is truncated to zero length.

      >> word

      Use file word as standard output. If the file exists, output is appended to it (by first seeking to the EOF); otherwise, the file is created.

      <> word

      Open file word for reading and writing as standard input.

      << [ - ] word

      After parameter and command substitution is done on word , the shell input is read up to the first line that literally matches the resulting word , or to an EOF. If, however, - is appended to << :

      1)

      leading tabs are stripped from word before the shell input is read (but after parameter and command substitution is done on word ),

      2)

      leading tabs are stripped from the shell input as it is read and before each line is compared with word , and

      3)

      shell input is read up to the first line that literally matches the resulting word , or to an EOF.

      If any character of word is quoted (see Quoting section later), no additional processing is done to the shell input. If no characters of word are quoted:

      1)

      parameter and command substitution occurs,

      2)

      (escaped) \newline s are removed, and

      3)

      \ must be used to quote the characters \ , $ , and ` .

      The resulting document becomes the standard input.
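
For example, since no character of the delimiter EOF below is quoted, parameter substitution occurs inside the document (a sketch):

```shell
name=world
cat <<EOF
hello, $name
EOF
```

Quoting any character of the delimiter (for example <<'EOF' ) would suppress the substitution and pass $name through literally.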

      <& digit

      Use the file associated with file descriptor digit as standard input. Similarly for the standard output using >& digit .

      <&-

      The standard input is closed. Similarly for the standard output using >&- .

      If any of the above is preceded by a digit, the file descriptor which will be associated with the file is that specified by the digit (instead of the default 0 or 1 ). For example:


      ... 2>&1


      associates file descriptor 2 with the file currently associated with file descriptor 1.

      The order in which redirections are specified is significant. The shell evaluates redirections left-to-right. For example:


      ... 1> xxx 2>&1


      first associates file descriptor 1 with file xxx . It associates file descriptor 2 with the file associated with file descriptor 1 (that is, xxx ). If the order of redirections were reversed, file descriptor 2 would be associated with the terminal (assuming file descriptor 1 had been) and file descriptor 1 would be associated with file xxx .
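
A sketch of the common case (the scratch file name is made up):

```shell
tmp=/tmp/both.$$                               # hypothetical scratch file
sh -c 'echo out; echo err 1>&2' > "$tmp" 2>&1  # fd 1 to the file first,
                                               # then fd 2 duplicates fd 1
cat "$tmp"                                     # both lines are in the file
rm -f "$tmp"
```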

      Using the terminology introduced on the first page, under Commands , if a command is composed of several simple commands , redirection will be evaluated for the entire command before it is evaluated for each simple command . That is, the shell evaluates redirection for the entire list , then each pipeline within the list , then each command within each pipeline , then each list within each command .

      If a command is followed by & the default standard input for the command is the empty file /dev/null . Otherwise, the environment for the execution of a command contains the file descriptors of the invoking shell as modified by input/output specifications.

    File Name Generation

      Before a command is executed, each command word is scanned for the characters * , ? , and [ . If one of these characters appears the word is regarded as a pattern . The word is replaced with alphabetically sorted file names that match the pattern. If no file name is found that matches the pattern, the word is left unchanged. The character . at the start of a file name or immediately following a / , as well as the character / itself, must be matched explicitly.

      *

      Matches any string, including the null string.

      ?

      Matches any single character.

      [ ... ]

      Matches any one of the enclosed characters. A pair of characters separated by - matches any character lexically between the pair, inclusive. If the first character following the opening [ is a ! , any character not enclosed is matched.

      Note that all quoted characters (see below) must be matched explicitly in a filename.
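
A sketch of the patterns against a scratch directory (all names are made up):

```shell
dir=/tmp/glob.$$
mkdir "$dir"
touch "$dir/a1" "$dir/a2" "$dir/b1"

set -- "$dir"/a?        # ? matches one character: a1 and a2, sorted
echo "matched $# names: $1 $2"

set -- "$dir"/[!a]*     # [!...] matches any character not enclosed: b1
echo "non-a: $1"

rm -r "$dir"
```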

    Quoting

      The following characters have a special meaning to the shell and cause termination of a word unless quoted:


      ; & ( ) | ^ < > newline space tab


      A character may be quoted (that is, made to stand for itself) by preceding it with a backslash ( \ ) or inserting it between a pair of quote marks ( '' or "" ). During processing, the shell may quote certain characters to prevent them from taking on a special meaning. Backslashes used to quote a single character are removed from the word before the command is executed. The pair \newline is removed from a word before command and parameter substitution.

      All characters enclosed between a pair of single quote marks ( '' ), except a single quote, are quoted by the shell. Backslash has no special meaning inside a pair of single quotes. A single quote may be quoted inside a pair of double quote marks (for example, "'" ), but a single quote can not be quoted inside a pair of single quotes.

      Inside a pair of double quote marks ( "" ), parameter and command substitution occurs and the shell quotes the results to avoid blank interpretation and file name generation. If $* is within a pair of double quotes, the positional parameters are substituted and quoted, separated by quoted spaces ( "$1 $2 ... " ); however, if $@ is within a pair of double quotes, the positional parameters are substituted and quoted, separated by unquoted spaces ( "$1" "$2" ... ). \ quotes the characters \ , ` , " , and $ . The pair \newline is removed before parameter and command substitution. If a backslash precedes characters other than \ , ` , " , $ , and newline, then the backslash itself is quoted by the shell.
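
The difference between "$*" and "$@" can be sketched with a small counting function ( nargs is a made-up name):

```shell
nargs () { echo $#; }
set -- "one two" three
nargs "$*"     # one word:  "one two three"
nargs "$@"     # two words: "one two" and three
```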

    Prompting

      When used interactively, the shell prompts with the value of PS1 before reading a command. If at any time a newline is typed and further input is needed to complete a command, the secondary prompt (that is, the value of PS2 ) is issued.

    Environment

      The environment (see environ(5) ) is a list of name-value pairs that is passed to an executed program in the same way as a normal argument list. The shell interacts with the environment in several ways. On invocation, the shell scans the environment and creates a parameter for each name found, giving it the corresponding value. If the user modifies the value of any of these parameters or creates new parameters, none of these affects the environment unless the export command is used to bind the shell's parameter to the environment (see also set -a ). A parameter may be removed from the environment with the unset command. The environment seen by any executed command is thus composed of any unmodified name-value pairs originally inherited by the shell, minus any pairs removed by unset , plus any modifications or additions, all of which must be noted in export commands.

      The environment for any simple-command may be augmented by prefixing it with one or more assignments to parameters. Thus:


      TERM=450 command


      and


      (export TERM; TERM=450; command)


      are equivalent as far as the execution of command is concerned if command is not a Special Command. If command is a Special Command, then


      TERM=450 command


      will modify the TERM variable in the current shell.

      If the -k flag is set, all keyword arguments are placed in the environment, even if they occur after the command name. The following example first prints a=b c and then c :



      echo a=b c
      
      a=b c
      
      set -k
      
      echo a=b c
      
      c


    Signals

      The INTERRUPT and QUIT signals for an invoked command are ignored if the command is followed by & ; otherwise signals have the values inherited by the shell from its parent, with the exception of signal 11 (but see also the trap command below).

    Execution

      Each time a command is executed, the command substitution, parameter substitution, blank interpretation, input/output redirection, and filename generation listed above are carried out. If the command name matches the name of a defined function, the function is executed in the shell process (note how this differs from the execution of shell script files, which require a sub-shell for invocation). If the command name does not match the name of a defined function, but matches one of the Special Commands listed below, it is executed in the shell process.

      The positional parameters $1 , $2 , ... are set to the arguments of the function. If the command name matches neither a Special Command nor the name of a defined function, a new process is created and an attempt is made to execute the command via exec(2) .

      The shell parameter PATH defines the search path for the directory containing the command. Alternative directory names are separated by a colon ( : ). The default path is /usr/bin . The current directory is specified by a null path name, which can appear immediately after the equal sign, between two colon delimiters anywhere in the path list, or at the end of the path list. If the command name contains a / the search path is not used. Otherwise, each directory in the path is searched for an executable file. If the file has execute permission but is not an a.out file, it is assumed to be a file containing shell commands. A sub-shell is spawned to read it. A parenthesized command is also executed in a sub-shell.

      The location in the search path where a command was found is remembered by the shell (to help avoid unnecessary execs later). If the command was found in a relative directory, its location must be re-determined whenever the current directory changes. The shell forgets all remembered locations whenever the PATH variable is changed or the hash -r command is executed (see below).

    Special Commands

      Input/output redirection is now permitted for these commands. File descriptor 1 is the default output location. When Job Control is enabled, additional Special Commands are added to the shell's environment (see Job Control section below).

      :

      No effect; the command does nothing. A zero exit code is returned.

      . filename

      Read and execute commands from filename and return. The search path specified by PATH is used to find the directory containing filename .

      bg [ % jobid ... ]

      When Job Control is enabled, the bg command is added to the user's environment to manipulate jobs. Resumes the execution of a stopped job in the background. If % jobid is omitted the current job is assumed. (See Job Control section below for more detail.)

      break [ n ]

      Exit from the enclosing for or while loop, if any. If n is specified, break n levels.

      cd [ argument ]

      Change the current directory to argument . The shell parameter HOME is the default argument . The shell parameter CDPATH defines the search path for the directory containing argument . Alternative directory names are separated by a colon ( : ). The default path is <null> (specifying the current directory). Note: The current directory is specified by a null path name, which can appear immediately after the equal sign or between the colon delimiters anywhere else in the path list. If argument begins with a / the search path is not used. Otherwise, each directory in the path is searched for argument .

      chdir [ dir ]

      chdir changes the shell's working directory to directory dir . If no argument is given, change to the home directory of the user. If dir is a relative pathname not found in the current directory, check for it in those directories listed in the CDPATH variable. If dir is the name of a shell variable whose value starts with a / , change to the directory named by that value.

      continue [ n ]

      Resume the next iteration of the enclosing for or while loop. If n is specified, resume at the n -th enclosing loop.

      echo [ arguments ... ]

      The words in arguments are written to the shell's standard output, separated by space characters. See echo(1) for fuller usage and description.

      eval [ argument ... ]

      The arguments are read as input to the shell and the resulting command(s) executed.

      exec [ argument ... ]

      The command specified by the arguments is executed in place of this shell without creating a new process. Input/output arguments may appear and, if no other arguments are given, cause the shell input/output to be modified.

      exit [ n ]

      Causes the calling shell or shell script to exit with the exit status specified by n . If n is omitted the exit status is that of the last command executed (an EOF will also cause the shell to exit.)

      export [ name ... ]

      The given name s are marked for automatic export to the environment of subsequently executed commands. If no arguments are given, variable names that have been marked for export during the current shell's execution are listed. (Variable names exported from a parent shell are listed only if they have been exported again during the current shell's execution.) Function names are not exported.

      fg [ % jobid ... ]

      When Job Control is enabled, the fg command is added to the user's environment to manipulate jobs. Resumes the execution of a stopped job in the foreground, also moves an executing background job into the foreground. If % jobid is omitted the current job is assumed. (See Job Control section below for more detail.)

      getopts

      Use in shell scripts to support command syntax standards (see intro(1) ); it parses positional parameters and checks for legal options. See getoptcvt(1) for usage and description.

      hash [ -r ] [ name ... ]

      For each name , the location in the search path of the command specified by name is determined and remembered by the shell. The -r option causes the shell to forget all remembered locations. If no arguments are given, information about remembered commands is presented. Hits is the number of times a command has been invoked by the shell process. Cost is a measure of the work required to locate a command in the search path. If a command is found in a "relative" directory in the search path, after changing to that directory, the stored location of that command is recalculated. Commands for which this will be done are indicated by an asterisk ( * ) adjacent to the hits information. Cost will be incremented when the recalculation is done.

      jobs [ -p | -l ] [ % jobid ... ]

      jobs -x command [ arguments ]

      Reports all jobs that are stopped or executing in the background. If % jobid is omitted, all jobs that are stopped or running in the background will be reported. (See Job Control section below for more detail.)

      kill [ -sig ] % job ...

      kill -l

      Sends either the TERM (terminate) signal or the specified signal to the specified jobs or processes. Signals are either given by number or by names (as given in signal(5) stripped of the prefix "SIG" with the exception that SIGCHD is named CHLD ). If the signal being sent is TERM (terminate) or HUP (hangup), then the job or process will be sent a CONT (continue) signal if it is stopped. The argument job can be the process id of a process that is not a member of one of the active jobs. See Job Control section below for a description of the format of job . In the second form, kill -l , the signal numbers and names are listed. (See kill(1) ).

      login [ argument ... ]

      Equivalent to exec login argument ... . See login(1) for usage and description.

      newgrp [ argument ]

      Equivalent to exec newgrp argument. See newgrp(1) for usage and description.

      pwd

      Print the current working directory. See pwd(1) for usage and description.

      read name ...

      One line is read from the standard input and, using the internal field separator, IFS (normally space or tab), to delimit word boundaries, the first word is assigned to the first name , the second word to the second name , and so forth, with leftover words assigned to the last name . Lines can be continued using \newline . Characters other than newline can be quoted by preceding them with a backslash. These backslashes are removed before words are assigned to names , and no interpretation is done on the character that follows the backslash. The return code is 0 , unless an EOF is encountered.
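
A sketch of read splitting one input line into names (the names and the input are arbitrary):

```shell
echo "alpha beta gamma delta" > /tmp/line.$$
read first second rest < /tmp/line.$$   # leftover words go to the last name
echo "first=$first second=$second rest=$rest"
rm -f /tmp/line.$$
```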

      readonly [ name ... ]

      The given name s are marked readonly and the values of these name s may not be changed by subsequent assignment. If no arguments are given, a list of all readonly names is printed.

      return [ n ]

      Causes a function to exit with the return value specified by n . If n is omitted, the return status is that of the last command executed.

      set [ -aefhkntuvx [ argument ... ] ]

      -a

      Mark variables which are modified or created for export.

      -e

      Exit immediately if a command exits with a non-zero exit status.

      -f

      Disable file name generation.

      -h

      Locate and remember function commands as functions are defined (function commands are normally located when the function is executed).

      -k

      All keyword arguments are placed in the environment for a command, not just those that precede the command name.

      -n

      Read commands but do not execute them.

      -t

      Exit after reading and executing one command.

      -u

      Treat unset variables as an error when substituting.

      -v

      Print shell input lines as they are read.

      -x

      Print commands and their arguments as they are executed.

      -

      Do not change any of the flags; useful in setting $1 to - .

      Using + rather than - causes these flags to be turned off. These flags can also be used upon invocation of the shell. The current set of flags may be found in $- . The remaining arguments are positional parameters and are assigned, in order, to $1 , $2 , ... If no arguments are given the values of all names are printed.
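      A minimal sketch in a POSIX-style sh, showing a flag being enabled, checked in $- , and the positional parameters being replaced ( set -- is the usual way to replace the parameters even when the first one begins with - ):

      ```shell
      set -u                    # error on substituting unset variables
      case $- in *u*) echo "u is set" ;; esac
      set +u                    # turn the flag back off
      set -- one two three      # replace the positional parameters
      echo "$# parameters: $1 $2 $3"
      # → 3 parameters: one two three
      ```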

      shift [ n ]

      The positional parameters from $ n +1 ... are renamed $1 ... . If n is not given, it is assumed to be 1.

      stop pid ...

      Halt execution of the process with process number pid (see ps(1) ).

      suspend

      Stops the execution of the current shell (but not if it is the login shell).

      test

      Evaluate conditional expressions. See test(1) for usage and description.

      times

      Print the accumulated user and system times for processes run from the shell.

      trap [ argument n [ n2 ... ]]

      The command argument is to be read and executed when the shell receives numeric or symbolic signal(s) ( n ). (Note: argument is scanned once when the trap is set and once when the trap is taken.) Trap commands are executed in order of signal number or corresponding symbolic names. Any attempt to set a trap on a signal that was ignored on entry to the current shell is ineffective. An attempt to trap on signal 11 (memory fault) produces an error. If argument is absent all trap(s) n are reset to their original values. If argument is the null string this signal is ignored by the shell and by the commands it invokes. If n is 0 the command argument is executed on exit from the shell. The trap command with no arguments prints a list of commands associated with each signal number.
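      A short sketch in a POSIX-style sh: a trap on 0 runs its argument when the shell exits:

      ```shell
      sh -c '
          trap "echo cleanup done" 0
          echo doing work
      '
      # → doing work
      # → cleanup done
      ```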

      type [ name ... ]

      For each name , indicate how it would be interpreted if used as a command name.

      ulimit [ -[HS][a | cdfnstv] ]

      ulimit [ -[HS][c | d | f | n | s | t | v] ] limit

      ulimit prints or sets hard or soft resource limits. These limits are described in getrlimit(2) .

      If limit is not present, ulimit prints the specified limits. Any number of limits may be printed at one time. The -a option prints all limits.

      If limit is present, ulimit sets the specified limit to limit . The string unlimited requests the largest valid limit. Limits may be set for only one resource at a time. Any user may set a soft limit to any value below the hard limit. Any user may lower a hard limit. Only a super-user may raise a hard limit; see su(1M) .

      The -H option specifies a hard limit. The -S option specifies a soft limit. If neither option is specified, ulimit will set both limits and print the soft limit.

      The following options specify the resource whose limits are to be printed or set. If no option is specified, the file size limit is printed or set.

      -c

      maximum core file size (in 512-byte blocks)

      -d

      maximum size of data segment or heap (in kbytes)

      -f

      maximum file size (in 512-byte blocks)

      -n

      maximum file descriptor plus 1

      -s

      maximum size of stack segment (in kbytes)

      -t

      maximum CPU time (in seconds)

      -v

      maximum size of virtual memory (in kbytes)

      Run the sysdef(1M) command to obtain the maximum possible limits for your system. The values reported are in hexadecimal, but can be translated into decimal numbers using the bc(1) utility. See swap(1M) .

      Example of ulimit:

      To limit the size of a core file dump to 0 Megabytes, type the following:


      ulimit -c 0


      umask [ nnn ]

      The user file-creation mask is set to nnn (see umask(1) ). If nnn is omitted, the current value of the mask is printed.
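      For example (a sketch; the subshell keeps the mask change from leaking into the calling shell):

      ```shell
      (
          umask 077                  # strip group and other permission bits
          touch /tmp/umask_demo.$$   # created as 666 & ~077 = 600 (-rw-------)
          ls -l /tmp/umask_demo.$$
          rm -f /tmp/umask_demo.$$
      )
      ```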

      unset [ name ... ]

      For each name , remove the corresponding variable or function value. The variables PATH , PS1 , PS2 , MAILCHECK , and IFS cannot be unset.

      wait [ n ]

      Wait for your background process whose process id is n and report its termination status. If n is omitted, all your shell's currently active background processes are waited for and the return code will be zero.
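      A small sketch in a POSIX-style sh, assuming the standard $! parameter, which holds the process id of the last background command:

      ```shell
      sh -c 'exit 3' &       # background job that exits with status 3
      wait $!
      echo "job status: $?"  # → job status: 3
      ```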

    Invocation

      If the shell is invoked through exec(2) and the first character of argument zero is - , commands are initially read from /etc/profile and from $ HOME /.profile , if such files exist. Thereafter, commands are read as described below, which is also the case when the shell is invoked as /usr/bin/sh . The flags below are interpreted by the shell on invocation only. Note: Unless the -c or -s flag is specified, the first argument is assumed to be the name of a file containing commands, and the remaining arguments are passed as positional parameters to that command file:

      -c string

      If the -c flag is present commands are read from string .

      -i

      If the -i flag is present or if the shell input and output are attached to a terminal, this shell is interactive . In this case TERMINATE is ignored (so that kill 0 does not kill an interactive shell) and INTERRUPT is caught and ignored (so that wait is interruptible). In all cases, QUIT is ignored by the shell.

      -p

      If the -p flag is present, the shell will not set the effective user and group IDs to the real user and group IDs.

      -r

      If the -r flag is present the shell is a restricted shell (see rsh(1M) ).

      -s

      If the -s flag is present or if no arguments remain, commands are read from the standard input. Any remaining arguments specify the positional parameters. Shell output (except for Special Commands ) is written to file descriptor 2.

      The remaining flags and arguments are described under the set command above.

    Job Control (jsh)

      When the shell is invoked as jsh , Job Control is enabled in addition to all of the functionality described previously for sh . Typically Job Control is enabled for the interactive shell only. Non-interactive shells typically do not benefit from the added functionality of Job Control.

      With Job Control enabled every command or pipeline the user enters at the terminal is called a job . All jobs exist in one of the following states: foreground, background or stopped. These terms are defined as follows: 1) a job in the foreground has read and write access to the controlling terminal; 2) a job in the background is denied read access and has conditional write access to the controlling terminal (see stty(1) ); 3) a stopped job is a job that has been placed in a suspended state, usually as a result of a SIGTSTP signal (see signal(5) ).

      Every job that the shell starts is assigned a positive integer, called a job number which is tracked by the shell and will be used as an identifier to indicate a specific job. Additionally the shell keeps track of the current and previous jobs. The current job is the most recent job to be started or restarted. The previous job is the first non-current job.

      The acceptable syntax for a Job Identifier is of the form:


      % jobid


      where, jobid may be specified in any of the following formats:

      % or +

      for the current job

      -

      for the previous job

      ? <string>

      specify the job for which the command line uniquely contains string .

      n

      for job number n , where n is a job number

      pref

      where pref is a unique prefix of the command name (for example, if the command ls -l name were running in the background, it could be referred to as %ls ); pref cannot contain blanks unless it is quoted.

      When Job Control is enabled, the following commands are added to the user's environment to manipulate jobs:

      bg [ % jobid ... ]

      Resumes the execution of a stopped job in the background. If % jobid is omitted the current job is assumed.

      fg [ % jobid ... ]

      Resumes the execution of a stopped job in the foreground, also moves an executing background job into the foreground. If % jobid is omitted the current job is assumed.

      jobs [ -p | -l ] [ % jobid ... ]

      jobs -x command [ arguments ]

      Reports all jobs that are stopped or executing in the background. If % jobid is omitted, all jobs that are stopped or running in the background will be reported. The following options will modify/enhance the output of jobs :

      -l

      Report the process group ID and working directory of the jobs.

      -p

      Report only the process group ID of the jobs.

      -x

      Replace any jobid found in command or arguments with the corresponding process group ID, and then execute command passing it arguments .

      kill [ -signal ] % jobid

      Builtin version of kill to provide the functionality of the kill command for processes identified with a jobid .

      stop % jobid ...

      Stops the execution of a background job(s).

      suspend

      Stops the execution of the current shell (but not if it is the login shell).

      wait [ % jobid ... ]

      The wait builtin accepts a job identifier. If % jobid is omitted, wait behaves as described above under Special Commands .
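      A rough sketch of these commands working together, assuming a shell with Job Control enabled (the exact jobs output format varies by shell):

      ```shell
      sleep 60 &     # start a background job; it becomes the current job
      jobs           # reports something like: [1] + Running    sleep 60 &
      kill %1        # the builtin kill accepts a jobid in place of a pid
      wait           # collect the terminated job
      ```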

    Large File Behavior

      See largefile(5) for the description of the behavior of sh and jsh when encountering files greater than or equal to 2 Gbyte ( 2^31 bytes).

EXIT STATUS

    Errors detected by the shell, such as syntax errors, cause the shell to return a non-zero exit status. If the shell is being used non-interactively execution of the shell file is abandoned. Otherwise, the shell returns the exit status of the last command executed (see also the exit command above).

    jsh Only

      If the shell is invoked as jsh and an attempt is made to exit the shell while there are stopped jobs, the shell issues one warning:


      There are stopped jobs.


      This is the only message. If another exit attempt is made, and there are still stopped jobs they will be sent a SIGHUP signal from the kernel and the shell is exited.

FILES

    $ HOME /.profile

    /dev/null

    /etc/profile

    /tmp/sh*

ATTRIBUTES

    See attributes(5) for descriptions of the following attributes:

    /usr/bin/sh

    /usr/bin/jsh

      ATTRIBUTE TYPE ATTRIBUTE VALUE
      Availability SUNWcsu
      CSI Enabled

    /usr/xpg4/bin/sh

      ATTRIBUTE TYPE ATTRIBUTE VALUE
      Availability SUNWxcu4
      CSI Enabled

SEE ALSO

WARNINGS

    The use of setuid shell scripts is strongly discouraged.

NOTES

    Words used for filenames in input/output redirection are not interpreted for filename generation (see File Name Generation section above). For example, cat file1 >a* will create a file named a* .

    Because commands in pipelines are run as separate processes, variables set in a pipeline have no effect on the parent shell.

    If you get the error message cannot fork , too many processes , try using the wait(1) command to clean up your background processes. If this doesn't help, the system process table is probably full or you have too many active foreground processes. (There is a limit to the number of process ids associated with your login, and to the number the system can keep track of.)

    Only the last process in a pipeline can be waited for.

    If a command is executed, and a command with the same name is installed in a directory in the search path before the directory where the original command was found, the shell will continue to exec the original command. Use the hash command to correct this situation.

    The Bourne shell has a limitation on the effective UID for a process. If this UID is less than 100 (and not equal to the process' real UID), then the UID is reset to the process' real UID.

      Because the shell implements both foreground and background jobs in the same process group, they all receive the same signals, which can lead to unexpected behavior. It is, therefore, recommended that other job control shells be used, especially in an interactive environment.

    When the shell executes a shell script that attempts to execute a non-existent command interpreter, the shell returns an erroneous diagnostic message that the shell script file does not exist.


2010-09-05 00:39:05

NAME

    ksh, rksh - KornShell, a standard/restricted command and programming language

SYNOPSIS

    /usr/bin/ksh [+- abCefhikmnoprstuvx] [+- o option...] [arg...]
    /usr/bin/ksh -c   [+- abCefhikmnoprstuvx] [+- o option...]   command_string [command_name [arg...]]
    /usr/xpg4/bin/sh [+- abCefhikmnoprstuvx] [+- o option...] [arg...]
    /usr/xpg4/bin/sh -c   [+- abCefhikmnoprstuvx] [+- o option...]   command_string [command_name [arg...]]
    /usr/bin/rksh [+- abCefhikmnoprstuvx] [+- o option...] [arg...]
    /usr/bin/rksh -c   [+- abCefhikmnoprstuvx] [+- o option...]   command_string [command_name [arg...]]

DESCRIPTION

    /usr/xpg4/bin/sh is identical to /usr/bin/ksh , a command and programming language that executes commands read from a terminal or a file. rksh is a restricted version of the command interpreter ksh ; it is used to set up login names and execution environments whose capabilities are more controlled than those of the standard shell. See Invocation below for the meaning of arguments to the shell.

    Definitions

      A metacharacter is one of the following characters:


      ; & ( ) | < > NEWLINE SPACE TAB


      A blank is a TAB or a SPACE . An identifier is a sequence of letters, digits, or underscores starting with a letter or underscore. Identifiers are used as names for functions and variables . A word is a sequence of characters separated by one or more non-quoted metacharacters .

      A command is a sequence of characters in the syntax of the shell language. The shell reads each command and carries out the desired action either directly or by invoking separate utilities. A special-command is a command that is carried out by the shell without creating a separate process. Except for documented side effects, most special commands can be implemented as separate utilities.

    Commands

      A simple-command is a sequence of blank-separated words which may be preceded by a variable assignment list. (See Environment below.) The first word specifies the name of the command to be executed. Except as specified below, the remaining words are passed as arguments to the invoked command. The command name is passed as argument 0 (see exec(2) ). The value of a simple-command is its exit status if it terminates normally, or (octal) 200+ status if it terminates abnormally (see signal(3C) for a list of status values).

      A pipeline is a sequence of one or more commands separated by | . The standard output of each command but the last is connected by a pipe(2) to the standard input of the next command. Each command is run as a separate process; the shell waits for the last command to terminate. The exit status of a pipeline is the exit status of the last command.
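      For example (a minimal sketch): the exit status of a pipeline is that of its last command, regardless of earlier failures:

      ```shell
      false | true
      echo $?    # → 0 (status of true, the last command in the pipeline)
      true | false
      echo $?    # → 1 (status of false)
      ```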

      A list is a sequence of one or more pipeline s separated by ; , & , && , or || , and optionally terminated by ; , & , or |& . Of these five symbols, ; , & , and |& have equal precedence, which is lower than that of && and || . The symbols && and || also have equal precedence. A semicolon ( ; ) causes sequential execution of the preceding pipeline; an ampersand ( & ) causes asynchronous execution of the preceding pipeline (that is, the shell does not wait for that pipeline to finish). The symbol |& causes asynchronous execution of the preceding command or pipeline with a two-way pipe established to the parent shell.

      The standard input and output of the spawned command can be written to and read from by the parent shell using the -p option of the special commands read and print described in Special Commands . The symbol && ( || ) causes the list following it to be executed only if the preceding pipeline returns 0 (or a non-zero) value. An arbitrary number of new-lines may appear in a list , instead of a semicolon, to delimit a command.

      A command is either a simple-command or one of the following. Unless otherwise stated, the value returned by a command is that of the last simple-command executed in the command.

      for identifier [ in word ... ] ; do list ; done

      Each time a for command is executed, identifier is set to the next word taken from the in word list. If in word ... is omitted, then the for command executes the do list once for each positional parameter that is set (see Parameter Substitution below). Execution ends when there are no more words in the list.
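      A simple sketch with an explicit in word list:

      ```shell
      # identifier is set to each word in turn
      for color in red green blue
      do
          echo "color: $color"
      done
      # → color: red
      # → color: green
      # → color: blue
      ```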

      select identifier [ in word ... ] ; do list ; done

      A select command prints to standard error (file descriptor 2), the set of word s, each preceded by a number. If in word ... is omitted, then the positional parameters are used instead (see Parameter Substitution below). The PS3 prompt is printed and a line is read from the standard input. If this line consists of the number of one of the listed word s, then the value of the variable identifier is set to the word corresponding to this number. If this line is empty the selection list is printed again. Otherwise the value of the variable identifier is set to NULL . (See Blank Interpretation about NULL ). The contents of the line read from standard input is saved in the shell variable REPLY . The list is executed for each selection until a break or EOF is encountered. If the REPLY variable is set to NULL by the execution of list , then the selection list is printed before displaying the PS3 prompt for the next selection.

      case word in [ pattern [ | pattern ] ) list ;; ] ... esac

      A case command executes the list associated with the first pattern that matches word . The form of the patterns is the same as that used for file-name generation (see File Name Generation below).
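      A short sketch ( classify is a hypothetical helper; the patterns use the file-name generation syntax, and | separates alternative patterns):

      ```shell
      classify() {
          case $1 in
              *.tar.gz | *.tgz) echo "gzipped tarball" ;;
              *.txt)            echo "text file" ;;
              *)                echo "unknown" ;;
          esac
      }
      classify notes.txt    # → text file
      classify backup.tgz   # → gzipped tarball
      ```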

      if list ; then list ; [ elif list ; then list ; ... ] [ else list ; ] fi

      The list following if is executed and, if it returns an exit status of 0 , the list following the first then is executed. Otherwise, the list following elif is executed and, if its value is 0 , the list following the next then is executed. Failing that, the else list is executed. If no else list or then list is executed, then the if command returns 0 exit status.

      while list ; do list ; done
      until list ; do list ; done

      A while command repeatedly executes the while list and, if the exit status of the last command in the list is 0 , executes the do list ; otherwise the loop terminates. If no commands in the do list are executed, then the while command returns 0 exit status; until may be used in place of while to negate the loop termination test.

      ( list )

      Execute list in a separate environment. Note that if two adjacent open parentheses are needed for nesting, a space must be inserted to avoid arithmetic evaluation as described below.

      { list }

      list is simply executed. Note that unlike the metacharacters ( and ) , { and } are reserved word s and must occur at the beginning of a line or after a ; in order to be recognized.
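      A minimal sketch contrasting the two forms: assignments inside ( ) are confined to the subshell, while those inside { } persist:

      ```shell
      x=outer
      ( x=subshell )    # separate environment: assignment not visible outside
      echo "$x"         # → outer
      { x=brace; }      # current environment: assignment persists
      echo "$x"         # → brace
      ```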

      [[ expression ]]

      Evaluates expression and returns 0 exit status when expression is true. See Conditional Expressions below, for a description of expression .

      function identifier { list ;}
      identifier () { list ;}

      Define a function which is referenced by identifier . The body of the function is the list of commands between { and } . (See Functions below).

      time pipeline

      The pipeline is executed and the elapsed time as well as the user and system time are printed to standard error.

      The following reserved words are only recognized as the first word of a command and when not quoted:


      !  if   then   else   elif   fi   case   esac   for   while   until   do   done   {   }
      function   select   time  [[  ]]
      

    Comments

      A word beginning with # causes that word and all the following characters up to a new-line to be ignored.

    Aliasing

      The first word of each command is replaced by the text of an alias if an alias for this word has been defined. An alias name consists of any number of characters excluding metacharacters, quoting characters, file expansion characters, parameter and command substitution characters, and = . The replacement string can contain any valid shell script including the metacharacters listed above. The first word of each command in the replaced text, other than any that are in the process of being replaced, will be tested for aliases. If the last character of the alias value is a blank then the word following the alias will also be checked for alias substitution. Aliases can be used to redefine special builtin commands but cannot be used to redefine the reserved words listed above. Aliases can be created, listed, and exported with the alias command and can be removed with the unalias command. Exported aliases remain in effect for scripts invoked by name, but must be reinitialized for separate invocations of the shell (see Invocation below). To prevent infinite loops in recursive aliasing, if the shell is not currently processing an alias of the same name, the word will be replaced by the value of the alias; otherwise, it will not be replaced.

      Aliasing is performed when scripts are read, not while they are executed. Therefore, for an alias to take effect, the alias definition command has to be executed before the command which references the alias is read.

      Aliases are frequently used as a short hand for full path names. An option to the aliasing facility allows the value of the alias to be automatically set to the full pathname of the corresponding command. These aliases are called tracked aliases. The value of a tracked alias is defined the first time the corresponding command is looked up and becomes undefined each time the PATH variable is reset. These aliases remain tracked so that the next subsequent reference will redefine the value. Several tracked aliases are compiled into the shell. The -h option of the set command makes each referenced command name into a tracked alias.

      The following exported aliases are compiled into (and built-in to) the shell but can be unset or redefined:



      autoload='typeset -fu'
      false='let 0'
      functions='typeset -f'
      hash='alias -t'
      history='fc -l'
      integer='typeset -i'
      nohup='nohup '
      r='fc -e -'
      true=':'
      type='whence -v'
      

      An example concerning trailing blank characters and reserved words follows. If the user types:



      $ alias foo="/bin/ls "
      $ alias while="/"

      the effect of executing:



      $ while true
      > do
      > echo "Hello, World"
      > done

      is a never-ending sequence of Hello, World strings to the screen. However, if the user types:



      $ foo while

      the result will be an ls listing of / . Since the alias substitution for foo ends in a space character, the next word is checked for alias substitution. The next word, while , has also been aliased, so it is substituted as well. Since it is not in the proper position as a command word, it is not recognized as a reserved word.

      If the user types:



      $ foo; while

      while retains its normal reserved-word properties.

    Tilde Substitution

      After alias substitution is performed, each word is checked to see if it begins with an unquoted ~ . If it does, then the word up to a / is checked to see if it matches a user name. If a match is found, the ~ and the matched login name are replaced by the login directory of the matched user. This is called a tilde substitution. If no match is found, the original text is left unchanged. A ~ by itself, or in front of a / , is replaced by $HOME . A ~ followed by a + or - is replaced by $PWD and $OLDPWD respectively.

      In addition, tilde substitution is attempted when the value of a variable assignment begins with a ~ .
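      A small sketch of the bare- ~ and ~/ cases; a child shell is run with a known HOME to make the expansion visible:

      ```shell
      # ~ alone and a leading ~/ are replaced by $HOME
      HOME=/tmp/demo sh -c 'echo ~ ~/bin'
      # → /tmp/demo /tmp/demo/bin
      ```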

    Tilde Expansion

      A tilde-prefix consists of an unquoted tilde character at the beginning of a word, followed by all of the characters preceding the first unquoted slash in the word, or all the characters in the word if there is no slash. In an assignment, multiple tilde-prefixes can be used: at the beginning of the word (that is, following the equal sign of the assignment), following any unquoted colon or both. A tilde-prefix in an assignment is terminated by the first unquoted colon or slash. If none of the characters in the tilde-prefix are quoted, the characters in the tilde-prefix following the tilde are treated as a possible login name from the user database.

      A portable login name cannot contain characters outside the set given in the description of the LOGNAME environment variable. If the login name is null (that is, the tilde-prefix contains only the tilde), the tilde-prefix will be replaced by the value of the variable HOME . If HOME is unset, the results are unspecified. Otherwise, the tilde-prefix will be replaced by a pathname of the home directory associated with the login name obtained using the getpwnam function. If the system does not recognize the login name, the results are undefined.

      Tilde expansion generally occurs only at the beginning of words, but an exception based on historical practice has been included:


      PATH=/posix/bin:~dgk/bin

      is eligible for tilde expansion because tilde follows a colon and none of the relevant characters is quoted. Consideration was given to prohibiting this behavior because any of the following are reasonable substitutes:


      PATH=$(printf %s ~karels/bin : ~bostic/bin)
      for Dir in ~maart/bin ~srb/bin .
      do
           PATH=${PATH:+$PATH:}$Dir
      done

      With the first command, explicit colons are used for each directory. In all cases, the shell performs tilde expansion on each directory because all are separate words to the shell.

      Note that expressions in operands such as:


      make -k mumble LIBDIR=~chet/lib

      do not qualify as shell variable assignments and tilde expansion is not performed (unless the command does so itself, which make does not).

      The special sequence $~ has been designated for future implementations to evaluate as a means of forcing tilde expansion in any word.

      Because of the requirement that the word not be quoted, the following are not equivalent; only the last will cause tilde expansion:



      \~hlj/   ~h\lj/   ~"hlj"/   ~hlj\/   ~hlj/
      

      The results of giving tilde with an unknown login name are undefined because the KornShell ~+ and ~- constructs make use of this condition, but, in general it is an error to give an incorrect login name with tilde. The results of having HOME unset are unspecified because some historical shells treat this as an error.

    Command Substitution

      The standard output from a command enclosed in parenthesis preceded by a dollar sign (that is, $( command ) ) or a pair of grave accents ( `` ) may be used as part or all of a word; trailing new-lines are removed. In the second (archaic) form, the string between the quotes is processed for special quoting characters before the command is executed. (See Quoting below.) The command substitution $(cat file ) can be replaced by the equivalent but faster $(< file ) . Command substitution of most special commands that do not perform input/output redirection are carried out without creating a separate process.

      Command substitution allows the output of a command to be substituted in place of the command name itself. Command substitution occurs when the command is enclosed as follows:


      $( command )


      or (backquoted version):


      ` command `


      The shell will expand the command substitution by executing command in a subshell environment and replacing the command substitution (the text of command plus the enclosing $() or backquotes) with the standard output of the command, removing sequences of one or more newline characters at the end of the substitution. Embedded newline characters before the end of the output will not be removed; however, they may be treated as field delimiters and eliminated during field splitting, depending on the value of IFS and quoting that is in effect.

      Within the backquoted style of command substitution, backslash shall retain its literal meaning, except when followed by:



      $     `     \
      

      (dollar-sign, backquote, backslash). The search for the matching backquote is satisfied by the first backquote found without a preceding backslash; during this search, if a non-escaped backquote is encountered within a shell comment, a here-document, an embedded command substitution of the $( command ) form, or a quoted string, undefined results occur. A single- or double-quoted string that begins, but does not end, within the ` ... ` sequence produces undefined results.

      With the $( command ) form, all characters following the open parenthesis to the matching closing parenthesis constitute the command . Any valid shell script can be used for command , except:

      • A script consisting solely of redirections produces unspecified results.

      • See the restriction on single subshells described below.

      The results of command substitution will not be processed for further tilde expansion, parameter expansion, command substitution, or arithmetic expansion. If a command substitution occurs inside double-quotes, field splitting and pathname expansion will not be performed on the results of the substitution.

      Command substitution can be nested. To specify nesting within the backquoted version, the application must precede the inner backquotes with backslashes; for example:


      `\` command \``


      The $() form of command substitution solves a problem of inconsistent behavior when using backquotes. For example:

      Command Output
      echo '\$x'             \$x
      echo `echo '\$x'`      $x
      echo $(echo '\$x')     \$x

      Additionally, the backquoted syntax has historical restrictions on the contents of the embedded command. While the new $() form can process any kind of valid embedded script, the backquoted form cannot handle some valid scripts that include backquotes. For example, these otherwise valid embedded scripts do not work in the left column, but do work on the right:

      echo `                        echo $(
      cat <<eof                     cat <<eof
      a here-doc with `             a here-doc with )
      eof                           eof
      `                             )

      echo `                        echo $(
      echo abc # a comment with `   echo abc # a comment with )
      `                             )

      echo `                        echo $(
      echo '`'                      echo ')'
      `                             )

      Because of these inconsistent behaviors, the backquoted variety of command substitution is not recommended for new applications that nest command substitutions or attempt to embed complex scripts.
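      A quick sketch of the difference when nesting: the $() form nests directly, while nested backquotes must be escaped with backslashes:

      ```shell
      # both compute basename(dirname(/usr/share/doc)) = share
      new=$(basename $(dirname /usr/share/doc))
      old=`basename \`dirname /usr/share/doc\``
      echo "$new $old"    # → share share
      ```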

      If the command substitution consists of a single subshell, such as:


      $( ( command ) )


      a portable application must separate the $( and ( into two tokens (that is, separate them with white space). This is required to avoid any ambiguities with arithmetic expansion.

    Arithmetic Expansion

      An arithmetic expression enclosed in double parentheses preceded by a dollar sign ( $(( arithmetic-expression )) ) is replaced by the value of the arithmetic expression within the double parenthesis. Arithmetic expansion provides a mechanism for evaluating an arithmetic expression and substituting its value. The format for arithmetic expansion is as follows:


      $(( expression ))


      The expression is treated as if it were in double-quotes, except that a double-quote inside the expression is not treated specially. The shell will expand all tokens in the expression for parameter expansion, command substitution and quote removal.

      Next, the shell will treat this as an arithmetic expression and substitute the value of the expression. The arithmetic expression will be processed according to the rules of the ISO C with the following exceptions:

      • Only integer arithmetic is required.

      • The sizeof() operator and the prefix and postfix ++ and -- operators are not required.

      • Selection, iteration, and jump statements are not supported.

      As an extension, the shell may recognize arithmetic expressions beyond those listed. If the expression is invalid, the expansion will fail and the shell will write a message to standard error indicating the failure.

      A simple example using arithmetic expansion:


      # repeat a command 100 times
      x=100
      while [ $x -gt 0 ]
      do
           command
           x=$(($x-1))
      done

    Process Substitution

      This feature is available in SunOS and only on versions of the UNIX operating system that support the /dev/fd directory for naming open files. Each command argument of the form <( list ) or >( list ) will run process list asynchronously connected to some file in /dev/fd . The name of this file will become the argument to the command. If the form with > is selected, then writing on this file will provide input for list . If < is used, then the file passed as an argument will contain the output of the list process. For example,


      paste <(cut -f1 file1 ) <(cut -f3 file2 ) | tee >( process1 ) >( process2 )


      cuts fields 1 and 3 from the files file1 and file2 , respectively, pastes the results together, and sends them to the processes process1 and process2 , as well as writing them to the standard output. Note that the file passed as an argument to the command is a UNIX pipe(2) , so programs that expect to lseek(2) on the file will not work.

    Parameter Substitution

      A parameter is an identifier , one or more digits, or any of the characters * , @ , # , ? , - , $ , and ! . A variable (a parameter denoted by an identifier ) has a value and zero or more attributes . Variables can be assigned values and attributes by using the typeset special command. The attributes supported by the shell are described later with the typeset special command. Exported variables pass values and attributes to the environment.

      The shell supports a one-dimensional array facility. An element of an array variable is referenced by a subscript . A subscript is denoted by a [ , followed by an arithmetic expression (see Arithmetic Evaluation below) followed by a ] . To assign values to an array, use set -A name value .... The value of all subscripts must be in the range of 0 through 1023. Arrays need not be declared. Any reference to a variable with a valid subscript is legal and an array will be created if necessary. Referencing an array without a subscript is equivalent to referencing the element 0 . If an array identifier with subscript * or @ is used, then the value for each of the elements is substituted (separated by a field separator character).

      The value of a variable may be assigned by writing:


      name = value [ name = value ] ...


      If the integer attribute, -i , is set for name , the value is subject to arithmetic evaluation as described below.

      Positional parameters, parameters denoted by a number, may be assigned values with the set special command. Parameter $0 is set from argument zero when the shell is invoked. If parameter is one or more digits then it is a positional parameter. A positional parameter of more than one digit must be enclosed in braces.

    Parameter Expansion

      The format for parameter expansion is as follows:


      ${ expression }


      where expression consists of all characters until the matching } . Any } escaped by a backslash or within a quoted string, and characters in embedded arithmetic expansions, command substitutions and variable expansions, are not examined in determining the matching } .

      The simplest form for parameter expansion is:


      ${ parameter }


      The value, if any, of parameter will be substituted.

      The parameter name or symbol can be enclosed in braces, which are optional except for positional parameters with more than one digit or when parameter is followed by a character that could be interpreted as part of the name. The matching closing brace will be determined by counting brace levels, skipping over enclosed quoted strings and command substitutions.

      If the parameter name or symbol is not enclosed in braces, the expansion will use the longest valid name whether or not the symbol represented by that name exists. When the shell is scanning its input to determine the boundaries of a name, it is not bound by its knowledge of what names are already defined. For example, if F is a defined shell variable, the command:



      echo $Fred
      


      does not echo the value of $F followed by red ; it selects the longest possible valid name, Fred , which in this case might be unset.
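      The braces make the boundary of the name explicit, as in this short sketch:

```shell
# Without braces the shell takes the longest valid name (Fred);
# with braces the name ends at the closing brace.
F=foo
echo "$Fred"     # empty line if Fred is unset
echo "${F}red"   # foored
```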

      If a parameter expansion occurs inside double-quotes:

      • Pathname expansion will not be performed on the results of the expansion.

      • Field splitting will not be performed on the results of the expansion, with the exception of @ .

      In addition, a parameter expansion can be modified by using one of the following formats. In each case that a value of word is needed (based on the state of parameter , as described below), word will be subjected to tilde expansion, parameter expansion, command substitution and arithmetic expansion. If word is not needed, it will not be expanded. The } character that delimits the following parameter expansion modifications is determined as described previously in this section and in dquote . (For example, ${foo-bar}xyz} would result in the expansion of foo followed by the string xyz} if foo is set, else the string barxyz} ).

      ${ parameter :- word }

      Use Default Values. If parameter is unset or null, the expansion of word will be substituted; otherwise, the value of parameter will be substituted.

      ${ parameter := word }

      Assign Default Values. If parameter is unset or null, the expansion of word will be assigned to parameter . In all cases, the final value of parameter will be substituted. Only variables, not positional parameters or special parameters, can be assigned in this way.

      ${ parameter :?[ word ]}

      Indicate Error if Null or Unset. If parameter is unset or null, the expansion of word (or a message indicating it is unset if word is omitted) will be written to standard error and the shell will exit with a non-zero exit status. Otherwise, the value of parameter will be substituted. An interactive shell need not exit.

      ${ parameter :+[ word ]}

      Use Alternative Value. If parameter is unset or null, null will be substituted; otherwise, the expansion of word will be substituted.

      In the parameter expansions shown previously, use of the colon in the format results in a test for a parameter that is unset or null; omission of the colon results in a test for a parameter that is only unset. The following table summarizes the effect of the colon:

                                 parameter set          parameter set      parameter
                                 and not null           but null           unset
      ${ parameter :- word }     substitute parameter   substitute word    substitute word
      ${ parameter - word }      substitute parameter   substitute null    substitute word
      ${ parameter := word }     substitute parameter   assign word        assign word
      ${ parameter = word }      substitute parameter   substitute null    assign word
      ${ parameter :? word }     substitute parameter   error, exit        error, exit
      ${ parameter ? word }      substitute parameter   substitute null    error, exit
      ${ parameter :+ word }     substitute word        substitute null    substitute null
      ${ parameter + word }      substitute word        substitute word    substitute null

      In all cases shown with "substitute", the expression is replaced with the value shown. In all cases shown with "assign" parameter is assigned that value, which also replaces the expression.
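      A short sketch of the colon's effect, distinguishing "set but null" from "unset":

```shell
# The colon form also substitutes when the parameter is null,
# not only when it is unset.
x=""                  # set but null
echo "${x-default}"   # prints an empty line: x is set
echo "${x:-default}"  # prints "default": x is null
unset x
echo "${x-default}"   # prints "default": x is unset
```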

      ${# parameter }

      String Length . The length in characters of the value of parameter . If parameter is * or @ , then all the positional parameters, starting with $1 , are substituted (separated by a field separator character).

      The following four varieties of parameter expansion provide for substring processing. In each case, pattern matching notation (see patmat ), rather than regular expression notation, will be used to evaluate the patterns. If parameter is * or @ , then all the positional parameters, starting with $1 , are substituted (separated by a field separator character). Enclosing the full parameter expansion string in double-quotes will not cause the following four varieties of pattern characters to be quoted, whereas quoting characters within the braces will have this effect.

      ${ parameter % word }

      Remove Smallest Suffix Pattern. The word will be expanded to produce a pattern. The parameter expansion then will result in parameter , with the smallest portion of the suffix matched by the pattern deleted.

      ${ parameter %% word }

      Remove Largest Suffix Pattern. The word will be expanded to produce a pattern. The parameter expansion then will result in parameter , with the largest portion of the suffix matched by the pattern deleted.

      ${ parameter # word }

      Remove Smallest Prefix Pattern. The word will be expanded to produce a pattern. The parameter expansion then will result in parameter , with the smallest portion of the prefix matched by the pattern deleted.

      ${ parameter ## word }

      Remove Largest Prefix Pattern. The word will be expanded to produce a pattern. The parameter expansion then will result in parameter , with the largest portion of the prefix matched by the pattern deleted.

      Examples :

      ${ parameter :- word }


      In this example, ls is executed only if x is null or unset. (The $(ls) command substitution notation is explained in Command Substitution above.)

      ${x:-$(ls)}


      ${ parameter := word }


      unset X
      echo ${X:=abc}
      abc

      ${ parameter :? word }


      unset posix
      echo ${posix:?}
      sh: posix: parameter null or not set

      ${ parameter :+ word }


      set a b c
      echo ${3:+posix}
      posix

      ${# parameter }


      HOME=/usr/posix
      echo ${#HOME} 
      10

      ${ parameter % word }


      x=file.c
      echo ${x%.c}.o
      file.o

      ${ parameter %% word }


      x=posix/src/std
      echo ${x%%/*}
      posix

      ${ parameter # word }


      x=$HOME/src/cmd
      echo ${x#$HOME}
      /src/cmd

      ${ parameter ## word }


      x=/one/two/three
      echo ${x##*/}
      three
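      A common idiom built from the last two examples: emulating basename and dirname with the pattern-removal forms.

```shell
# ## removes the largest prefix matching */ (everything up to the
# last slash); % removes the smallest suffix matching /* .
path=/usr/local/bin/ksh
echo "${path##*/}"   # ksh            (like basename)
echo "${path%/*}"    # /usr/local/bin (like dirname)
```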

    Parameters Set by Shell

      The following parameters are automatically set by the shell:

      #

      The number of positional parameters in decimal.

      -

      Flags supplied to the shell on invocation or by the set command.

      ?

      The decimal value returned by the last executed command.

      $

      The process number of this shell.

      _

      Initially, the value of _ is an absolute pathname of the shell or script being executed as passed in the environment . Subsequently it is assigned the last argument of the previous command. This parameter is not set for commands which are asynchronous. This parameter is also used to hold the name of the matching MAIL file when checking for mail.

      !

      The process number of the last background command invoked.

      ERRNO

      The value of errno as set by the most recently failed system call. This value is system dependent and is intended for debugging purposes.

      LINENO

      The line number of the current line within the script or function being executed.

      OLDPWD

      The previous working directory set by the cd command.

      OPTARG

      The value of the last option argument processed by the getopts special command.

      OPTIND

      The index of the last option argument processed by the getopts special command.

      PPID

      The process number of the parent of the shell.

      PWD

      The present working directory set by the cd command.

      RANDOM

      Each time this variable is referenced, a random integer, uniformly distributed between 0 and 32767, is generated. The sequence of random numbers can be initialized by assigning a numeric value to RANDOM .

      REPLY

      This variable is set by the select statement and by the read special command when no arguments are supplied.

      SECONDS

      Each time this variable is referenced, the number of seconds since shell invocation is returned. If this variable is assigned a value, then the value returned upon reference will be the value that was assigned plus the number of seconds since the assignment.
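      A few of the parameters above can be inspected directly; this sketch shows # , ? , and $ :

```shell
# Inspecting some shell-set parameters.
set -- one two three
echo $#     # 3: number of positional parameters
true
echo $?     # 0: exit status of the last command
echo $$     # process number of this shell (varies per run)
```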

    Variables Used by Shell

      The following variables are used by the shell:

      CDPATH

      The search path for the cd command.

      COLUMNS

      If this variable is set, the value is used to define the width of the edit window for the shell edit modes and for printing select lists.

      EDITOR

      If the value of this variable ends in emacs , gmacs , or vi and the VISUAL variable is not set, then the corresponding option (see the set special command below) will be turned on.

      ENV

      This variable, when the shell is invoked, is subjected to parameter expansion by the shell and the resulting value is used as a pathname of a file containing shell commands to execute in the current environment. The file need not be executable. If the expanded value of ENV is not an absolute pathname, the results are unspecified. ENV will be ignored if the user's real and effective user ID s or real and effective group ID s are different.

      This variable can be used to set aliases and other items local to the invocation of a shell. The file referred to by ENV differs from $HOME/.profile in that .profile is typically executed at session startup, whereas the ENV file is executed at the beginning of each shell invocation. The ENV value is interpreted in a manner similar to a dot script, in that the commands are executed in the current environment and the file needs to be readable, but not executable. However, unlike dot scripts, no PATH searching is performed. This is used as a guard against Trojan Horse security breaches.
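      A minimal sketch of such a file (the name $HOME/.kshrc is only a convention; ENV must be set to its pathname, typically in .profile ):

```shell
# Example ENV file: aliases and options local to each shell
# invocation. Sourced like a dot script at shell startup.
alias ll='ls -l'
set -o vi
```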

      FCEDIT

      The default editor name for the fc command.

      FPATH

      The search path for function definitions. By default the FPATH directories are searched after the PATH variable. If an executable file is found, then it is read and executed in the current environment. FPATH is searched before PATH when a function with the -u attribute is referenced. The preset alias autoload causes a function with the -u attribute to be created.

      IFS

      Internal field separators, normally space , tab , and new-line that are used to separate command words which result from command or parameter substitution and for separating words with the special command read . The first character of the IFS variable is used to separate arguments for the $* substitution (See Quoting below).
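      A short sketch of IFS -driven field splitting on an unquoted expansion:

```shell
# With IFS set to ':', the unquoted expansion of $list is split
# into three fields. Save and restore IFS around the change.
old_ifs=$IFS
IFS=:
list=a:b:c
set -- $list
echo $#      # 3
echo "$2"    # b
IFS=$old_ifs
```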

      HISTFILE

      If this variable is set when the shell is invoked, then the value is the pathname of the file that will be used to store the command history. (See Command re-entry below.)

      HISTSIZE

      If this variable is set when the shell is invoked, then the number of previously entered commands that are accessible by this shell will be greater than or equal to this number. The default is 128 .

      HOME

      The default argument (home directory) for the cd command.

      LC_ALL

      This variable provides a default value for the LC_* variables.

      LC_COLLATE

      This variable determines the behavior of range expressions, equivalence classes and multi-byte character collating elements within pattern matching.

      LC_CTYPE

      Determines how the shell handles characters. When LC_CTYPE is set to a valid value, the shell can display and handle text and filenames containing valid characters for that locale. If LC_CTYPE (see environ(5) ) is not set in the environment, the operational behavior of the shell is determined by the value of the LANG environment variable. If LC_ALL is set, its contents are used to override both the LANG and the other LC_* variables.

      LC_MESSAGES

      This variable determines the language in which messages should be written.

      LANG

      Provide a default value for the internationalization variables that are unset or null. If any of the internationalization variables contains an invalid setting, the utility will behave as if none of the variables had been defined.

      LINENO

      This variable is set by the shell to a decimal number representing the current sequential line number (numbered starting with 1) within a script or function before it executes each command. If the user unsets or resets LINENO , the variable may lose its special meaning for the life of the shell. If the shell is not currently executing a script or function, the value of LINENO is unspecified.

      LINES

      If this variable is set, the value is used to determine the column length for printing select lists. Select lists will print vertically until about two-thirds of LINES lines are filled.

      MAIL

      If this variable is set to the name of a mail file and the MAILPATH variable is not set, then the shell informs the user of arrival of mail in the specified file.

      MAILCHECK

      This variable specifies how often (in seconds) the shell will check for changes in the modification time of any of the files specified by the MAILPATH or MAIL variables. The default value is 600 seconds. When the time has elapsed the shell will check before issuing the next prompt.

      MAILPATH

      A colon ( : ) separated list of file names. If this variable is set, then the shell informs the user of any modifications to the specified files that have occurred within the last MAILCHECK seconds. Each file name can be followed by a ? and a message that will be printed. The message will undergo parameter substitution with the variable $_ defined as the name of the file that has changed. The default message is you have mail in $_ .

      NLSPATH

      Determine the location of message catalogues for the processing of LC_MESSAGES .

      PATH

      The search path for commands (see Execution below). The user may not change PATH if executing under rksh (except in .profile ).

      PPID

      This variable is set by the shell to the decimal process ID of the process that invoked the shell. In a subshell, PPID will be set to the same value as that of the parent of the current shell. For example, echo $PPID and (echo $PPID) would produce the same value.

      PS1

      The value of this variable is expanded for parameter substitution to define the primary prompt string which by default is `` $ ''. The character ! in the primary prompt string is replaced by the command number (see Command Re-entry below). Two successive occurrences of ! will produce a single ! when the prompt string is printed.

      PS2

      Secondary prompt string, by default `` > ''.

      PS3

      Selection prompt string used within a select loop, by default `` #? ''.

      PS4

      The value of this variable is expanded for parameter substitution and precedes each line of an execution trace. If omitted, the execution trace prompt is `` + ''.

      SHELL

      The pathname of the shell is kept in the environment. At invocation, if the basename of this variable is rsh , rksh , or krsh , then the shell becomes restricted.

      TMOUT

      If set to a value greater than zero, the shell will terminate if a command is not entered within the prescribed number of seconds after issuing the PS1 prompt. (Note that the shell can be compiled with a maximum bound for this value which cannot be exceeded.)

      VISUAL

      If the value of this variable ends in emacs , gmacs , or vi , then the corresponding option (see Special Command set below) will be turned on.

      The shell gives default values to PATH , PS1 , PS2 , PS3 , PS4 , MAILCHECK , FCEDIT , TMOUT , and IFS , while HOME , SHELL , ENV , and MAIL are not set at all by the shell (although HOME is set by login(1) ). On some systems MAIL and SHELL are also set by login .

    Blank Interpretation

      After parameter and command substitution, the results of substitutions are scanned for the field separator characters (those found in IFS ) and split into distinct arguments where such characters are found. Explicit null arguments ( "" ) or ( '' ) are retained. Implicit null arguments (those resulting from parameters that have no values) are removed.
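      The explicit/implicit distinction in one sketch:

```shell
# The quoted "" survives as a null argument; the unquoted expansion
# of an unset variable disappears entirely.
unset undef
set -- "" $undef x
echo $#    # 2
```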

    File Name Generation

      Following substitution, each command word is scanned for the characters * , ? , and [ unless the -f option has been set . If one of these characters appears, the word is regarded as a pattern . The word is replaced with lexicographically sorted file names that match the pattern. If no file name is found that matches the pattern, the word is left unchanged. When a pattern is used for file name generation, the character period ( . ) at the start of a file name or immediately following a / , as well as the character / itself, must be matched explicitly. A file name beginning with a period will not be matched with a pattern with the period inside parentheses; that is,


      ls .@(r*)


      would locate a file named .restore , but ls @(.r*) would not. In other instances of pattern matching the / and . are not treated specially.


      *

      Matches any string, including the null string.

      ?

      Matches any single character.

      [ ... ]

      Matches any one of the enclosed characters. A pair of characters separated by - matches any character lexically between the pair, inclusive. If the first character following the opening [ is a ! , then any character not enclosed is matched. A - can be included in the character set by putting it as the first or last character.
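      A small sketch of the basic pattern characters and the leading-dot rule, run in a scratch directory (mktemp is assumed to be available):

```shell
# Create a scratch directory with a few files, then glob.
d=$(mktemp -d) && cd "$d"
: > a.c; : > b.c; : > .hidden
echo *.c    # a.c b.c
echo *      # a.c b.c  -- .hidden needs an explicit leading dot
```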


      A pattern-list is a list of one or more patterns separated from each other with a | . Composite patterns can be formed with one or more of the following:


      ?( pattern-list )

      Optionally matches any one of the given patterns.

      *( pattern-list )

      Matches zero or more occurrences of the given patterns.

      +( pattern-list )

      Matches one or more occurrences of the given patterns.

      @( pattern-list )

      Matches exactly one of the given patterns.

      !( pattern-list )

      Matches anything, except one of the given patterns.


    Quoting

      Each of the metacharacters listed above (See Definitions ) has a special meaning to the shell and causes termination of a word unless quoted. A character may be quoted (that is, made to stand for itself) by preceding it with a \ . The pair \NEWLINE is removed. All characters enclosed between a pair of single quote marks ( ' ' ) are quoted. A single quote cannot appear within single quotes. Inside double quote marks ( "" ), parameter and command substitution occur and \ quotes the characters \ , ` , " , and $ . The meaning of $* and $@ is identical when not quoted or when used as a parameter assignment value or as a file name. However, when used as a command argument, $* is equivalent to ``$1 d $2 d ...'', where d is the first character of the IFS variable, whereas $@ is equivalent to $1 $2 .... Inside grave quote marks ( `` ), \ quotes the characters \ , ' , and $ . If the grave quotes occur within double quotes, then \ also quotes the character " .

      The special meaning of reserved words or aliases can be removed by quoting any character of the reserved word. The recognition of function names or special command names listed below cannot be altered by quoting them.
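      The $* versus $@ distinction described above, as a runnable sketch:

```shell
# "$@" preserves the original argument boundaries; "$*" joins the
# arguments with the first IFS character (a space by default).
set -- "a b" c
for w in "$@"; do echo "$w"; done   # two lines: 'a b' then 'c'
for w in "$*"; do echo "$w"; done   # one line:  'a b c'
```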

    Arithmetic Evaluation

      An ability to perform integer arithmetic is provided with the special command let . Evaluations are performed using long arithmetic. Constants are of the form [ base # ] n where base is a decimal number between two and thirty-six representing the arithmetic base and n is a number in that base. If base is omitted then base 10 is used.

      An arithmetic expression uses the same syntax, precedence, and associativity of expression as the C language. All the integral operators, other than ++ , -- , ?: , and , (the comma operator), are supported. Variables can be referenced by name within an arithmetic expression without using the parameter substitution syntax. When a variable is referenced, its value is evaluated as an arithmetic expression.

      An internal integer representation of a variable can be specified with the -i option of the typeset special command. Arithmetic evaluation is performed on the value of each assignment to a variable with the -i attribute. If you do not specify an arithmetic base, the first assignment to the variable determines the arithmetic base. This base is used when parameter substitution occurs.

      Since many of the arithmetic operators require quoting, an alternative form of the let command is provided. For any command which begins with a (( , all the characters until a matching )) are treated as a quoted expression. More precisely, (( ... )) is equivalent to let " ... " .
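      A sketch of let , its (( )) shorthand, and a base # n constant (ksh syntax, also accepted by bash; not part of plain POSIX sh):

```shell
# let evaluates its arguments as arithmetic; (( ... )) is the
# quoted equivalent of let " ... ".
let "x = 2 + 3"
echo $x           # 5
(( x = x * 2 ))
echo $x           # 10
echo $((16#ff))   # 255: a base-16 constant
```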

    Prompting

      When used interactively, the shell prompts with the parameter expanded value of PS1 before reading a command. If at any time a new-line is typed and further input is needed to complete a command, then the secondary prompt (that is, the value of PS2 ) is issued.

    Conditional Expressions

      A conditional expression is used with the [[ compound command to test attributes of files and to compare strings. Word splitting and file name generation are not performed on the words between [[ and ]] . Each expression can be constructed from one or more of the following unary or binary expressions:

      -a file

      True, if file exists.

      -b file

      True, if file exists and is a block special file.

      -c file

      True, if file exists and is a character special file.

      -d file

      True, if file exists and is a directory.

      -e file

      True, if file exists.

      -f file

      True, if file exists and is an ordinary file.

      -g file

      True, if file exists and has its setgid bit set.

      -k file

      True, if file exists and has its sticky bit set.

      -n string

      True, if length of string is non-zero.

      -o option

      True, if option named option is on.

      -p file

      True, if file exists and is a fifo special file or a pipe.

      -r file

      True, if file exists and is readable by current process.

      -s file

      True, if file exists and has size greater than zero.

      -t fildes

      True, if file descriptor number fildes is open and associated with a terminal device.

      -u file

      True, if file exists and has its setuid bit set.

      -w file

      True, if file exists and is writable by current process.

      -x file

      True, if file exists and is executable by current process. If file exists and is a directory, then the current process has permission to search in the directory.

      -z string

      True, if length of string is zero.

      -L file

      True, if file exists and is a symbolic link.

      -O file

      True, if file exists and is owned by the effective user id of this process.

      -G file

      True, if file exists and its group matches the effective group id of this process.

      -S file

      True, if file exists and is a socket.

      file1 -nt file2

      True, if file1 exists and is newer than file2 .

      file1 -ot file2

      True, if file1 exists and is older than file2 .

      file1 -ef file2

      True, if file1 and file2 exist and refer to the same file.

      string

      True if the string string is not the null string.

      string = pattern

      True, if string matches pattern .

      string != pattern

      True, if string does not match pattern .

      string1 = string2

      True if the strings string1 and string2 are identical.

      string1 != string2

      True if the strings string1 and string2 are not identical.

      string1 < string2

      True, if string1 comes before string2 based on strings interpreted as appropriate to the locale setting for category LC_COLLATE .

      string1 > string2

      True, if string1 comes after string2 based on strings interpreted as appropriate to the locale setting for category LC_COLLATE .

      exp1 -eq exp2

      True, if exp1 is equal to exp2 .

      exp1 -ne exp2

      True, if exp1 is not equal to exp2 .

      exp1 -lt exp2

      True, if exp1 is less than exp2 .

      exp1 -gt exp2

      True, if exp1 is greater than exp2 .

      exp1 -le exp2

      True, if exp1 is less than or equal to exp2 .

      exp1 -ge exp2

      True, if exp1 is greater than or equal to exp2 .

      In each of the above expressions, if file is of the form /dev/fd/ n , where n is an integer, then the test is applied to the open file whose descriptor number is n .

      A compound expression can be constructed from these primitives by using any of the following, listed in decreasing order of precedence.

      ( expression )

      True, if expression is true. Used to group expressions.

      ! expression

      True if expression is false.

      expression1 && expression2

      True, if expression1 and expression2 are both true.

      expression1 || expression2

      True, if either expression1 or expression2 is true.
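      A brief sketch combining the primitives above inside [[ ]] (the right side of = is a pattern, and no word splitting is performed):

```shell
# -e tests existence; = matches $f against the pattern /etc/* ;
# && combines the two within the same [[ ]].
f=/etc/hosts
if [[ -e $f && $f = /etc/* ]]; then
    echo match
fi
```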

    Input/Output

      Before a command is executed, its input and output may be redirected using a special notation interpreted by the shell. The following may appear anywhere in a simple-command or may precede or follow a command and are not passed on to the invoked command. Command and parameter substitution occur before word or digit is used except as noted below. File name generation occurs only if the pattern matches a single file, and blank interpretation is not performed.

      < word

      Use file word as standard input (file descriptor 0).

      > word

      Use file word as standard output (file descriptor 1). If the file does not exist then it is created. If the file exists, and the noclobber option is on, this causes an error; otherwise, it is truncated to zero length.

      >| word

      Same as > , except that it overrides the noclobber option.

      >> word

      Use file word as standard output. If the file exists then output is appended to it (by first seeking to the EOF ); otherwise, the file is created.

      <> word

      Open file word for reading and writing as standard input.

      << [ - ] word

      The shell input is read up to a line that is the same as word , or to an EOF . No parameter substitution, command substitution or file name generation is performed on word . The resulting document, called a here-document , becomes the standard input. If any character of word is quoted, then no interpretation is placed upon the characters of the document; otherwise, parameter and command substitution occur, \NEWLINE is ignored, and \ must be used to quote the characters \ , $ , ` , and the first character of word . If - is appended to << , then all leading tabs are stripped from word and from the document.

      <& digit

      The standard input is duplicated from file descriptor digit (see dup(2) ). Similarly for the standard output using >& digit .

      <&-

      The standard input is closed. Similarly for the standard output using >&- .

      <&p

      The input from the co-process is moved to standard input.

      >&p

      The output to the co-process is moved to standard output.

      If one of the above is preceded by a digit, then the file descriptor number referred to is that specified by the digit (instead of the default 0 or 1). For example:


      ... 2>&1


      means file descriptor 2 is to be opened for writing as a duplicate of file descriptor 1.

      The order in which redirections are specified is significant. The shell evaluates each redirection in terms of the ( file descriptor , file ) association at the time of evaluation. For example:


      ... 1> fname 2>&1


      first associates file descriptor 1 with file fname . It then associates file descriptor 2 with the file associated with file descriptor 1 (that is fname ). If the order of redirections were reversed, file descriptor 2 would be associated with the terminal (assuming file descriptor 1 had been) and then file descriptor 1 would be associated with file fname .

      If a command is followed by & and job control is not active, then the default standard input for the command is the empty file /dev/null . Otherwise, the environment for the execution of a command contains the file descriptors of the invoking shell as modified by input/output specifications.
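
      The redirection rules above can be exercised with a minimal POSIX sh sketch (the file names both.txt and here.txt are illustrative, not part of the manual):

```shell
# fd 1 is redirected to both.txt first, then fd 2 is duplicated from
# fd 1, so both streams land in the file. Reversing the two
# redirections would duplicate fd 2 from the terminal instead.
{ echo out; echo err >&2; } > both.txt 2>&1

# A here-document becomes standard input; because EOF is unquoted,
# command substitution is performed inside the document.
cat <<EOF > here.txt
substitution works: $(echo expanded)
EOF

cat both.txt   # both lines, in order
cat here.txt
rm -f both.txt here.txt
```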

    Environment

      The environment (see environ(5) ) is a list of name-value pairs that is passed to an executed program in the same way as a normal argument list. The names must be identifiers and the values are character strings. The shell interacts with the environment in several ways. On invocation, the shell scans the environment and creates a variable for each name found, giving it the corresponding value and marking it export . Executed commands inherit the environment. If the user modifies the values of these variables or creates new ones, using the export or typeset -x commands, they become part of the environment. The environment seen by any executed command is thus composed of any name-value pairs originally inherited by the shell, whose values may be modified by the current shell, plus any additions which must be noted in export or typeset -x commands.

      The environment for any simple-command or function may be augmented by prefixing it with one or more variable assignments. A variable assignment argument is a word of the form identifier=value . Thus:



      TERM=450 cmd args

      and



      (export TERM; TERM=450; cmd args)

      are equivalent (as far as the above execution of cmd is concerned, except for special commands listed below that are preceded with an asterisk).

      If the -k flag is set, all variable assignment arguments are placed in the environment, even if they occur after the command name. The following first prints a=b c and then c :



      echo a=b c
      set -k
      echo a=b c


      This feature is intended for use with scripts written for early versions of the shell and its use in new scripts is strongly discouraged. It is likely to disappear someday.
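
      The ordinary (non- -k ) behavior can be sketched as follows; the variable name MYVAR is invented for illustration. The assignment is visible only in the environment of the prefixed command, not in the invoking shell:

```shell
# The prefix assignment is exported to the child command only.
MYVAR=hello sh -c 'echo "$MYVAR"'   # the child sees hello

# In the invoking shell the variable is still unset.
echo "${MYVAR:-unset}"
```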

    Functions

      The function reserved word, described in the Commands section above, is used to define shell functions. Shell functions are read in and stored internally. Alias names are resolved when the function is read. Functions are executed like commands with the arguments passed as positional parameters. (See Execution below.)

      Functions execute in the same process as the caller and share all files and present working directory with the caller. Traps caught by the caller are reset to their default action inside the function. A trap condition that is not caught or ignored by the function causes the function to terminate and the condition to be passed on to the caller. A trap on EXIT set inside a function is executed after the function completes in the environment of the caller. Ordinarily, variables are shared between the calling program and the function. However, the typeset special command used within a function defines local variables whose scope includes the current function and all functions it calls.

      The special command return is used to return from function calls. Errors within functions return control to the caller.

      The names of all functions can be listed with typeset +f . typeset -f lists all function names as well as the text of all functions. typeset -f function-names lists the text of the named functions only. Functions can be undefined with the -f option of the unset special command.

      Ordinarily, functions are unset when the shell executes a shell script. The -xf option of the typeset command allows a function to be exported to scripts that are executed without a separate invocation of the shell. Functions that need to be defined across separate invocations of the shell should be specified in the ENV file with the -xf option of typeset .

    Function Definition Command

      A function is a user-defined name that is used as a simple command to call a compound command with new positional parameters. A function is defined with a function definition command .

      The format of a function definition command is as follows:


      fname() compound-command [ io-redirect ...]


      The function is named fname ; it must be a name. An implementation may allow other characters in a function name as an extension. The implementation will maintain separate name spaces for functions and variables.

      The () in the function definition command consists of two operators. Therefore, intermixing blank characters with the fname , ( , and ) is allowed, but unnecessary.

      The argument compound-command represents a compound command.

      When the function is declared, none of the expansions in wordexp will be performed on the text in compound-command or io-redirect ; all expansions will be performed as normal each time the function is called. Similarly, the optional io-redirect redirections and any variable assignments within compound-command will be performed during the execution of the function itself, not the function definition.

      When a function is executed, it will have the syntax-error and variable-assignment properties described for the special built-in utilities.

      The compound-command will be executed whenever the function name is specified as the name of a simple command. The operands to the command will temporarily become the positional parameters during the execution of the compound-command ; the special parameter # will also be changed to reflect the number of operands. The special parameter 0 will be unchanged. When the function completes, the values of the positional parameters and the special parameter # will be restored to the values they had before the function was executed. If the special built-in return is executed in the compound-command , the function will complete and execution will resume with the next command after the function call.

      An example of how a function definition can be used wherever a simple command is allowed:



      # If variable i is equal to "yes",
      # define function foo to be ls -l
      #
      [ "$i" = yes ] && foo() {
            ls -l
      }
      

      The exit status of a function definition will be 0 if the function was declared successfully; otherwise, it will be greater than zero. The exit status of a function invocation will be the exit status of the last command executed by the function.
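
      A minimal sketch of the behavior described above, using POSIX function syntax (the name count_args is invented for illustration): the operands become the positional parameters, and return sets the function's exit status.

```shell
count_args() {
    echo "$#"     # the operands become the positional parameters
    return 3      # sets the exit status of the function call
}

count_args a b c   # prints 3 (three operands)
echo "$?"          # prints 3 (the status set by return)
```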

    Jobs

      If the monitor option of the set command is turned on, an interactive shell associates a job with each pipeline. It keeps a table of current jobs, printed by the jobs command, and assigns them small integer numbers. When a job is started asynchronously with & , the shell prints a line which looks like:



      [1] 1234
      

      indicating that the job , which was started asynchronously, was job number 1 and had one (top-level) process, whose process id was 1234.

      If you are running a job and wish to do something else you may press the key ^Z (CTRL-Z) which sends a STOP signal to the current job. The shell will then normally indicate that the job has been `Stopped' , and print another prompt. You can then manipulate the state of this job, putting it in the background with the bg command, or run some other commands and then eventually bring the job back into the foreground with the foreground command fg . A ^Z takes effect immediately and is like an interrupt in that pending output and unread input are discarded when it is typed.

      A job being run in the background will stop if it tries to read from the terminal. Background jobs are normally allowed to produce output, but this can be disabled by giving the command "stty tostop" . If you set this tty option, then background jobs will stop when they try to produce output as they do when they try to read input.

      There are several ways to refer to job s in the shell. A job can be referred to by the process id of any process of the job or by one of the following:

      % number

      The job with the given number.

      % string

      Any job whose command line begins with string .

      %? string

      Any job whose command line contains string .

      %%

      Current job.

      %+

      Equivalent to %% .

      %-

      Previous job.

      The shell learns immediately whenever a process changes state. It normally informs you whenever a job becomes blocked so that no further progress is possible, but only just before it prints a prompt. This is done so that it does not otherwise disturb your work.

      When the monitor mode is on, each background job that completes triggers any trap set for CHLD .

      When you try to leave the shell while jobs are running or stopped, you will be warned with the message `You have stopped(running) jobs.' You may use the jobs command to see what they are. If you do this or immediately try to exit again, the shell will not warn you a second time, and the stopped jobs will be terminated. If you have nohup 'ed jobs running when you attempt to logout, you will be warned with the message:

      You have jobs running.

      You will then need to logout a second time to actually logout; however, your background jobs will continue to run.

    Signals

      The INT and QUIT signals for an invoked command are ignored if the command is followed by & and the monitor option is not active. Otherwise, signals have the values inherited by the shell from its parent (but see also the trap special command below).

    Execution

      Each time a command is executed, the above substitutions are carried out. If the command name matches one of the Special Commands listed below, it is executed within the current shell process. Next, the command name is checked to see if it matches one of the user defined functions. If it does, the positional parameters are saved and then reset to the arguments of the function call. When the function completes or issues a return , the positional parameter list is restored and any trap set on EXIT within the function is executed. The value of a function is the value of the last command executed. A function is also executed in the current shell process. If a command name is not a special command or a user defined function , a process is created and an attempt is made to execute the command via exec(2) .

      The shell variable PATH defines the search path for the directory containing the command. Alternative directory names are separated by a colon ( : ). The default path is /bin:/usr/bin: (specifying /bin , /usr/bin , and the current directory in that order). The current directory can be specified by two or more adjacent colons, or by a colon at the beginning or end of the path list. If the command name contains a / then the search path is not used. Otherwise, each directory in the path is searched for an executable file. If the file has execute permission but is not a directory or an a.out file, it is assumed to be a file containing shell commands. A sub-shell is spawned to read it. All non-exported aliases, functions, and variables are removed in this case. A parenthesized command is executed in a sub-shell without removing non-exported quantities.
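
      The PATH search can be sketched as follows (a scratch directory and the command name mycmd are created purely for illustration). A name containing no / is located via PATH ; a name containing a / is used directly:

```shell
dir=$(mktemp -d)
printf '%s\n' '#!/bin/sh' 'echo found' > "$dir/mycmd"
chmod +x "$dir/mycmd"

( PATH=$dir:$PATH; mycmd )   # no slash in the name: found via PATH
"$dir/mycmd"                 # slash present: PATH is not consulted

rm -rf "$dir"
```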

    Command Re-entry

      The text of the last HISTSIZE (default 128) commands entered from a terminal device is saved in a history file. The file $HOME/.sh_history is used if the HISTFILE variable is not set or if the file it names is not writable. A shell can access the commands of all interactive shells which use the same named HISTFILE . The special command fc is used to list or edit a portion of this file. The portion of the file to be edited or listed can be selected by number or by giving the first character or characters of the command. A single command or range of commands can be specified. If you do not specify an editor program as an argument to fc then the value of the variable FCEDIT is used. If FCEDIT is not defined, then /bin/ed is used. The edited command(s) is printed and re-executed upon leaving the editor. The editor name - is used to skip the editing phase and to re-execute the command. In this case a substitution parameter of the form old = new can be used to modify the command before execution. For example, if r is aliased to 'fc -e - ' then typing 'r bad=good c' will re-execute the most recent command which starts with the letter c , replacing the first occurrence of the string bad with the string good .

    In-line Editing Option

      Normally, each command line entered from a terminal device is simply typed followed by a new-line (RETURN or LINEFEED). If the emacs , gmacs , or vi option is active, the user can edit the command line. To be in any of these edit modes, set the corresponding option. An editing option is automatically selected each time the VISUAL or EDITOR variable is assigned a value ending in any of these option names.

      The editing features require that the user's terminal accept RETURN as carriage return without line feed and that a space must overwrite the current character on the screen.

      The editing modes implement a concept where the user is looking through a window at the current line. The window width is the value of COLUMNS if it is defined, otherwise 80. If the window width is too small to display the prompt and leave at least 8 columns to enter input, the prompt is truncated from the left. If the line is longer than the window width minus two, a mark is displayed at the end of the window to notify the user. As the cursor moves and reaches the window boundaries the window will be centered about the cursor. The mark is a > if the line extends on the right side of the window, < if the line extends on the left, and * if the line extends on both sides of the window.

      The search commands in each edit mode provide access to the history file. Only strings are matched, not patterns, although a leading caret ( ^ ) in the string restricts the match to begin at the first character in the line.

    emacs Editing Mode

      This mode is entered by enabling either the emacs or gmacs option. The only difference between these two modes is the way they handle ^T . To edit, move the cursor to the point needing correction and then insert or delete characters or words as needed. All the editing commands are control characters or escape sequences. The notation for control characters is caret ( ^ ) followed by the character. For example, ^F is the notation for control F . This is entered by depressing `f' while holding down the CTRL (control) key. The SHIFT key is not depressed. (The notation ^? indicates the DEL (delete) key.)

      The notation for escape sequences is M- followed by a character. For example, M-f (pronounced Meta f) is entered by depressing ESC (ascii 033 ) followed by `f'. ( M-F would be the notation for ESC followed by SHIFT (capital) `F'.)

      All edit commands operate from any place on the line (not just at the beginning). Neither the RETURN nor the LINEFEED key is entered after edit commands except when noted.

      ^F

      Move cursor forward (right) one character.

      M-f

      Move cursor forward one word. (The emacs editor's idea of a word is a string of characters consisting of only letters, digits and underscores.)

      ^B

      Move cursor backward (left) one character.

      M-b

      Move cursor backward one word.

      ^A

      Move cursor to start of line.

      ^E

      Move cursor to end of line.

      ^] char

      Move cursor forward to character char on current line.

      M-^] char

      Move cursor backward to character char on current line.

      ^X^X

      Interchange the cursor and mark.

      erase

      (User defined erase character as defined by the stty(1) command, usually ^H or # .) Delete previous character.

      ^D

      Delete current character.

      M-d

      Delete current word.

      M-^H

      (Meta-backspace) Delete previous word.

      M-h

      Delete previous word.

      M-^?

      (Meta-DEL) Delete previous word (if your interrupt character is ^? (DEL, the default) then this command will not work).

      ^T

      Transpose current character with next character in emacs mode. Transpose two previous characters in gmacs mode.

      ^C

      Capitalize current character.

      M-c

      Capitalize current word.

      M-l

      Change the current word to lower case.

      ^K

      Delete from the cursor to the end of the line. If preceded by a numerical parameter whose value is less than the current cursor position, then delete from given position up to the cursor. If preceded by a numerical parameter whose value is greater than the current cursor position, then delete from cursor up to given cursor position.

      ^W

      Kill from the cursor to the mark.

      M-p

      Push the region from the cursor to the mark on the stack.

      kill

      (User defined kill character as defined by the stty(1) command, usually ^G or @ .) Kill the entire current line. If two kill characters are entered in succession, all kill characters from then on cause a line feed (useful when using paper terminals).

      ^Y

      Restore last item removed from line. (Yank item back to the line.)

      ^L

      Line feed and print current line.

      ^@

      (null character) Set mark.

      M- space

      (Meta space) Set mark.

      ^J

      (New line) Execute the current line.

      ^M

      (Return) Execute the current line.

      eof

      End-of-file character, normally ^D , is processed as an End-of-file only if the current line is null.

      ^P

      Fetch previous command. Each time ^P is entered the previous command back in time is accessed. Moves back one line when not on the first line of a multi-line command.

      M-<

      Fetch the least recent (oldest) history line.

      M->

      Fetch the most recent (youngest) history line.

      ^N

      Fetch next command line. Each time ^N is entered the next command line forward in time is accessed.

      ^R string

      Reverse search history for a previous command line containing string . If a parameter of zero is given, the search is forward. string is terminated by a RETURN or NEW LINE. If string is preceded by a ^ , the matched line must begin with string . If string is omitted, then the next command line containing the most recent string is accessed. In this case a parameter of zero reverses the direction of the search.

      ^O

      Operate. Execute the current line and fetch the next line relative to current line from the history file.

      M- digits

      (Escape) Define numeric parameter, the digits are taken as a parameter to the next command. The commands that accept a parameter are ^F , ^B , erase , ^C , ^D , ^K , ^R , ^P , ^N , ^] , M-. , M-^] , M-_ , M-b , M-c , M-d , M-f , M-h , M-l and M-^H .

      M- letter

      Soft-key. Your alias list is searched for an alias by the name _ letter and if an alias of this name is defined, its value will be inserted on the input queue. The letter must not be one of the above meta-functions.

      M-[ letter

      Soft-key. Your alias list is searched for an alias by the name __ letter and if an alias of this name is defined, its value will be inserted on the input queue. This can be used to program function keys on many terminals.

      M-.

      The last word of the previous command is inserted on the line. If preceded by a numeric parameter, the value of this parameter determines which word to insert rather than the last word.

      M-_

      Same as M-. .

      M-*

      An asterisk is appended to the end of the word and a file name expansion is attempted.

      M-ESC

      File name completion. Replaces the current word with the longest common prefix of all filenames matching the current word with an asterisk appended. If the match is unique, a / is appended if the file is a directory and a space is appended if the file is not a directory.

      M-=

      List the files that would match the current word if an asterisk were appended.

      ^U

      Multiply parameter of next command by 4.

      \

      Escape next character. Editing characters, the user's erase, kill and interrupt (normally ^? ) characters may be entered in a command line or in a search string if preceded by a \ . The \ removes the next character's editing features (if any).

      ^V

      Display version of the shell.

      M-#

      Insert a # at the beginning of the line and execute it. This causes a comment to be inserted in the history file.

    vi Editing Mode

      There are two typing modes. Initially, when you enter a command you are in the input mode. To edit, enter control mode by typing ESC ( 033 ) and move the cursor to the point needing correction and then insert or delete characters or words as needed. Most control commands accept an optional repeat count prior to the command.

      When in vi mode on most systems, canonical processing is initially enabled and the command will be echoed again if the speed is 1200 baud or greater and it contains any control characters or less than one second has elapsed since the prompt was printed. The ESC character terminates canonical processing for the remainder of the command and the user can then modify the command line. This scheme has the advantages of canonical processing with the type-ahead echoing of raw mode.

      If the option viraw is also set, the terminal will always have canonical processing disabled. This mode is implicit for systems that do not support two alternate end of line delimiters, and may be helpful for certain terminals.

    Input Edit Commands

      By default the editor is in input mode.

      erase

      (User defined erase character as defined by the stty(1) command, usually ^H or # .) Delete previous character.

      ^W

      Delete the previous blank separated word.

      ^D

      Terminate the shell.

      ^V

      Escape next character. Editing characters and the user's erase or kill characters may be entered in a command line or in a search string if preceded by a ^V . The ^V removes the next character's editing features (if any).

      \

      Escape the next erase or kill character.

    Motion Edit Commands

      These commands will move the cursor.

      [ count ] l

      Cursor forward (right) one character.

      [ count ] w

      Cursor forward one alpha-numeric word.

      [ count ] W

      Cursor to the beginning of the next word that follows a blank.

      [ count ] e

      Cursor to end of word.

      [ count ] E

      Cursor to end of the current blank delimited word.

      [ count ] h

      Cursor backward (left) one character.

      [ count ] b

      Cursor backward one word.

      [ count ] B

      Cursor to preceding blank separated word.

      [ count ] |

      Cursor to column count .

      [ count ] f c

      Find the next character c in the current line.

      [ count ] F c

      Find the previous character c in the current line.

      [ count ] t c

      Equivalent to f followed by h .

      [ count ] T c

      Equivalent to F followed by l .

      [ count ] ;

      Repeats count times, the last single character find command, f , F , t , or T .

      [ count ] ,

      Reverses the last single character find command count times.

      0

      Cursor to start of line.

      ^

      Cursor to first non-blank character in line.

      $

      Cursor to end of line.

      %

      Moves to balancing ( , ) , { , } , [ , or ] . If the cursor is not on one of the above characters, the remainder of the line is searched for the first occurrence of one of the above characters.

    Search Edit Commands

      These commands access your command history.

      [ count ] k

      Fetch previous command. Each time k is entered the previous command back in time is accessed.

      [ count ] -

      Equivalent to k .

      [ count ] j

      Fetch next command. Each time j is entered, the next command forward in time is accessed.

      [ count ] +

      Equivalent to j .

      [ count ] G

      The command number count is fetched. The default is the least recent history command.

      / string

      Search backward through history for a previous command containing string . string is terminated by a RETURN or NEWLINE. If string is preceded by a ^ , the matched line must begin with string . If string is NULL, the previous string will be used.

      ? string

      Same as / except that search will be in the forward direction.

      n

      Search for next match of the last pattern to / or ? commands.

      N

      Search for the next match of the string entered by the previous / or ? command, but in the reverse direction.

    Text Modification Edit Commands

      These commands will modify the line.

      a

      Enter input mode and enter text after the current character.

      A

      Append text to the end of the line. Equivalent to $a .

      [ count ] c motion
      c [ count ] motion

      Delete current character through the character that motion would move the cursor to and enter input mode. If motion is c , the entire line will be deleted and input mode entered.

      C

      Delete the current character through the end of line and enter input mode. Equivalent to c$ .

      [ count ] s

      Delete count characters and enter input mode.

      S

      Equivalent to cc .

      D

      Delete the current character through the end of line. Equivalent to d$ .

      [ count ] d motion
      d [ count ] motion

      Delete current character through the character that motion would move to. If motion is d , the entire line will be deleted.

      i

      Enter input mode and insert text before the current character.

      I

      Insert text before the beginning of the line. Equivalent to 0i .

      [ count ] P

      Place the previous text modification before the cursor.

      [ count ] p

      Place the previous text modification after the cursor.

      R

      Enter input mode and replace characters on the screen with the characters you type, in overlay fashion.

      [ count ] r c

      Replace the count character(s) starting at the current cursor position with c , and advance the cursor.

      [ count ] x

      Delete current character.

      [ count ] X

      Delete preceding character.

      [ count ] .

      Repeat the previous text modification command.

      [ count ] ~

      Invert the case of the count character(s) starting at the current cursor position and advance the cursor.

      [ count ] _

      Causes the count word of the previous command to be appended and input mode entered. The last word is used if count is omitted.

      *

      Causes an * to be appended to the current word and file name generation attempted. If no match is found, it rings the bell. Otherwise, the word is replaced by the matching pattern and input mode is entered.

      \

      Filename completion. Replaces the current word with the longest common prefix of all filenames matching the current word with an asterisk appended. If the match is unique, a / is appended if the file is a directory and a space is appended if the file is not a directory.

    Other Edit Commands

      Miscellaneous commands.

      [ count ] y motion
      y [ count ] motion

      Yank current character through character that motion would move the cursor to and puts them into the delete buffer. The text and cursor are unchanged.

      Y

      Yanks from current position to end of line. Equivalent to y$ .

      u

      Undo the last text modifying command.

      U

      Undo all the text modifying commands performed on the line.

      [ count ] v

      Returns the command fc -e ${VISUAL:-${EDITOR:-vi}} count in the input buffer. If count is omitted, then the current line is used.

      ^L

      Line feed and print current line. Has effect only in control mode.

      ^J

      (New line) Execute the current line, regardless of mode.

      ^M

      (Return) Execute the current line, regardless of mode.

      #

      If the first character of the command is a # , then this command deletes this # and each # that follows a newline. Otherwise, sends the line after inserting a # in front of each line in the command. Useful for causing the current line to be inserted in the history as a comment and removing comments from previous comment commands in the history file.

      =

      List the file names that would match the current word if an asterisk were appended to it.

      @ letter

      Your alias list is searched for an alias by the name _ letter and if an alias of this name is defined, its value will be inserted on the input queue for processing.

    Special Commands

      The following simple-commands are executed in the shell process. Input/Output redirection is permitted. Unless otherwise indicated, the output is written on file descriptor 1 and the exit status, when there is no syntax error, is 0 . Commands that are preceded by one or two * (asterisks) are treated specially in the following ways:

      1. Variable assignment lists preceding the command remain in effect when the command completes.

      2. I/O redirections are processed after variable assignments.

      3. Errors cause a script that contains them to abort.

      4. Words, following a command preceded by ** that are in the format of a variable assignment, are expanded with the same rules as a variable assignment. This means that tilde substitution is performed after the = sign and word splitting and file name generation are not performed.

      * : [ arg ... ]

      The command only expands parameters.

      * . file [ arg ... ]

      Read the complete file then execute the commands. The commands are executed in the current shell environment. The search path specified by PATH is used to find the directory containing file . If any arguments arg are given, they become the positional parameters. Otherwise the positional parameters are unchanged. The exit status is the exit status of the last command executed.
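
      A minimal sketch of the dot command (the file name settings.sh is invented for illustration). Because the file is executed in the current shell environment, its variable assignments persist after it is read:

```shell
# Create a file of shell commands, then source it with the
# . special command; GREETING survives in the current shell.
echo 'GREETING="hello from dot"' > settings.sh
. ./settings.sh
echo "$GREETING"     # prints: hello from dot
rm -f settings.sh
```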

      ** alias [ -tx ] [ name [ = value ] ] ...

      alias with no arguments prints the list of aliases in the form name=value on standard output. An alias is defined for each name whose value is given. A trailing space in value causes the next word to be checked for alias substitution. The -t flag is used to set and list tracked aliases. The value of a tracked alias is the full pathname corresponding to the given name . The value becomes undefined when the value of PATH is reset, but the aliases remain tracked. Without the -t flag, for each name in the argument list for which no value is given, the name and value of the alias is printed. The -x flag is used to set or print exported aliases. An exported alias is defined for scripts invoked by name. The exit status is non-zero if a name is given, but no value, and no alias has been defined for the name .

      bg [ % job ... ]

      This command is only on systems that support job control. Puts each specified job into the background. The current job is put in the background if job is not specified. See "Jobs" section above for a description of the format of job .

      * break [ n ]

      Exit from the enclosing for , while , until , or select loop, if any. If n is specified, then break n levels.

      * continue [ n ]

      Resume the next iteration of the enclosing for , while , until , or select loop. If n is specified, then resume at the n -th enclosing loop.

      cd [ arg ]
      cd old new

      This command can be in either of two forms. In the first form it changes the current directory to arg . If arg is - the directory is changed to the previous directory. The shell variable HOME is the default arg . The variable PWD is set to the current directory. The shell variable CDPATH defines the search path for the directory containing arg . Alternative directory names are separated by a colon ( : ). The default path is null (specifying the current directory). Note that the current directory is specified by a null path name, which can appear immediately after the equal sign or between the colon delimiters anywhere else in the path list. If arg begins with a / then the search path is not used. Otherwise, each directory in the path is searched for arg .

      The second form of cd substitutes the string new for the string old in the current directory name, PWD and tries to change to this new directory. The cd command may not be executed by rksh .
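      The first form can be sketched as follows (the temporary directory is illustrative; the second form, cd old new , is specific to ksh and is shown only as a comment):

```shell
# 'cd -' changes back to the previous directory (OLDPWD)
# and prints the directory it changes to.
start=$(mktemp -d)
cd "$start"
cd /
cd -            # back to $start

# Second form (ksh only): substitute a string in $PWD, e.g.
#   cd /usr/spool/uucp; cd uucp news   ->  /usr/spool/news
```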

      command [ -p ] [ command_name ] [ argument ... ]
      command [ -v -V ] command_name

      The command utility causes the shell to treat the arguments as a simple command, suppressing the shell function lookup. The -p flag performs the command search using a default value for PATH that is guaranteed to find all of the standard utilities. The -v flag writes a string to standard output that indicates the pathname or command that will be used by the shell, in the current shell execution environment, to invoke command_name . The -V flag writes a string to standard output that indicates how the name given in the command_name operand will be interpreted by the shell, in the current shell execution environment.

      echo [ arg ... ]

      See echo(1) for usage and description.

      * eval [ arg ... ]

      The arguments are read as input to the shell and the resulting command(s) executed.

      * exec [ arg ... ]

      If arg is given, the command specified by the arguments is executed in place of this shell without creating a new process. Input/output arguments may appear and affect the current process. If no arguments are given the effect of this command is to modify file descriptors as prescribed by the input/output redirection list. In this case, any file descriptor numbers greater than 2 that are opened with this mechanism are closed when invoking another program.

      * exit [ n ]

      Causes the calling shell or shell script to exit with the exit status specified by n . The value will be the least significant 8 bits of the specified status. If n is omitted then the exit status is that of the last command executed. When exit occurs when executing a trap, the last command refers to the command that executed before the trap was invoked. An EOF will also cause the shell to exit except for a shell which has the ignoreeof option (See set below) turned on.

      ** export [ name [ = value ] ] ...

      The given name s are marked for automatic export to the environment of subsequently-executed commands.

      fc [ -e ename ] [ -nlr ] [ first [ last ] ]
      fc -e - [ old = new ] [ command ]

      In the first form, a range of commands from first to last is selected from the last HISTSIZE commands that were typed at the terminal. The arguments first and last may be specified as a number or as a string. A string is used to locate the most recent command starting with the given string. A negative number is used as an offset to the current command number. If the -l flag is selected, the commands are listed on standard output. Otherwise, the editor program ename is invoked on a file containing these keyboard commands. If ename is not supplied, then the value of the variable FCEDIT (default /bin/ed ) is used as the editor. When editing is complete, the edited command(s) are executed. If last is not specified then it will be set to first . If first is not specified the default is the previous command for editing and -16 for listing. The flag -r reverses the order of the commands and the flag -n suppresses command numbers when listing. In the second form the command is re-executed after the substitution old = new is performed. If no command argument is given, the most recent command typed at this terminal is executed.

      fg [ % job ... ]

      This command is only on systems that support job control. Each job specified is brought to the foreground. Otherwise, the current job is brought into the foreground. See "Jobs" section above for a description of the format of job .

      getopts optstring name [ arg ... ]

      Checks arg for legal options. If arg is omitted, the positional parameters are used. An option argument begins with a + or a - . An option not beginning with + or - or the argument - ends the options. optstring contains the letters that getopts recognizes. If a letter is followed by a : , that option is expected to have an argument. The options can be separated from the argument by blanks.

      Each time it is invoked, getopts places the next option letter it finds inside the variable name , with a + prepended when arg begins with a + . The index of the next arg is stored in OPTIND . The option argument, if any, gets stored in OPTARG .

      A leading : in optstring causes getopts to store the letter of an invalid option in OPTARG , and to set name to ? for an unknown option and to : when a required option is missing. Otherwise, getopts prints an error message. The exit status is non-zero when there are no more options. See getoptcvt(1) for usage and description.

      hash [ -r ] [ name ... ]

      For each name , the location in the search path of the command specified by name is determined and remembered by the shell. The -r option causes the shell to forget all remembered locations. If no arguments are given, information about remembered commands is presented. Hits is the number of times a command has been invoked by the shell process. Cost is a measure of the work required to locate a command in the search path. If a command is found in a "relative" directory in the search path, after changing to that directory, the stored location of that command is recalculated. Commands for which this will be done are indicated by an asterisk ( * ) adjacent to the hits information. Cost will be incremented when the recalculation is done.

      jobs [ -lnp ] [ % job ... ]

      Lists information about each given job; or all active jobs if job is omitted. The -l flag lists process ids in addition to the normal information. The -n flag displays only jobs that have stopped or exited since last notified. The -p flag causes only the process group to be listed. See "Jobs" section above and jobs(1) for a description of the format of job .

      kill [ - sig ] % job ...
      kill [ -sig ] pid ...
      kill -l

      Sends either the TERM (terminate) signal or the specified signal to the specified jobs or processes. Signals are either given by number or by names (as given in signal(5) stripped of the prefix ``SIG'' with the exception that SIGCHLD is named CHLD ). If the signal being sent is TERM (terminate) or HUP (hangup), then the job or process will be sent a CONT (continue) signal if it is stopped. The argument job can be the process id of a process that is not a member of one of the active jobs. See Jobs for a description of the format of job . In the last form, kill -l , the signal numbers and names are listed.

      let arg ...

      Each arg is a separate arithmetic expression to be evaluated. See the Arithmetic Evaluation section above, for a description of arithmetic expression evaluation.

      The exit status is 0 if the value of the last expression is non-zero, and 1 otherwise.

      login argument ...

      Equivalent to ` exec login argument ....' See login(1) for usage and description.

      * newgrp [ arg ... ]

      Equivalent to exec /bin/newgrp arg ....

      print [ -Rnprsu [ n ] ] [ arg ... ]

      The shell output mechanism. With no flags or with flag - or -- , the arguments are printed on standard output as described by echo(1) . The exit status is 0 , unless the output file is not open for writing.

      -n

      Suppress NEWLINE from being added to the output.

      -R | -r

      Raw mode. Ignore the escape conventions of echo . The -R option will print all subsequent arguments and options other than -n .

      -p

      Write the arguments to the pipe of the process spawned with |& instead of standard output.

      -s

      Write the arguments to the history file instead of standard output.

      -u [ n ]

      Specify a one digit file descriptor unit number n on which the output will be placed. The default is 1 .

      pwd

      Equivalent to print -r - $PWD .

      read [ -prsu [ n ] ] [ name ? prompt ] [ name ... ]

      The shell input mechanism. One line is read and is broken up into fields using the characters in IFS as separators. The escape character, ( \ ), is used to remove any special meaning for the next character and for line continuation. In raw mode, -r , the \ character is not treated specially. The first field is assigned to the first name , the second field to the second name , etc., with leftover fields assigned to the last name . The -p option causes the input line to be taken from the input pipe of a process spawned by the shell using |& . If the -s flag is present, the input will be saved as a command in the history file. The flag -u can be used to specify a one digit file descriptor unit n to read from. The file descriptor can be opened with the exec special command. The default value of n is 0 . If name is omitted then REPLY is used as the default name . The exit status is 0 unless the input file is not open for reading or an EOF is encountered. An EOF with the -p option causes cleanup for this process so that another can be spawned. If the first argument contains a ? , the remainder of this word is used as a prompt on standard error when the shell is interactive. The exit status is 0 unless an EOF is encountered.
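      The field-splitting behavior can be sketched as follows. The sample line imitates a passwd(4) entry; the variable names are illustrative:

```shell
# Split one line into fields on ':'. The assignment IFS=: applies
# only to the read command itself, not to the rest of the script.
IFS=: read user pass rest <<'EOF'
root:x:0:0:root:/root:/bin/sh
EOF

echo "user=$user pass=$pass"
# Leftover fields were assigned, delimiters included, to the
# last name given (rest).
echo "rest=$rest"
```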

      ** readonly [ name [ = value ] ] ...

      The given name s are marked readonly and these names cannot be changed by subsequent assignment.

      * return [ n ]

      Causes a shell function or '.' script to return to the invoking script with the return status specified by n . The value will be the least significant 8 bits of the specified status. If n is omitted then the return status is that of the last command executed. If return is invoked while not in a function or a '.' script, then it is the same as an exit .

      set [ +-abCefhkmnopstuvx ] [ +-o option ]... [ +-A name ] [ arg ... ]

      The flags for this command have meaning as follows:

      -A

      Array assignment. Unset the variable name and assign values sequentially from the list arg . If +A is used, the variable name is not unset first.

      -a

      All subsequent variables that are defined are automatically exported.

      -b

      Causes the shell to notify the user asynchronously of background job completions. The following message will be written to standard error:


      "[%d]%c %s%s\ ", < job-number >, < current >, < status >, < job-name >


      where the fields are as follows:

      <current>

      The character + identifies the job that would be used as a default for the fg or bg utilities; this job can also be specified using the job_id %+ or %% . The character - identifies the job that would become the default if the current default job were to exit; this job can also be specified using the job_id %- . For other jobs, this field is a space character. At most one job can be identified with + and at most one job can be identified with - . If there is any suspended job, then the current job will be a suspended job. If there are at least two suspended jobs, then the previous job will also be a suspended job.

      <job-number>

      A number that can be used to identify the process group to the wait , fg , bg , and kill utilities. Using these utilities, the job can be identified by prefixing the job number with % .

      <status>

      Unspecified.

      <job-name>

      Unspecified.

      When the shell notifies the user a job has been completed, it may remove the job's process ID from the list of those known in the current shell execution environment. Asynchronous notification will not be enabled by default.

      -C

      Prevent existing files from being overwritten by the shell's > redirection operator; the >| redirection operator will override this noclobber option for an individual file.

      -e

      If a command has a non-zero exit status, execute the ERR trap, if set, and exit. This mode is disabled while reading profiles.

      -f

      Disables file name generation.

      -h

      Each command becomes a tracked alias when first encountered.

      -k

      All variable assignment arguments are placed in the environment for a command, not just those that precede the command name.

      -m

      Background jobs will run in a separate process group and a line will print upon completion. The exit status of background jobs is reported in a completion message. On systems with job control, this flag is turned on automatically for interactive shells.

      -n

      Read commands and check them for syntax errors, but do not execute them. Ignored for interactive shells.

      -o

      The following argument can be one of the following option names:

      allexport

      Same as -a .

      errexit

      Same as -e .

      bgnice

      All background jobs are run at a lower priority. This is the default mode.

      emacs

      Puts you in an emacs style in-line editor for command entry.

      gmacs

      Puts you in a gmacs style in-line editor for command entry.

      ignoreeof

      The shell will not exit on EOF . The command exit must be used.

      keyword

      Same as -k .

      markdirs

      All directory names resulting from file name generation have a trailing / appended.

      monitor

      Same as -m .

      noclobber

      Prevents redirection > from truncating existing files. Requires >| to truncate a file when turned on. Equivalent to -C .

      noexec

      Same as -n .

      noglob

      Same as -f .

      nolog

      Do not save function definitions in history file.

      notify

      Equivalent to -b .

      nounset

      Same as -u .

      privileged

      Same as -p .

      verbose

      Same as -v .

      trackall

      Same as -h .

      vi

      Puts you in insert mode of a vi style in-line editor until you hit escape character 033 . This puts you in control mode. A return sends the line.

      viraw

      Each character is processed as it is typed in vi mode.

      xtrace

      Same as -x .

      If no option name is supplied, the current option settings are printed.

      -p

      Disables processing of the $HOME/.profile file and uses the file /etc/suid_profile instead of the ENV file. This mode is on whenever the effective uid is not equal to the real uid, or when the effective gid is not equal to the real gid. Turning this off causes the effective uid and gid to be set to the real uid and gid.

      -s

      Sort the positional parameters lexicographically.

      -t

      Exit after reading and executing one command.

      -u

      Treat unset parameters as an error when substituting.

      -v

      Print shell input lines as they are read.

      -x

      Print commands and their arguments as they are executed.

      -

      Turns off -x and -v flags and stops examining arguments for flags.

      --

      Do not change any of the flags; useful in setting $1 to a value beginning with - . If no arguments follow this flag then the positional parameters are unset.

      Using + rather than - causes these flags to be turned off. These flags can also be used upon invocation of the shell. The current set of flags may be found in $- . Unless -A is specified, the remaining arguments are positional parameters and are assigned, in order, to $1 $2 .... If no arguments are given, the names and values of all variables are printed on the standard output.
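      Two of the behaviors described above, replacing the positional parameters with set -- and the -C ( noclobber ) option, can be sketched as follows. The file name is illustrative:

```shell
cd "$(mktemp -d)"

# set -- replaces the positional parameters without changing flags.
set -- alpha beta gamma     # $1=alpha $2=beta $3=gamma
echo "count=$# first=$1"

# With -C on, > refuses to overwrite an existing file;
# >| overrides the option for an individual file.
set -C
echo first > data.txt
if ! echo second > data.txt 2>/dev/null; then
    echo "blocked by noclobber"
fi
echo third >| data.txt
cat data.txt
set +C
```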

      * shift [ n ]

      The positional parameters from $ n +1 ... are renamed $1 ... ; the default n is 1. The parameter n can be any arithmetic expression that evaluates to a non-negative number less than or equal to $# .
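      A minimal sketch of the renaming described above:

```shell
# Discard the first two positional parameters; the remaining
# ones are renamed starting at $1.
set -- a b c d e
shift 2
echo "$1 $#"    # $1 is now c and three parameters remain
```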

      stop % jobid ...

      stop pid ...

      stop stops the execution of one or more background jobs by jobid , or of any process by pid (see ps(1) ).

      suspend

      Stops the execution of the current shell (but not if it is the login shell).

      test expression

      Evaluate conditional expressions. See Conditional Expressions section above and test(1) for usage and description.

      * times

      Print the accumulated user and system times for the shell and for processes run from the shell.

      * trap [ arg sig ... ]

      arg is a command to be read and executed when the shell receives signal(s) sig . arg is scanned once when the trap is set and once when the trap is taken. sig can be specified as a signal number or signal name. trap commands are executed in order of signal number. Any attempt to set a trap on a signal number that was ignored on entry to the current shell is ineffective.

      If arg is - , the shell will reset each sig to the default value. If arg is null ( '' ), the shell will ignore each specified sig if it arises. Otherwise, arg will be read and executed by the shell when one of the corresponding sigs arises. The action of the trap will override a previous action (either default action or one explicitly set). The value of $? after the trap action completes will be the value it had before the trap was invoked.

      sig can be EXIT, 0 (equivalent to EXIT) or a signal specified using a symbolic name, without the SIG prefix, for example, HUP , INT , QUIT , TERM . If sig is 0 or EXIT and the trap statement is executed inside the body of a function, then the command arg is executed after the function completes. If sig is 0 or EXIT for a trap set outside any function, the command arg is executed on exit from the shell. If sig is ERR , arg will be executed whenever a command has a non-zero exit status. If sig is DEBUG , arg will be executed after each command.

      The environment in which the shell executes a trap on EXIT will be identical to the environment immediately after the last command executed before the trap on EXIT was taken.

      Each time the trap is invoked, arg will be processed in a manner equivalent to:


      eval "$arg"


      Signals that were ignored on entry to a non-interactive shell cannot be trapped or reset, although no error need be reported when attempting to do so. An interactive shell may reset or catch signals ignored on entry. Traps will remain in place for a given shell until explicitly changed with another trap command.

      When a subshell is entered, traps are set to the default actions. This does not imply that the trap command cannot be used within the subshell to set new traps.

      The trap command with no arguments will write to standard output a list of commands associated with each sig. The format is:


      trap -- %s %s ... <arg> , <sig> ...


      The shell will format the output, including the proper use of quoting, so that it is suitable for reinput to the shell as commands that achieve the same trapping results. For example:


      save_traps=$(trap)
      ...
      eval "$save_traps"
      

      If the trap name or number is invalid, a non-zero exit status will be returned; otherwise, 0 will be returned. For both interactive and non-interactive shells, invalid signal names or numbers will not be considered a syntax error and will not cause the shell to abort.

      Traps are not processed while a job is waiting for a foreground process. Thus, a trap on CHLD won't be executed until the foreground job terminates.
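      A common use of a trap on EXIT , cleaning up a temporary file however the shell exits, can be sketched as follows. The file name is illustrative:

```shell
tmp=$(mktemp)
(
    # The EXIT trap runs when this subshell exits, whether it
    # exits normally or because of an error.
    trap 'rm -f "$tmp"' EXIT
    echo "working with $tmp" > /dev/null
)
# The file has been removed by the trap.
ls "$tmp" 2>/dev/null || echo "cleaned up"
```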

      type name ...

      For each name , indicate how it would be interpreted if used as a command name.

      ** typeset [ +-HLRZfilrtux [ n ] ] [ name [ = value ] ] ...

      Sets attributes and values for shell variables and functions. When typeset is invoked inside a function, a new instance of the variables name is created. The variables value and type are restored when the function completes. The following list of attributes may be specified:

      -H

      This flag provides UNIX to host-name file mapping on non-UNIX machines.

      -L

      Left justify and remove leading blanks from value . If n is non-zero it defines the width of the field; otherwise, it is determined by the width of the value of first assignment. When the variable is assigned to, it is filled on the right with blanks or truncated, if necessary, to fit into the field. Leading zeros are removed if the -Z flag is also set. The -R flag is turned off.

      -R

      Right justify and fill with leading blanks. If n is non-zero it defines the width of the field, otherwise it is determined by the width of the value of first assignment. The field is left filled with blanks or truncated from the end if the variable is reassigned. The -L flag is turned off.

      -Z

      Right justify and fill with leading zeros if the first non-blank character is a digit and the -L flag has not been set. If n is non-zero it defines the width of the field; otherwise, it is determined by the width of the value of first assignment.

      -f

      The names refer to function names rather than variable names. No assignments can be made and the only other valid flags are -t , -u , and -x . The flag -t turns on execution tracing for this function. The flag -u causes this function to be marked undefined. The FPATH variable will be searched to find the function definition when the function is referenced. The flag -x allows the function definition to remain in effect across shell procedures invoked by name.

      -i

      Parameter is an integer. This makes arithmetic faster. If n is non-zero it defines the output arithmetic base; otherwise, the first assignment determines the output base.

      -l

      All upper-case characters are converted to lower-case. The upper-case flag, -u is turned off.

      -r

      The given name s are marked readonly and these names cannot be changed by subsequent assignment.

      -t

      Tags the variables. Tags are user definable and have no special meaning to the shell.

      -u

      All lower-case characters are converted to upper-case characters. The lower-case flag, -l is turned off.

      -x

      The given name s are marked for automatic export to the environment of subsequently-executed commands.

      The -i attribute cannot be specified along with -R , -L , -Z , or -f .

      Using + rather than - causes these flags to be turned off. If no name arguments are given but flags are specified, a list of names (and optionally the values ) of the variables which have these flags set is printed. (Using + rather than - keeps the values from being printed.) If no name s and flags are given, the names and attributes of all variables are printed.

      ulimit [ -HSacdfnstv ] [ limit ]

      Set or display a resource limit. The available resource limits are listed below. Many systems do not contain one or more of these limits. The limit for a specified resource is set when limit is specified. The value of limit can be a number in the unit specified below with each resource, or the value unlimited . The H and S flags specify whether the hard limit or the soft limit for the given resource is set. A hard limit cannot be increased once it is set. A soft limit can be increased up to the value of the hard limit. If neither the H nor the S option is specified, the limit applies to both. The current resource limit is printed when limit is omitted. In this case the soft limit is printed unless H is specified. When more than one resource is specified, then the limit name and unit is printed before the value.

      -a

      Lists all of the current resource limits.

      -c

      The number of 512-byte blocks on the size of core dumps.

      -d

      The number of K-bytes on the size of the data area.

      -f

      The number of 512-byte blocks on files written by child processes (files of any size may be read).

      -n

      The number of file descriptors plus 1.

      -s

      The number of K-bytes on the size of the stack area.

      -t

      The number of seconds to be used by each process.

      -v

      The number of K-bytes for virtual memory.

      If no option is given, -f is assumed.
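      The hard/soft distinction can be sketched as follows. The limit value 64 is arbitrary, and the change is made in a subshell so that it does not affect the invoking shell:

```shell
# Display the soft and hard limits on open file descriptors.
ulimit -S -n
ulimit -H -n

# Lower the soft limit in a subshell; the soft limit can be
# raised again only up to the hard limit.
( ulimit -S -n 64; ulimit -S -n )
```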

      umask [ -S ] [ mask ]

      The user file-creation mask is set to mask (see umask(2) ). mask can either be an octal number or a symbolic value as described in chmod(1) . If a symbolic value is given, the new umask value is the complement of the result of applying mask to the complement of the previous umask value. If mask is omitted, the current value of the mask is printed. The -S flag produces symbolic output.
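      As a sketch of the octal form: with a mask of 027, a newly created file gets mode 640 ( rw-r----- ), since the group-write bit and all "other" bits are masked off. The file name is illustrative:

```shell
cd "$(mktemp -d)"
umask 027
touch report.txt     # created with mode 666 & ~027 = 640
ls -l report.txt
```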

      unalias name ...

      The alias es given by the list of name s are removed from the alias list.

      unset [ -f ] name ...

      The variables given by the list of name s are unassigned, that is, their values and attributes are erased. readonly variables cannot be unset. If the -f flag is set, then the names refer to function names. Unsetting ERRNO , LINENO , MAILCHECK , OPTARG , OPTIND , RANDOM , SECONDS , TMOUT , and _ removes their special meaning even if they are subsequently assigned to.

      * wait [ job ]

      Wait for the specified job and report its termination status. If job is not given then all currently active child processes are waited for. The exit status from this command is that of the process waited for. See Jobs for a description of the format of job .

      whence [ -pv ] name ...

      For each name , indicate how it would be interpreted if used as a command name.

      The -v flag produces a more verbose report.

      The -p flag does a path search for name even if name is an alias, a function, or a reserved word.

    Invocation

      If the shell is invoked by exec(2) , and the first character of argument zero ( $0 ) is - , then the shell is assumed to be a login shell and commands are read from /etc/profile and then from either .profile in the current directory or $HOME/.profile , if either file exists. Next, commands are read from the file named by performing parameter substitution on the value of the environment variable ENV if the file exists. If the -s flag is not present and arg is, then a path search is performed on the first arg to determine the name of the script to execute. The script arg must have read permission and any setuid and setgid settings will be ignored. If the script is not found on the path, arg is processed as if it named a builtin command or function. Commands are then read as described below; the following flags are interpreted by the shell when it is invoked:

      -c

      Read commands from the command_string operand. Set the value of special parameter 0 from the value of the command_name operand and the positional parameters ( $1 , $2 , and so on) in sequence from the remaining arg operands. No commands will be read from the standard input.

      -s

      If the -s flag is present or if no arguments remain, commands are read from the standard input. Shell output, except for the output of the Special Commands listed above, is written to file descriptor 2.

      -i

      If the -i flag is present or if the shell input and output are attached to a terminal (as told by ioctl(2) ), then this shell is interactive . In this case, TERM is ignored (so that kill 0 does not kill an interactive shell) and INTR is caught and ignored (so that wait is interruptible). In all cases, QUIT is ignored by the shell.

      -r

      If the -r flag is present the shell is a restricted shell.

      The remaining flags and arguments are described under the set command above.

    rksh Only

      rksh is used to set up login names and execution environments whose capabilities are more controlled than those of the standard shell. The actions of rksh are identical to those of ksh , except that the following are disallowed:

      • changing directory (see cd(1) )

      • setting the value of SHELL , ENV , or PATH

      • specifying path or command names containing /

      • redirecting output ( > , >| , <> , and >> )

      • changing group (see newgrp(1) ).

      The restrictions above are enforced after .profile and the ENV files are interpreted.

      When a command to be executed is found to be a shell procedure, rksh invokes ksh to execute it. Thus, it is possible to provide to the end-user shell procedures that have access to the full power of the standard shell, while imposing a limited menu of commands; this scheme assumes that the end-user does not have write and execute permissions in the same directory.

      The net effect of these rules is that the writer of the .profile has complete control over user actions, by performing guaranteed setup actions and leaving the user in an appropriate directory (probably not the login directory).

      The system administrator often sets up a directory of commands (that is, /usr/rbin ) that can be safely invoked by rksh .

ERRORS

    Errors detected by the shell, such as syntax errors, cause the shell to return a non-zero exit status. Otherwise, the shell returns the exit status of the last command executed (see also the exit command above). If the shell is being used non-interactively then execution of the shell file is abandoned. Run time errors detected by the shell are reported by printing the command or function name and the error condition. If the line number that the error occurred on is greater than one, then the line number is also printed in square brackets ( [] ) after the command or function name.

    For a non-interactive shell, an error condition encountered by a special built-in or other type of utility will cause the shell to write a diagnostic message to standard error and exit as shown in the following table:

    Error Special Built-in Other Utilities
    Shell language syntax error will exit will exit
    Utility syntax error (option or operand error) will exit will not exit
    Redirection error will exit will not exit
    Variable assignment error will exit will not exit
    Expansion error will exit will exit
    Command not found n/a may exit
    Dot script not found will exit n/a

    An expansion error is one that occurs when the shell expansions are carried out (for example, ${x!y} , because ! is not a valid operator); an implementation may treat these as syntax errors if it is able to detect them during tokenization, rather than during expansion.

    If any of the errors shown as "will (may) exit" occur in a subshell, the subshell will (may) exit with a non-zero status, but the script containing the subshell will not exit because of the error.

    In all of the cases shown in the table, an interactive shell will write a diagnostic message to standard error without exiting.

USAGE

    See largefile(5) for the description of the behavior of ksh and rksh when encountering files greater than or equal to 2 Gbyte (2^31 bytes).

EXIT STATUS

    Each command has an exit status that can influence the behavior of other shell commands. The exit status of commands that are not utilities is documented in this section. The exit status of the standard utilities is documented in their respective sections.

    If a command is not found, the exit status will be 127. If the command name is found, but it is not an executable utility, the exit status will be 126. Applications that invoke utilities without using the shell should use these exit status values to report similar errors.

    If a command fails during word expansion or redirection, its exit status will be greater than zero.

    When reporting the exit status with the special parameter ? , the shell will report the full eight bits of exit status available. The exit status of a command that terminated because it received a signal will be reported as greater than 128 .
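    A quick sketch of these conventions (any POSIX shell; the unknown command name and temporary file are hypothetical):

```shell
# 127: command not found anywhere on the search path
this_command_does_not_exist_zz 2>/dev/null && s127=0 || s127=$?

# 126: command found but not executable
noexec=$(mktemp)                 # regular file, mode 600, not executable
printf 'echo hi\n' > "$noexec"
"$noexec" 2>/dev/null && s126=0 || s126=$?

# >128: terminated by a signal, reported as 128 + signal number
sleep 30 &
kill -TERM $!                    # SIGTERM is signal 15, so 128 + 15 = 143
wait $! 2>/dev/null && ssig=0 || ssig=$?
rm -f "$noexec"
```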

FILES

    /etc/profile

    /etc/suid_profile

    $HOME/.profile

    /tmp/sh*

    /dev/null

ATTRIBUTES

    See attributes(5) for descriptions of the following attributes:

    /usr/bin/ksh
    /usr/bin/rksh

      ATTRIBUTE TYPE    ATTRIBUTE VALUE
      Availability      SUNWcsu
      CSI               Enabled

    /usr/xpg4/bin/ksh

      ATTRIBUTE TYPE    ATTRIBUTE VALUE
      Availability      SUNWxcu4
      CSI               Enabled

SEE ALSO

WARNINGS

    The use of setuid shell scripts is strongly discouraged.

NOTES

    If a command which is a tracked alias is executed, and then a command with the same name is installed in a directory in the search path before the directory where the original command was found, the shell will continue to exec the original command. Use the -t option of the alias command to correct this situation.

    Some very old shell scripts contain a ^ as a synonym for the pipe character | .

    Using the fc built-in command within a compound command will cause the whole command to disappear from the history file.

    The built-in command . file reads the whole file before any commands are executed. Therefore, alias and unalias commands in the file will not apply to any functions defined in the file.

    When the shell executes a shell script that attempts to execute a non-existent command interpreter, the shell returns an erroneous diagnostic message that the shell script file does not exist.


2010-09-05 00:35:46

Name

    zoneadmd – zone administration daemon

Synopsis

    /usr/lib/zones/zoneadmd 
    

Description

    zoneadmd is a system daemon that is started when the system needs to manage a particular zone. Because each instance of the zoneadmd daemon manages a particular zone, it is normal to see multiple zoneadmd daemons running.

    This daemon is started automatically by the zone management software and should not be invoked directly. The daemon shuts down automatically when no longer in use. It does not constitute a programming interface, but is classified as a private interface.

Attributes

    See attributes(5) for descriptions of the following attributes:

    ATTRIBUTE TYPE        ATTRIBUTE VALUE
    Availability          SUNWzoneu
    Interface Stability   Private

See Also

Notes

    The zones(5) service is managed by the service management facility, smf(5), under the service identifier:


    svc:/system/zones:default

    Administrative actions on this service, such as enabling, disabling, or requesting restart, can be performed using svcadm(1M). The service's status can be queried using the svcs(1) command.
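    For instance (a hedged sketch; see svcs(1) and svcadm(1M) for the authoritative usage), the service could be inspected and restarted with:

```
# svcs -l svc:/system/zones:default
# svcadm restart svc:/system/zones:default
```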


2010-09-05 00:34:26

Name

    zonecfg – set up zone configuration

Synopsis

    zonecfg -z zonename
    
    zonecfg -z zonename subcommand
    
    zonecfg -z zonename -f command_file
    
    zonecfg help

Description

    The zonecfg utility creates and modifies the configuration of a zone. Zone configuration consists of a number of resources and properties.

    To simplify the user interface, zonecfg uses the concept of a scope. The default scope is global.

    The following synopsis of the zonecfg command is for interactive usage:


    zonecfg -z zonename subcommand
    

    Parameters changed through zonecfg do not affect a running zone. The zone must be rebooted for the changes to take effect.

    In addition to creating and modifying a zone, the zonecfg utility can also be used to persistently specify the resource management settings for the global zone.

    In the following text, “rctl” is used as an abbreviation for “resource control”. See resource_controls(5).

    Types of Non-Global Zones

      In the administration of zones, it is useful to distinguish between the global zone and non-global zones. Within non-global zones, there are two types of zone root file system models: sparse and whole root. The sparse root zone model optimizes the sharing of objects. The whole root zone model provides the maximum configurability.

      Sparse Root Zones

        Non-global zones that have inherit-pkg-dir resources (described under “Resources”, below) are called sparse root zones.

        The sparse root zone model optimizes the sharing of objects in the following ways:

        • Only a subset of the packages installed in the global zone are installed directly into the non-global zone.

        • Read-only loopback file systems, identified as inherit-pkg-dir resources, are used to gain access to other files.

        In this model, all packages appear to be installed in the non-global zone. Packages that do not deliver content into read-only loopback mount file systems are fully installed. There is no need to install content delivered into read-only loopback mounted file systems since that content is inherited (and visible) from the global zone.

        • As a general guideline, a sparse root zone requires about 100 megabytes of free disk space when the global zone has been installed with all of the standard Solaris packages.

        • By default, any additional packages installed in the global zone also populate the non-global zones. The amount of disk space required might be increased accordingly, depending on whether the additional packages deliver files that reside in the inherit-pkg-dir resource space.

        An additional 40 megabytes of RAM per zone are suggested, but not required on a machine with sufficient swap space.

        A sparse zone inherits the following directories:


        /lib
        /platform
        /sbin
        /usr

        Although zonecfg allows you to remove one of these as an inherited directory, you should not do so. You should either follow the whole-root model or the sparse model; a subset of the sparse model is not tested and you might encounter unexpected problems.

        Adding an additional inherit-pkg-dir directory, such as /opt, to a sparse root zone is acceptable.
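        For instance (a hedged sketch; the zone name is hypothetical), /opt can be added as an extra inherited directory:

```
zonecfg:my-zone> add inherit-pkg-dir
zonecfg:my-zone:inherit-pkg-dir> set dir=/opt
zonecfg:my-zone:inherit-pkg-dir> end
```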

      Whole Root Zones

        The whole root zone model provides the maximum configurability. All of the required and any selected optional Solaris packages are installed into the private file systems of the zone. The advantages of this model include the capability for global administrators to customize their zone's file system layout. This would be done, for example, to add arbitrary unbundled or third-party packages.

        The disk requirements for this model are determined by the disk space used by the packages currently installed in the global zone.


        Note –

        If you create a sparse root zone that contains the following inherit-pkg-dir directories, you must remove these directories from the non-global zone's configuration before the zone is installed to have a whole root zone:

        • /lib

        • /platform

        • /sbin

        • /usr


    Resources

      The following resource types are supported:

      attr

      Generic attribute.

      capped-cpu

      Limits for CPU usage.

      capped-memory

      Limits for physical, swap, and locked memory.

      dataset

      ZFS dataset.

      dedicated-cpu

      Subset of the system's processors dedicated to this zone while it is running.

      device

      Device.

      fs

      File system.

      inherit-pkg-dir

      Directory inherited from the global zone. Software packages whose contents have been transferred into that directory are inherited in read-only mode by the non-global zone and the non-global zone's packaging database is updated to reflect those packages. Such resources are not modifiable or removable once a zone has been installed with zoneadm.

      net

      Network interface.

      rctl

      Resource control.

    Properties

      Each resource type has one or more properties. There are also some global properties, that is, properties of the configuration as a whole, rather than of some particular resource.

      The following properties are supported:

      (global)          zonename
      (global)          zonepath
      (global)          autoboot
      (global)          bootargs
      (global)          pool
      (global)          limitpriv
      (global)          brand
      (global)          ip-type
      (global)          cpu-shares
      (global)          max-lwps
      (global)          max-msg-ids
      (global)          max-sem-ids
      (global)          max-shm-ids
      (global)          max-shm-memory
      (global)          scheduling-class
      fs                dir, special, raw, type, options
      inherit-pkg-dir   dir
      net               address, physical, defrouter
      device            match
      rctl              name, value
      attr              name, type, value
      dataset           name
      dedicated-cpu     ncpus, importance
      capped-memory     physical, swap, locked
      capped-cpu        ncpus

      The property values paired with these names are either simple, complex, or lists. The type allowed is property-specific. Simple values are strings, optionally enclosed within quotation marks. Complex values have the syntax:


      (<name>=<value>,<name>=<value>,...)

      where each <value> is simple, and the <name> strings are unique within a given property. Lists have the syntax:


      [<value>,...]

      where each <value> is either simple or complex. A list of a single value (either simple or complex) is equivalent to specifying that value without the list syntax. That is, “foo” is equivalent to “[foo]”. A list can be empty (denoted by “[]”).
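      As an illustration of a list of complex values (a hedged sketch; the zone name and limit are hypothetical), the “value” property of an rctl resource is a complex (name=value,...) triple:

```
zonecfg:my-zone> add rctl
zonecfg:my-zone:rctl> set name=zone.max-lwps
zonecfg:my-zone:rctl> add value (priv=privileged,limit=1000,action=deny)
zonecfg:my-zone:rctl> end
```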

      In interpreting property values, zonecfg accepts regular expressions as specified in fnmatch(5). See EXAMPLES.

      The property types are described as follows:

      global: zonename

      The name of the zone.

      global: zonepath

      Path to zone's file system.

      global: autoboot

      Boolean indicating that a zone should be booted automatically at system boot. Note that if the zones service is disabled, the zone will not autoboot, regardless of the setting of this property. You enable the zones service with a svcadm command, such as:


      # svcadm enable svc:/system/zones:default
      

      Replace enable with disable to disable the zones service. See svcadm(1M).

      global: bootargs

      Arguments (options) to be passed to the zone bootup, unless options are supplied to the “zoneadm boot” command, in which case those take precedence. The valid arguments are described in zoneadm(1M).

      global: pool

      Name of the resource pool that this zone must be bound to when booted. This property is incompatible with the dedicated-cpu resource.

      global: limitpriv

      The maximum set of privileges any process in this zone can obtain. The property should consist of a comma-separated privilege set specification as described in priv_str_to_set(3C). Privileges can be excluded from the resulting set by preceding their names with a dash (-) or an exclamation point (!). The special privilege string “zone” is not supported in this context. If the special string “default” occurs as the first token in the property, it expands into a safe set of privileges that preserve the resource and security isolation described in zones(5). A missing or empty property is equivalent to this same set of safe privileges.

      The system administrator must take extreme care when configuring privileges for a zone. Some privileges cannot be excluded through this mechanism as they are required in order to boot a zone. In addition, there are certain privileges which cannot be given to a zone as doing so would allow processes inside a zone to unduly affect processes in other zones. zoneadm(1M) indicates when an invalid privilege has been added or removed from a zone's privilege set when an attempt is made to either “boot” or “ready” the zone.

      See privileges(5) for a description of privileges. The command “ppriv -l” (see ppriv(1)) produces a list of all Solaris privileges. You can specify privileges as they are displayed by ppriv. In privileges(5), privileges are listed in the form PRIV_privilege_name. For example, the privilege sys_time, as you would specify it in this property, is listed in privileges(5) as PRIV_SYS_TIME.
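      For example (a hedged sketch; the zone name is hypothetical), granting the safe default set plus sys_time while excluding proc_info could look like:

```
zonecfg:my-zone> set limitpriv="default,sys_time,!proc_info"
```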

      global: brand

      The zone's brand type. A zone that is not assigned a brand is considered a “native” zone.

      global: ip-type

      A zone can either share the IP instance with the global zone, which is the default, or have its own exclusive instance of IP.

      This property takes the values shared and exclusive.

      fs: dir, special, raw, type, options

      Values needed to determine how, where, and so forth to mount file systems. See mount(1M), mount(2), fsck(1M), and vfstab(4).

      inherit-pkg-dir: dir

      The directory path.

      net: address, physical, defrouter

      The network address and physical interface name of the network interface. The network address is one of:

      • a valid IPv4 address, optionally followed by “/” and a prefix length;

      • a valid IPv6 address, which must be followed by “/” and a prefix length;

      • a host name which resolves to an IPv4 address.

      Note that host names that resolve to IPv6 addresses are not supported.

      The physical interface name is the network interface name.

      The default router is specified similarly to the network address except that it must not be followed by a / (slash) and a network prefix length.

      A zone can be configured to be either exclusive-IP or shared-IP. For a shared-IP zone, you must set both the physical and address properties; setting the default router is optional. The interface specified in the physical property must be plumbed in the global zone prior to booting the non-global zone. However, if the interface is not used by the global zone, it should be configured down in the global zone, and the default router for the interface should be specified here.

      For an exclusive-IP zone, the physical property must be set and the address and default router properties cannot be set.

      device: match

      Device name to match.

      rctl: name, value

      The name and priv/limit/action triple of a resource control. See prctl(1) and rctladm(1M). The preferred way to set rctl values is to use the global property name associated with a specific rctl.

      attr: name, type, value

      The name, type and value of a generic attribute. The type must be one of int, uint, boolean, or string, and the value must be of that type. uint means unsigned, that is, a non-negative integer.

      dataset: name

      The name of a ZFS dataset to be accessed from within the zone. See zfs(1M).

      global: cpu-shares

      The number of Fair Share Scheduler (FSS) shares to allocate to this zone. This property is incompatible with the dedicated-cpu resource. This property is the preferred way to set the zone.cpu-shares rctl.

      global: max-lwps

      The maximum number of LWPs simultaneously available to this zone. This property is the preferred way to set the zone.max-lwps rctl.

      global: max-msg-ids

      The maximum number of message queue IDs allowed for this zone. This property is the preferred way to set the zone.max-msg-ids rctl.

      global: max-sem-ids

      The maximum number of semaphore IDs allowed for this zone. This property is the preferred way to set the zone.max-sem-ids rctl.

      global: max-shm-ids

      The maximum number of shared memory IDs allowed for this zone. This property is the preferred way to set the zone.max-shm-ids rctl.

      global: max-shm-memory

      The maximum amount of shared memory allowed for this zone. This property is the preferred way to set the zone.max-shm-memory rctl. A scale (K, M, G, T) can be applied to the value for this number (for example, 1M is one megabyte).

      global: scheduling-class

      Specifies the scheduling class used for processes running in a zone. When this property is not specified, the scheduling class is established as follows:

      • If the cpu-shares property or equivalent rctl is set, the scheduling class FSS is used.

      • If neither cpu-shares nor the equivalent rctl is set and the zone's pool property references a pool that has a default scheduling class, that class is used.

      • Under any other conditions, the system default scheduling class is used.

      dedicated-cpu: ncpus, importance

      The number of CPUs that should be assigned for this zone's exclusive use. The zone will create a pool and processor set when it boots. See pooladm(1M) and poolcfg(1M) for more information on resource pools. The ncpus property can specify a single value or a range (for example, 1-4) of processors. The importance property is optional; if set, it specifies the pset.importance value for use by poold(1M). If this resource is used, there must be enough free processors to allocate to this zone when it boots or the zone will not boot. The processors assigned to this zone will not be available for the use of the global zone or other zones. This resource is incompatible with both the pool and cpu-shares properties. Only a single instance of this resource can be added to the zone.

      capped-memory: physical, swap, locked

      The caps on the memory that can be used by this zone. A scale (K, M, G, T) can be applied to the value for each of these numbers (for example, 1M is one megabyte). Each of these properties is optional but at least one property must be set when adding this resource. Only a single instance of this resource can be added to the zone. The physical property sets the max-rss for this zone. This will be enforced by rcapd(1M) running in the global zone. The swap property is the preferred way to set the zone.max-swap rctl. The locked property is the preferred way to set the zone.max-locked-memory rctl.

      capped-cpu: ncpus

      Sets a limit on the amount of CPU time that can be used by a zone. The unit used translates to the percentage of a single CPU that can be used by all user threads in a zone, expressed as a fraction (for example, .75) or a mixed number (whole number and fraction, for example, 1.25). An ncpus value of 1 means 100% of a CPU, a value of 1.25 means 125%, .75 means 75%, and so forth. When projects within a capped zone have their own caps, the minimum value takes precedence.

      The capped-cpu resource is an alias for the zone.cpu-cap resource control. See resource_controls(5).
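      A hedged sketch (the zone name is hypothetical) capping a zone at one and a half CPUs:

```
zonecfg:my-zone> add capped-cpu
zonecfg:my-zone:capped-cpu> set ncpus=1.5
zonecfg:my-zone:capped-cpu> end
```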

      The following table summarizes resources, property-names, and types:


      resource          property-name    type
      (global)          zonename         simple
      (global)          zonepath         simple
      (global)          autoboot         simple
      (global)          bootargs         simple
      (global)          pool             simple
      (global)          limitpriv        simple
      (global)          brand            simple
      (global)          ip-type          simple
      (global)          cpu-shares       simple
      (global)          max-lwps         simple
      (global)          max-msg-ids      simple
      (global)          max-sem-ids      simple
      (global)          max-shm-ids      simple
      (global)          max-shm-memory   simple
      (global)          scheduling-class simple
      fs                dir              simple
                        special          simple
                        raw              simple
                        type             simple
                        options          list of simple
      inherit-pkg-dir   dir              simple
      net               address          simple
                        physical         simple
                        defrouter        simple
      device            match            simple
      rctl              name             simple
                        value            list of complex
      attr              name             simple
                        type             simple
                        value            simple
      dataset           name             simple
      dedicated-cpu     ncpus            simple or range
                        importance       simple
      capped-memory     physical         simple with scale
                        swap             simple with scale
                        locked           simple with scale
      capped-cpu        ncpus            simple

      The complex property “value” of the “rctl” resource type consists of three name/value pairs, the names being “priv”, “limit”, and “action”, each of which takes a simple value. The “name” property of an “attr” resource is syntactically restricted in a fashion similar but not identical to zone names: it must begin with an alphanumeric character, and can contain alphanumeric characters plus the hyphen (-), underscore (_), and dot (.) characters. Attribute names beginning with “zone” are reserved for use by the system. Finally, the “autoboot” global property must have a value of “true” or “false”.

    Using Kernel Statistics to Monitor CPU Caps

      The system maintains information for all capped projects and zones in the caps kernel statistics module. You can access this information by reading kernel statistics (kstat(3KSTAT)), specifying caps as the kstat module name. The following command displays kernel statistics for all active CPU caps:


      # kstat caps::'/cpucaps/'
      

      A kstat(1M) command running in a zone displays only CPU caps relevant for that zone and for projects in that zone. See EXAMPLES.

      The following are cap-related arguments for use with kstat(1M):

      caps

      The kstat module.

      project_caps or zone_caps

      kstat class, for use with the kstat -c option.

      cpucaps_project_id or cpucaps_zone_id

      kstat name, for use with the kstat -n option. id is the project or zone identifier.

      The following fields are displayed in response to a kstat(1M) command requesting statistics for all CPU caps.

      module

      In this usage of kstat, this field will have the value caps.

      name

      As described above, cpucaps_project_id or cpucaps_zone_id.

      above_sec

      Total time, in seconds, spent above the cap.

      below_sec

      Total time, in seconds, spent below the cap.

      maxusage

      Maximum observed CPU usage.

      nwait

      Number of threads on cap wait queue.

      usage

      Current aggregated CPU usage for all threads belonging to a capped project or zone, in terms of a percentage of a single CPU.

      value

      The cap value, in terms of a percentage of a single CPU.

      zonename

      Name of the zone for which statistics are displayed.

      See EXAMPLES for sample output from a kstat command.
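      Putting the arguments above together (a hedged sketch; zone ID 1 is hypothetical), typical invocations might be:

```
# kstat -m caps -c zone_caps        (all zone CPU caps, selected by class)
# kstat -m caps -n cpucaps_zone_1   (the cap for zone ID 1, selected by name)
```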

Options

    The following options are supported:

    -f command_file

    Specify the name of a zonecfg command file. command_file is a text file of zonecfg subcommands, one per line.

    -z zonename

    Specify the name of a zone. Zone names are case sensitive. Zone names must begin with an alphanumeric character and can contain alphanumeric characters, the underscore (_), the hyphen (-), and the dot (.). The name global and all names beginning with SUNW are reserved and cannot be used.

SUBCOMMANDS

    You can use the add and select subcommands to select a specific resource, at which point the scope changes to that resource. The end and cancel subcommands are used to complete the resource specification, at which time the scope is reverted back to global. Certain subcommands, such as add, remove and set, have different semantics in each scope.

    Subcommands that can result in destructive actions or loss of work have a -F option to force the action. If input is from a terminal device and such a command is given without the -F option, the user is prompted when appropriate. Otherwise, if such a command is given without the -F option, the action is disallowed, with a diagnostic message written to standard error.

    The following subcommands are supported:

    add resource-type (global scope)
    add property-name property-value (resource scope)

    In the global scope, begin the specification for a given resource type. The scope is changed to that resource type.

    In the resource scope, add a property of the given name with the given value. The syntax for property values varies with different property types. In general, it is a simple value or a list of simple values enclosed in square brackets, separated by commas ([foo,bar,baz]). See PROPERTIES.

    cancel

    End the resource specification and reset scope to global. Abandons any partially specified resources. cancel is only applicable in the resource scope.

    clear property-name

    Clear the value for the property.

    commit

    Commit the current configuration from memory to stable storage. The configuration must be committed to be used by zoneadm. Until the in-memory configuration is committed, you can remove changes with the revert subcommand. The commit operation is attempted automatically upon completion of a zonecfg session. Since a configuration must be correct to be committed, this operation automatically does a verify.

    create [-F] [ -a path |-b | -t template]

    Create an in-memory configuration for the specified zone. Use create to begin to configure a new zone. See commit for saving this to stable storage.

    If you are overwriting an existing configuration, specify the -F option to force the action. Specify the -t template option to create a configuration identical to template, where template is the name of a configured zone.

    Use the -a path option to facilitate configuring a detached zone on a new host. The path parameter is the zonepath location of a detached zone that has been moved on to this new host. Once the detached zone is configured, it should be installed using the “zoneadm attach” command (see zoneadm(1M)). All validation of the new zone happens during the attach process, not during zone configuration.

    Use the -b option to create a blank configuration. Without arguments, create applies the Sun default settings.

    delete [-F]

    Delete the specified configuration from memory and stable storage. This action is instantaneous, no commit is necessary. A deleted configuration cannot be reverted.

    Specify the -F option to force the action.

    end

    End the resource specification. This subcommand is only applicable in the resource scope. zonecfg checks to make sure the current resource is completely specified. If so, it is added to the in-memory configuration (see commit for saving this to stable storage) and the scope reverts to global. If the specification is incomplete, it issues an appropriate error message.

    export [-f output-file]

    Print configuration to standard output. Use the -f option to print the configuration to output-file. This option produces output in a form suitable for use in a command file.
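    A common use of export (a hedged sketch; zone and file names are hypothetical) is to dump one zone's configuration to a command file and then replay it with -f to configure another zone:

```
example# zonecfg -z my-zone export -f /tmp/my-zone.cfg
example# zonecfg -z my-zone2 -f /tmp/my-zone.cfg
```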

    help [usage] [subcommand] [syntax] [command-name]

    Print general help or help about given topic.

    info zonename | zonepath | autoboot | brand | pool | limitpriv
    info [resource-type [property-name=property-value]*]

    Display information about the current configuration. If resource-type is specified, displays only information about resources of the relevant type. If any property-name value pairs are specified, displays only information about resources meeting the given criteria. In the resource scope, any arguments are ignored, and info displays information about the resource which is currently being added or modified.

    remove resource-type [property-name=property-value]... (global scope)

    In the global scope, removes the specified resource. The [] syntax means zero or more of whatever is inside the square brackets. If you want to remove only a single instance of the resource, you must specify enough property name-value pairs for the resource to be uniquely identified. If no property name-value pairs are specified, all instances will be removed. If more than one instance would be removed, a confirmation is required, unless you use the -F option.

    select resource-type {property-name=property-value}

    Select the resource of the given type which matches the given property-name property-value pair criteria, for modification. This subcommand is applicable only in the global scope. The scope is changed to that resource type. The {} syntax means one or more of whatever is inside the curly braces. You must specify enough property-name property-value pairs for the resource to be uniquely identified.

    set property-name=property-value

    Set a given property name to the given value. Some properties (for example, zonename and zonepath) are global while others are resource-specific. This subcommand is applicable in both the global and resource scopes.

    verify

    Verify the current configuration for correctness:

    • All resources have all of their required properties specified.

    • A zonepath is specified.

    revert [-F]

    Revert the configuration back to the last committed state. The -F option can be used to force the action.

    exit [-F]

    Exit the zonecfg session. A commit is automatically attempted if needed. You can also use an EOF character to exit zonecfg. The -F option can be used to force the action.

Examples


    Example 1 Creating the Environment for a New Zone

    In the following example, zonecfg creates the environment for a new zone. The global zone's /opt/local directory is loopback mounted into the zone at /usr/local, /opt/sfw is loopback mounted from the global zone, three logical network interfaces are added, and a limit on the number of fair-share scheduler (FSS) CPU shares for the zone is set with the cpu-shares property. The example also adds a capped-memory resource.


    example# zonecfg -z myzone3
    myzone3: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:myzone3> create
    zonecfg:myzone3> set zonepath=/export/home/my-zone3
    zonecfg:myzone3> set autoboot=true
    zonecfg:myzone3> add fs
    zonecfg:myzone3:fs> set dir=/usr/local
    zonecfg:myzone3:fs> set special=/opt/local
    zonecfg:myzone3:fs> set type=lofs
    zonecfg:myzone3:fs> add options [ro,nodevices]
    zonecfg:myzone3:fs> end
    zonecfg:myzone3> add fs
    zonecfg:myzone3:fs> set dir=/mnt
    zonecfg:myzone3:fs> set special=/dev/dsk/c0t0d0s7
    zonecfg:myzone3:fs> set raw=/dev/rdsk/c0t0d0s7
    zonecfg:myzone3:fs> set type=ufs
    zonecfg:myzone3:fs> end
    zonecfg:myzone3> add inherit-pkg-dir
    zonecfg:myzone3:inherit-pkg-dir> set dir=/opt/sfw
    zonecfg:myzone3:inherit-pkg-dir> end
    zonecfg:myzone3> add net
    zonecfg:myzone3:net> set address=192.168.0.1/24
    zonecfg:myzone3:net> set physical=eri0
    zonecfg:myzone3:net> end
    zonecfg:myzone3> add net
    zonecfg:myzone3:net> set address=192.168.1.2/24
    zonecfg:myzone3:net> set physical=eri0
    zonecfg:myzone3:net> end
    zonecfg:myzone3> add net
    zonecfg:myzone3:net> set address=192.168.2.3/24
    zonecfg:myzone3:net> set physical=eri0
    zonecfg:myzone3:net> end
    zonecfg:myzone3> set cpu-shares=5
    zonecfg:myzone3> add capped-memory
    zonecfg:myzone3:capped-memory> set physical=50m
    zonecfg:myzone3:capped-memory> set swap=100m
    zonecfg:myzone3:capped-memory> end
    zonecfg:myzone3> exit
    


    Example 2 Creating a Non-Native Zone

    The following example creates a new Linux zone:


    example# zonecfg -z lxzone
    lxzone: No such zone configured
    Use 'create' to begin configuring a new zone
    zonecfg:lxzone> create -t SUNWlx
    zonecfg:lxzone> set zonepath=/export/zones/lxzone
    zonecfg:lxzone> set autoboot=true
    zonecfg:lxzone> exit
    


    Example 3 Creating an Exclusive-IP Zone

    The following example creates a zone that is granted exclusive access to bge1 and bge33000 and that is isolated at the IP layer from the other zones configured on the system.

    The IP addresses and routing are configured inside the new zone using sysidtool(1M).


    example# zonecfg -z excl
    excl: No such zone configured
    Use 'create' to begin configuring a new zone
    zonecfg:excl> create
    zonecfg:excl> set zonepath=/export/zones/excl
    zonecfg:excl> set ip-type=exclusive
    zonecfg:excl> add net
    zonecfg:excl:net> set physical=bge1
    zonecfg:excl:net> end
    zonecfg:excl> add net
    zonecfg:excl:net> set physical=bge33000
    zonecfg:excl:net> end
    zonecfg:excl> exit
    


    Example 4 Associating a Zone with a Resource Pool

    The following example shows how to associate an existing zone with an existing resource pool:


    example# zonecfg -z myzone
    zonecfg:myzone> set pool=mypool
    zonecfg:myzone> exit
    

    For more information about resource pools, see pooladm(1M) and poolcfg(1M).



    Example 5 Changing the Name of a Zone

    The following example shows how to change the name of an existing zone:


    example# zonecfg -z myzone
    zonecfg:myzone> set zonename=myzone2
    zonecfg:myzone2> exit
    


    Example 6 Changing the Privilege Set of a Zone

    The following example shows how to change the set of privileges to which an existing zone's processes will be limited the next time the zone boots. In this particular case, the privilege set is the standard, safe set of privileges a zone normally has, along with the privilege to change the system date and time:


    example# zonecfg -z myzone
    zonecfg:myzone> set limitpriv="default,sys_time"
    zonecfg:myzone> exit
    


    Example 7 Setting the zone.cpu-shares Property for the Global Zone

    The following command sets the zone.cpu-shares property for the global zone:


    example# zonecfg -z global
    zonecfg:global> set cpu-shares=5
    zonecfg:global> exit
    


    Example 8 Using Pattern Matching

    The following commands illustrate zonecfg support for pattern matching. In the zone flexlm, enter:


    zonecfg:flexlm> add device
    zonecfg:flexlm:device> set match="/dev/cua/a00[2-5]"
    zonecfg:flexlm:device> end
    

    In the global zone, enter:


    global# ls /dev/cua
    a     a000  a001  a002  a003  a004  a005  a006  a007  b

    In the zone flexlm, enter:


    flexlm# ls /dev/cua
    a002  a003  a004  a005


    Example 9 Setting a Cap for a Zone to Three CPUs

    The following sequence uses the zonecfg command to set the CPU cap for a zone to three CPUs.


    zonecfg:myzone> add capped-cpu
    zonecfg:myzone:capped-cpu> set ncpus=3
    zonecfg:myzone:capped-cpu> end
    

    The preceding sequence, which uses the capped-cpu property, is equivalent to the following sequence, which makes use of the zone.cpu-cap resource control.


    zonecfg:myzone> add rctl
    zonecfg:myzone:rctl> set name=zone.cpu-cap
    zonecfg:myzone:rctl> add value (priv=privileged,limit=300,action=none)
    zonecfg:myzone:rctl> end
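
    Because zone.cpu-cap is an ordinary resource control, the cap active on a
    running zone can also be inspected with prctl(1). A minimal sketch,
    assuming the zone is named myzone:


    example# prctl -n zone.cpu-cap -i zone myzone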
    


    Example 10 Using kstat to Monitor CPU Caps

    The following command displays information about all CPU caps.


    # kstat -n /cpucaps/
    module: caps                            instance: 0     
    name:   cpucaps_project_0               class:    project_caps
            above_sec                       0
            below_sec                       2157
            crtime                          821.048183159
            maxusage                        2
            nwait                           0
            snaptime                        235885.637253027
            usage                           0
            value                           18446743151372347932
            zonename                        global
    
    module: caps                            instance: 0     
    name:   cpucaps_project_1               class:    project_caps
            above_sec                       0
            below_sec                       0
            crtime                          225339.192787265
            maxusage                        5
            nwait                           0
            snaptime                        235885.637591677
            usage                           5
            value                           18446743151372347932
            zonename                        global
    
    module: caps                            instance: 0     
    name:   cpucaps_project_201             class:    project_caps
            above_sec                       0
            below_sec                       235105
            crtime                          780.37961782
            maxusage                        100
            nwait                           0
            snaptime                        235885.637789687
            usage                           43
            value                           100
            zonename                        global
    
    module: caps                            instance: 0     
    name:   cpucaps_project_202             class:    project_caps
            above_sec                       0
            below_sec                       235094
            crtime                          791.72983782
            maxusage                        100
            nwait                           0
            snaptime                        235885.637967512
            usage                           48
            value                           100
            zonename                        global
    
    module: caps                            instance: 0     
    name:   cpucaps_project_203             class:    project_caps
            above_sec                       0
            below_sec                       235034
            crtime                          852.104401481
            maxusage                        75
            nwait                           0
            snaptime                        235885.638144304
            usage                           47
            value                           100
            zonename                        global
    
    module: caps                            instance: 0     
    name:   cpucaps_project_86710           class:    project_caps
            above_sec                       22
            below_sec                       235166
            crtime                          698.441717859
            maxusage                        101
            nwait                           0
            snaptime                        235885.638319871
            usage                           54
            value                           100
            zonename                        global
    
    module: caps                            instance: 0     
    name:   cpucaps_zone_0                  class:    zone_caps
            above_sec                       100733
            below_sec                       134332
            crtime                          821.048177123
            maxusage                        207
            nwait                           2
            snaptime                        235885.638497731
            usage                           199
            value                           200
            zonename                        global
    
    module: caps                            instance: 1     
    name:   cpucaps_project_0               class:    project_caps
            above_sec                       0
            below_sec                       0
            crtime                          225360.256448422
            maxusage                        7
            nwait                           0
            snaptime                        235885.638714404
            usage                           7
            value                           18446743151372347932
            zonename                        test_001
    
    module: caps                            instance: 1     
    name:   cpucaps_zone_1                  class:    zone_caps
            above_sec                       2
            below_sec                       10524
            crtime                          225360.256440278
            maxusage                        106
            nwait                           0
            snaptime                        235885.638896443
            usage                           7
            value                           100
            zonename                        test_001


    Example 11 Displaying CPU Caps for a Specific Zone or Project

    Using the kstat -c and -i options, you can display CPU caps for a specific zone or project, as shown below. The first command displays caps for all projects; the second restricts the display to projects in zone instance 1.


    # kstat -c project_caps
    
    # kstat -c project_caps -i 1
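
    Caps of class zone_caps can be selected the same way, with -i restricting
    the display to a single zone instance (the instance number is
    system-dependent):


    # kstat -c zone_caps -i 1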
    

Exit Status

    The following exit values are returned:

    0

    Successful completion.

    1

    An error occurred.

    2

    Invalid usage.
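
    In a shell script, these values can be used to distinguish failure modes.
    A minimal sketch, assuming a configured zone named myzone:


    zonecfg -z myzone info > /dev/null 2>&1
    case $? in
        0) echo "success" ;;
        1) echo "an error occurred" ;;
        2) echo "invalid usage" ;;
    esac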

Attributes

    See attributes(5) for descriptions of the following attributes:

    ATTRIBUTE TYPE

    ATTRIBUTE VALUE

    Availability

    SUNWzoneu

    Interface Stability

    Volatile

See Also

Notes

    All character data used by zonecfg must be in US-ASCII encoding.


2010-09-05 00:33:02

Name

    zpool - configures ZFS storage pools

Synopsis

    zpool [-?]
    zpool create [-fn] [-o property=value] ... [-O file-system-property=value] ...
         [-m mountpoint] [-R root] pool vdev ...
    zpool destroy [-f] pool
    
    zpool add [-fn] pool vdev ...
    zpool remove pool device ...
    zpool list [-H] [-o property[,...]] [pool] ...
    zpool iostat [-v] [pool] ... [interval [count]]
    zpool status [-xv] [pool] ...
    zpool online pool device ...
    zpool offline [-t] pool device ...
    zpool clear pool [device]
    zpool attach [-f] pool device new_device
    
    zpool detach pool device
    
    zpool replace [-f] pool device [new_device]
    zpool scrub [-s] pool ...
    zpool import [-d dir] [-D]
    zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile] 
         [-D] [-f] [-R root] -a
    
    zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile]
         [-D] [-f] [-R root] pool |id [newpool]
    zpool export [-f] pool ...
    zpool upgrade 
    
    zpool upgrade -v
    
    zpool upgrade [-V version] -a | pool ...
    zpool history [-il] [pool] ...
    zpool get "all" | property[,...] pool ...
    zpool set property=value pool
    

Description

    The zpool command configures ZFS storage pools. A storage pool is a collection of devices that provides physical storage and data replication for ZFS datasets.

    All datasets within a storage pool share the same space. See zfs(1M) for information on managing datasets.

    Virtual Devices (vdevs)

      A “virtual device” describes a single device or a collection of devices organized according to certain performance and fault characteristics. The following virtual devices are supported:

      disk

      A block device, typically located under “/dev/dsk”. ZFS can use individual slices or partitions, though the recommended mode of operation is to use whole disks. A disk can be specified by a full path, or it can be a shorthand name (the relative portion of the path under “/dev/dsk”). A whole disk can be specified by omitting the slice or partition designation. For example, “c0t0d0” is equivalent to “/dev/dsk/c0t0d0s2”. When given a whole disk, ZFS automatically labels the disk, if necessary.
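
      For example, the following two invocations name the same disk; in the
      first, ZFS labels the whole disk automatically if necessary (device
      names are illustrative):


      # zpool create tank c0t0d0
      # zpool create tank /dev/dsk/c0t0d0s2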

      file

      A regular file. The use of files as a backing store is strongly discouraged. It is designed primarily for experimental purposes, as the fault tolerance of a file is only as good as the file system of which it is a part. A file must be specified by a full path.

      mirror

      A mirror of two or more devices. Data is replicated in an identical fashion across all components of a mirror. A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices failing before data integrity is compromised.

      raidz
      raidz1
      raidz2

      A variation on RAID-5 that allows for better distribution of parity and eliminates the “RAID-5 write hole” (in which data and parity become inconsistent after a power loss). Data and parity is striped across all disks within a raidz group.

      A raidz group can have either single- or double-parity, meaning that the raidz group can sustain one or two failures respectively without losing any data. The raidz1 vdev type specifies a single-parity raidz group and the raidz2 vdev type specifies a double-parity raidz group. The raidz vdev type is an alias for raidz1.

      A raidz group with N disks of size X with P parity disks can hold approximately (N-P)*X bytes and can withstand P device(s) failing before data integrity is compromised. The minimum number of devices in a raidz group is one more than the number of parity disks. The recommended number is between 3 and 9 to help increase performance.
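
      As a worked example (with illustrative device names), a raidz2 group of
      six 1-Tbyte disks holds approximately (6-2)*1 = 4 Tbyte and can
      withstand any two devices failing:


      # zpool create tank raidz2 c0d0 c1d0 c2d0 c3d0 c4d0 c5d0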

      spare

      A special pseudo-vdev which keeps track of available hot spares for a pool. For more information, see the “Hot Spares” section.

      log

      A separate intent log device. If more than one log device is specified, then writes are load-balanced between devices. Log devices can be mirrored. However, raidz and raidz2 are not supported for the intent log. For more information, see the “Intent Log” section.

      cache

      A device used to cache storage pool data. A cache device cannot be configured as a mirror or raidz group. For more information, see the “Cache Devices” section.

      Virtual devices cannot be nested, so a mirror or raidz virtual device can only contain files or disks. Mirrors of mirrors (or other combinations) are not allowed.

      A pool can have any number of virtual devices at the top of the configuration (known as “root vdevs”). Data is dynamically distributed across all top-level devices to balance data among devices. As new virtual devices are added, ZFS automatically places data on the newly available devices.

      Virtual devices are specified one at a time on the command line, separated by whitespace. The keywords “mirror” and “raidz” are used to distinguish where a group ends and another begins. For example, the following creates two root vdevs, each a mirror of two disks:


      # zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
      

    Device Failure and Recovery

      ZFS supports a rich set of mechanisms for handling device failure and data corruption. All metadata and data is checksummed, and ZFS automatically repairs bad data from a good copy when corruption is detected.

      In order to take advantage of these features, a pool must make use of some form of redundancy, using either mirrored or raidz groups. While ZFS supports running in a non-redundant configuration, where each root vdev is simply a disk or file, this is strongly discouraged. A single case of bit corruption can render some or all of your data unavailable.

      A pool's health status is described by one of three states: online, degraded, or faulted. An online pool has all devices operating normally. A degraded pool is one in which one or more devices have failed, but the data is still available due to a redundant configuration. A faulted pool has corrupted metadata, or one or more faulted devices, and insufficient replicas to continue functioning.

      The health of the top-level vdev, such as mirror or raidz device, is potentially impacted by the state of its associated vdevs, or component devices. A top-level vdev or component device is in one of the following states:

      DEGRADED

      One or more top-level vdevs is in the degraded state because one or more component devices are offline. Sufficient replicas exist to continue functioning.

      One or more component devices is in the degraded or faulted state, but sufficient replicas exist to continue functioning. The underlying conditions are as follows:

      • The number of checksum errors exceeds acceptable levels and the device is degraded as an indication that something may be wrong. ZFS continues to use the device as necessary.

      • The number of I/O errors exceeds acceptable levels. The device could not be marked as faulted because there are insufficient replicas to continue functioning.

      FAULTED

      One or more top-level vdevs is in the faulted state because one or more component devices are offline. Insufficient replicas exist to continue functioning.

      One or more component devices is in the faulted state, and insufficient replicas exist to continue functioning. The underlying conditions are as follows:

      • The device could be opened, but the contents did not match expected values.

      • The number of I/O errors exceeds acceptable levels and the device is faulted to prevent further use of the device.

      OFFLINE

      The device was explicitly taken offline by the “zpool offline” command.

      ONLINE

      The device is online and functioning.

      REMOVED

      The device was physically removed while the system was running. Device removal detection is hardware-dependent and may not be supported on all platforms.

      UNAVAIL

      The device could not be opened. If a pool is imported when a device was unavailable, then the device will be identified by a unique identifier instead of its path since the path was never correct in the first place.

      If a device is removed and later re-attached to the system, ZFS attempts to put the device online automatically. Device attach detection is hardware-dependent and might not be supported on all platforms.

    Hot Spares

      ZFS allows devices to be associated with pools as “hot spares”. These devices are not actively used in the pool, but when an active device fails, it is automatically replaced by a hot spare. To create a pool with hot spares, specify a “spare” vdev with any number of devices. For example,


      # zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0

      Spares can be shared across multiple pools, and can be added with the “zpool add” command and removed with the “zpool remove” command. Once a spare replacement is initiated, a new “spare” vdev is created within the configuration and remains there until the original device is replaced. At this point, the hot spare becomes available again if another device fails.

      If a pool has a shared spare that is currently being used, the pool can not be exported since other pools may use this shared spare, which may lead to potential data corruption.

      An in-progress spare replacement can be cancelled by detaching the hot spare. If the original faulted device is detached, then the hot spare assumes its place in the configuration, and is removed from the spare list of all active pools.

      Spares cannot replace log devices.
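
      For example, an in-progress replacement by the hot spare c3d0 could be
      cancelled by detaching it (device name illustrative):


      # zpool detach pool c3d0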

    Intent Log

      The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous transactions. For instance, databases often require their transactions to be on stable storage devices when returning from a system call. NFS and other applications can also use fsync() to ensure data stability. By default, the intent log is allocated from blocks within the main pool. However, it might be possible to get better performance using separate intent log devices such as NVRAM or a dedicated disk. For example:


      # zpool create pool c0d0 c1d0 log c2d0
      

      Multiple log devices can also be specified, and they can be mirrored. See the EXAMPLES section for an example of mirroring multiple log devices.

      Log devices can be added, replaced, attached, detached, and imported and exported as part of the larger pool.
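
      For example, a pool with a mirrored pair of separate log devices could
      be created as follows (device names illustrative):


      # zpool create pool mirror c0d0 c1d0 log mirror c2d0 c3d0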

    Cache Devices

      Devices can be added to a storage pool as “cache devices.” These devices provide an additional layer of caching between main memory and disk. For read-heavy workloads, where the working set size is much larger than what can be cached in main memory, using cache devices allows much more of the working set to be served from low-latency media. Using cache devices provides the greatest performance improvement for random read workloads of mostly static content.

      To create a pool with cache devices, specify a “cache” vdev with any number of devices. For example:


      # zpool create pool c0d0 c1d0 cache c2d0 c3d0
      

      Cache devices cannot be mirrored or part of a raidz configuration. If a read error is encountered on a cache device, that read I/O is reissued to the original storage pool device, which might be part of a mirrored or raidz configuration.

      The content of the cache devices is considered volatile, as is the case with other system caches.

    Properties

      Each pool has several properties associated with it. Some properties are read-only statistics while others are configurable and change the behavior of the pool. The following are read-only properties:

      available

      Amount of storage available within the pool. This property can also be referred to by its shortened column name, “avail”.

      capacity

      Percentage of pool space used. This property can also be referred to by its shortened column name, “cap”.

      health

      The current health of the pool. Health can be “ONLINE”, “DEGRADED”, “FAULTED”, “OFFLINE”, “REMOVED”, or “UNAVAIL”.

      guid

      A unique identifier for the pool.

      size

      Total size of the storage pool.

      used

      Amount of storage space used within the pool.

      These space usage properties report actual physical space available to the storage pool. The physical space can be different from the total amount of space that any contained datasets can actually use. The amount of space used in a raidz configuration depends on the characteristics of the data being written. In addition, ZFS reserves some space for internal accounting that the zfs(1M) command takes into account, but the zpool command does not. For non-full pools of a reasonable size, these effects should be invisible. For small pools, or pools that are close to being completely full, these discrepancies may become more noticeable.

      The following property can be set at creation time and import time:

      altroot

      Alternate root directory. If set, this directory is prepended to any mount points within the pool. This can be used when examining an unknown pool where the mount points cannot be trusted, or in an alternate boot environment, where the typical paths are not valid. altroot is not a persistent property. It is valid only while the system is up. Setting altroot defaults to using cachefile=none, though this may be overridden using an explicit setting.

      The following properties can be set at creation time and import time, and later changed with the “zpool set” command:

      autoreplace=on | off

      Controls automatic device replacement. If set to “off”, device replacement must be initiated by the administrator by using the “zpool replace” command. If set to “on”, any new device, found in the same physical location as a device that previously belonged to the pool, is automatically formatted and replaced. The default behavior is “off”. This property can also be referred to by its shortened column name, “replace”.

      bootfs=pool/dataset

      Identifies the default bootable dataset for the root pool. This property is expected to be set mainly by the installation and upgrade programs.

      cachefile=path | “none”

      Controls the location where the pool configuration is cached. Discovering all pools on system startup requires a cached copy of the configuration data that is stored on the root file system. All pools in this cache are automatically imported when the system boots. Some environments, such as install and clustering, need to cache this information in a different location so that pools are not automatically imported. Setting this property caches the pool configuration in a different location that can later be imported with “zpool import -c”. Setting it to the special value “none” creates a temporary pool that is never cached, and the special value '' (empty string) uses the default location.

      Multiple pools can share the same cache file. Because the kernel destroys and recreates this file when pools are added and removed, care should be taken when attempting to access this file. When the last pool using a cachefile is exported or destroyed, the file is removed.
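
      For example, the property can point at an alternate location, and pools
      recorded there can later be imported with “zpool import -c” (pool name
      and path are illustrative):


      # zpool set cachefile=/etc/zfs/alt.cache tank
      # zpool import -c /etc/zfs/alt.cache tank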

      delegation=on | off

      Controls whether a non-privileged user is granted access based on the dataset permissions defined on the dataset. See zfs(1M) for more information on ZFS delegated administration.

      failmode=wait | continue | panic

      Controls the system behavior in the event of catastrophic pool failure. This condition is typically a result of a loss of connectivity to the underlying storage device(s) or a failure of all devices within the pool. The behavior of such an event is determined as follows:

      wait

      Blocks all I/O access until the device connectivity is recovered and the errors are cleared. This is the default behavior.

      continue

      Returns EIO to any new write I/O requests but allows reads to any of the remaining healthy devices. Any write requests that have yet to be committed to disk would be blocked.

      panic

      Prints out a message to the console and generates a system crash dump.

      version=version

      The current on-disk version of the pool. This can be increased, but never decreased. The preferred method of updating pools is with the “zpool upgrade” command, though this property can be used when a specific version is needed for backwards compatibility. This property can be any number between 1 and the current version reported by “zpool upgrade -v”. The special value “current” is an alias for the latest supported version.
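
      For example, a pool can be held at a specific older version for
      backwards compatibility, or moved to the latest supported version (pool
      name and version number are illustrative):


      # zpool set version=8 tank
      # zpool set version=current tank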

    Subcommands

      All subcommands that modify state are logged persistently to the pool in their original form.

      The zpool command provides subcommands to create and destroy storage pools, add capacity to storage pools, and provide information about the storage pools. The following subcommands are supported:

      zpool -?

      Displays a help message.

      zpool create [-fn] [-o property=value] ... [-O file-system-property=value] ... [-m mountpoint] [-R root] pool vdev ...

      Creates a new storage pool containing the virtual devices specified on the command line. The pool name must begin with a letter, and can only contain alphanumeric characters as well as underscore (“_”), dash (“-”), and period (“.”). The pool names “mirror”, “raidz”, “spare” and “log” are reserved, as are names beginning with the pattern “c[0-9]”. The vdev specification is described in the “Virtual Devices” section.

      The command verifies that each device specified is accessible and not currently in use by another subsystem. There are some uses, such as being currently mounted, or specified as the dedicated dump device, that prevent a device from ever being used by ZFS. Other uses, such as having a preexisting UFS file system, can be overridden with the -f option.

      The command also checks that the replication strategy for the pool is consistent. An attempt to combine redundant and non-redundant storage in a single pool, or to mix disks and files, results in an error unless -f is specified. The use of differently sized devices within a single raidz or mirror group is also flagged as an error unless -f is specified.

      Unless the -R option is specified, the default mount point is “/pool”. The mount point must not exist or must be empty, or else the root dataset cannot be mounted. This can be overridden with the -m option.

      -f

      Forces use of vdevs, even if they appear in use or specify a conflicting replication level. Not all devices can be overridden in this manner.

      -n

      Displays the configuration that would be used without actually creating the pool. The actual pool creation can still fail due to insufficient privileges or device sharing.

      -o property=value [-o property=value] ...

      Sets the given pool properties. See the “Properties” section for a list of valid properties that can be set.

      -O file-system-property=value [-O file-system-property=value] ...

      Sets the given file system properties in the root file system of the pool. See the “Properties” section of zfs(1M) for a list of valid properties that can be set.

      -R root

      Equivalent to “-o cachefile=none,altroot=root”

      -m mountpoint

      Sets the mount point for the root dataset. The default mount point is “/pool” or “altroot/pool” if altroot is specified. The mount point must be an absolute path, “legacy”, or “none”. For more information on dataset mount points, see zfs(1M).
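
      For example, the following previews a mirrored pool with an explicit
      mount point before creating it (device names are illustrative):


      # zpool create -n -m /export/tank tank mirror c0t0d0 c0t1d0
      # zpool create -m /export/tank tank mirror c0t0d0 c0t1d0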

      zpool destroy [-f] pool

      Destroys the given pool, freeing up any devices for other use. This command tries to unmount any active datasets before destroying the pool.

      -f

      Forces any active datasets contained within the pool to be unmounted.

      zpool add [-fn] pool vdev ...

      Adds the specified virtual devices to the given pool. The vdev specification is described in the “Virtual Devices” section. The behavior of the -f option, and the device checks performed are described in the “zpool create” subcommand.

      -f

      Forces use of vdevs, even if they appear in use or specify a conflicting replication level. Not all devices can be overridden in this manner.

      -n

      Displays the configuration that would be used without actually adding the vdevs. The actual addition can still fail due to insufficient privileges or device sharing.

      Do not add a disk that is currently configured as a quorum device to a zpool. After a disk is in the pool, that disk can then be configured as a quorum device.

      zpool remove pool device ...

      Removes the specified device from the pool. This command currently only supports removing hot spares and cache devices. Devices that are part of a mirrored configuration can be removed using the zpool detach command. Non-redundant and raidz devices cannot be removed from a pool.

      zpool list [-H] [-o props[,...]] [pool] ...

      Lists the given pools along with a health status and space usage. When given no arguments, all pools in the system are listed.

      -H

      Scripted mode. Do not display headers, and separate fields by a single tab instead of arbitrary space.

      -o props

      Comma-separated list of properties to display. See the “Properties” section for a list of valid properties. The default list is “name, size, used, available, capacity, health, altroot”
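
      For example, scripted mode with an explicit property list produces
      stable, tab-separated output suitable for parsing:


      # zpool list -H -o name,size,health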

      zpool iostat [-v] [pool] ... [interval [count]]

      Displays I/O statistics for the given pools. When given an interval, the statistics are printed every interval seconds until Ctrl-C is pressed. If no pools are specified, statistics for every pool in the system are shown. If count is specified, the command exits after count reports are printed.

      -v

      Verbose statistics. Reports usage statistics for individual vdevs within the pool, in addition to the pool-wide statistics.

      zpool status [-xv] [pool] ...

      Displays the detailed health status for the given pools. If no pool is specified, then the status of each pool in the system is displayed. For more information on pool and device health, see the “Device Failure and Recovery” section.

      If a scrub or resilver is in progress, this command reports the percentage done and the estimated time to completion. Both of these are only approximate, because the amount of data in the pool and the other workloads on the system can change.

      -x

      Only display status for pools that are exhibiting errors or are otherwise unavailable.

      -v

      Displays verbose data error information, printing out a complete list of all data errors since the last complete pool scrub.
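Because -x reports only ailing pools, it suits periodic health checks. A hedged monitoring sketch; the stub function stands in for `zpool status -x` on a healthy system, and the exact summary text is an assumption:

```shell
#!/bin/sh
# Alert unless "zpool status -x" reports that all pools are healthy.
# zpool_status_x is a stub so the logic can run anywhere; in practice,
# replace it with the real command: zpool status -x
zpool_status_x() { echo "all pools are healthy"; }

if [ "$(zpool_status_x)" = "all pools are healthy" ]; then
    echo "OK"
else
    echo "ATTENTION:"
    zpool_status_x
fi
```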

      zpool online pool device ...

      Brings the specified physical device online.

      This command is not applicable to spares or cache devices.

      zpool offline [-t] pool device ...

      Takes the specified physical device offline. While the device is offline, no attempt is made to read or write to the device.

      This command is not applicable to spares or cache devices.

      -t

      Temporary. Upon reboot, the specified physical device reverts to its previous state.

      zpool clear pool [device] ...

      Clears device errors in a pool. If no arguments are specified, all device errors within the pool are cleared. If one or more devices is specified, only those errors associated with the specified device or devices are cleared.
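A typical sequence after a transient fault (for example, a reseated cable) might look like the following sketch; the pool and device names are illustrative assumptions:

```shell
# Clear the error counters for one device, then re-check health.
zpool clear tank c0t1d0
zpool status -x tank
```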

      zpool attach [-f] pool device new_device

      Attaches new_device to an existing zpool device. The existing device cannot be part of a raidz configuration. If device is not currently part of a mirrored configuration, device automatically transforms into a two-way mirror of device and new_device. If device is part of a two-way mirror, attaching new_device creates a three-way mirror, and so on. In either case, new_device begins to resilver immediately.

      -f

      Forces use of new_device, even if it appears to be in use. Not all devices can be overridden in this manner.

      zpool detach pool device

      Detaches device from a mirror. The operation is refused if there are no other valid replicas of the data.

      zpool replace [-f] pool old_device [new_device]

      Replaces old_device with new_device. This is equivalent to attaching new_device, waiting for it to resilver, and then detaching old_device.

      The size of new_device must be greater than or equal to the minimum size of all the devices in a mirror or raidz configuration.

      new_device is required if the pool is not redundant. If new_device is not specified, it defaults to old_device. This form of replacement is useful after an existing disk has failed and has been physically replaced. In this case, the new disk may have the same /dev/dsk path as the old device, even though it is actually a different disk. ZFS recognizes this.

      -f

      Forces use of new_device, even if it appears to be in use. Not all devices can be overridden in this manner.

      zpool scrub [-s] pool ...

      Begins a scrub. The scrub examines all data in the specified pools to verify that it checksums correctly. For replicated (mirror or raidz) devices, ZFS automatically repairs any damage discovered during the scrub. The “zpool status” command reports the progress of the scrub and summarizes the results of the scrub upon completion.

      Scrubbing and resilvering are very similar operations. The difference is that resilvering only examines data that ZFS knows to be out of date (for example, when attaching a new device to a mirror or replacing an existing device), whereas scrubbing examines all data to discover silent errors due to hardware faults or disk failure.

      Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows one at a time. If a scrub is already in progress, the “zpool scrub” command terminates it and starts a new scrub. If a resilver is in progress, ZFS does not allow a scrub to be started until the resilver completes.

      -s

      Stop scrubbing.
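Taken together, starting, monitoring, and stopping a scrub looks like this sketch (the pool name is illustrative):

```shell
zpool scrub tank        # begin verifying all data in the pool
zpool status tank       # reports percent done and estimated completion
zpool scrub -s tank     # stop the scrub before it completes
```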

      zpool import [-d dir | -c cachefile] [-D]

      Lists pools available to import. If the -d option is not specified, this command searches for devices in “/dev/dsk”. The -d option can be specified multiple times, and all directories are searched. If the device appears to be part of an exported pool, this command displays a summary of the pool with the name of the pool, a numeric identifier, as well as the vdev layout and current health of the device for each device or file. Destroyed pools, pools that were previously destroyed with the “zpool destroy” command, are not listed unless the -D option is specified.

      The numeric identifier is unique, and can be used instead of the pool name when multiple exported pools of the same name are available.

      -c cachefile

      Reads configuration from the given cachefile that was created with the “cachefile” pool property. This cachefile is used instead of searching for devices.

      -d dir

      Searches for devices or files in dir. The -d option can be specified multiple times.

      -D

      Lists destroyed pools only.

      zpool import [-o mntopts] [ -o property=value] ... [-d dir | -c cachefile] [-D] [-f] [-R root] -a

      Imports all pools found in the search directories. Identical to the previous command, except that all pools with a sufficient number of devices available are imported. Destroyed pools, pools that were previously destroyed with the “zpool destroy” command, will not be imported unless the -D option is specified.

      -o mntopts

      Comma-separated list of mount options to use when mounting datasets within the pool. See zfs(1M) for a description of dataset properties and mount options.

      -o property=value

      Sets the specified property on the imported pool. See the “Properties” section for more information on the available pool properties.

      -c cachefile

      Reads configuration from the given cachefile that was created with the “cachefile” pool property. This cachefile is used instead of searching for devices.

      -d dir

      Searches for devices or files in dir. The -d option can be specified multiple times. This option is incompatible with the -c option.

      -D

      Imports destroyed pools only. The -f option is also required.

      -f

      Forces import, even if the pool appears to be potentially active.

      -a

      Searches for and imports all pools found.

      -R root

      Sets the “cachefile” property to “none” and the “altroot” property to “root”.

      zpool import [-o mntopts] [ -o property=value] ... [-d dir | -c cachefile] [-D] [-f] [-R root] pool | id [newpool]

      Imports a specific pool. A pool can be identified by its name or the numeric identifier. If newpool is specified, the pool is imported using the name newpool. Otherwise, it is imported with the same name as its exported name.

      If a device is removed from a system without running “zpool export” first, the device appears as potentially active. It cannot be determined if this was a failed export, or whether the device is really in use from another host. To import a pool in this state, the -f option is required.

      -o mntopts

      Comma-separated list of mount options to use when mounting datasets within the pool. See zfs(1M) for a description of dataset properties and mount options.

      -o property=value

      Sets the specified property on the imported pool. See the “Properties” section for more information on the available pool properties.

      -c cachefile

      Reads configuration from the given cachefile that was created with the “cachefile” pool property. This cachefile is used instead of searching for devices.

      -d dir

      Searches for devices or files in dir. The -d option can be specified multiple times. This option is incompatible with the -c option.

      -D

      Imports a destroyed pool. The -f option is also required.

      -f

      Forces import, even if the pool appears to be potentially active.

      -R root

      Sets the “cachefile” property to “none” and the “altroot” property to “root”.
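The forms above combine as in the following sketch; the pool name, numeric identifier, and alternate root are illustrative assumptions:

```shell
# Import "tank" under a new name:
zpool import tank tank2
# Import by numeric identifier, forcing past a potentially-active state:
zpool import -f 15451357997522795478 tank2
# Import under an alternate root, for example from recovery media:
zpool import -R /mnt tank
```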

      zpool export [-f] pool ...

      Exports the given pools from the system. All devices are marked as exported, but are still considered in use by other subsystems. The devices can be moved between systems (even those of different endianness) and imported as long as a sufficient number of devices are present.

      Before exporting the pool, all datasets within the pool are unmounted. A pool cannot be exported if it has a shared spare that is currently being used.

      For pools to be portable, you must give the zpool command whole disks, not just slices, so that ZFS can label the disks with portable EFI labels. Otherwise, disk drivers on platforms of different endianness will not recognize the disks.

      -f

      Forcefully unmount all datasets, using the “unmount -f” command.

      This command will forcefully export the pool even if it has a shared spare that is currently being used. This may lead to potential data corruption.

      zpool upgrade

      Displays all pools formatted using a different ZFS on-disk version. Older versions can continue to be used, but some features may not be available. These pools can be upgraded using “zpool upgrade -a”. Pools that are formatted with a more recent version are also displayed, although these pools will be inaccessible on the system.

      zpool upgrade -v

      Displays ZFS versions supported by the current software. The current ZFS versions and all previous supported versions are displayed, along with an explanation of the features provided with each version.

      zpool upgrade [-V version] -a | pool ...

      Upgrades the given pool to the latest on-disk version. Once this is done, the pool will no longer be accessible on systems running older versions of the software.

      -a

      Upgrades all pools.

      -V version

      Upgrade to the specified version. If the -V flag is not specified, the pool is upgraded to the most recent version. This option can only be used to increase the version number, and only up to the most recent version supported by this software.

      zpool history [-il] [pool] ...

      Displays the command history of the specified pools or all pools if no pool is specified.

      -i

      Displays internally logged ZFS events in addition to user initiated events.

      -l

      Displays log records in long format, which in addition to standard format includes, the user name, the hostname, and the zone in which the operation was performed.
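Combining both flags yields the fullest audit trail. A sketch (the pool name is illustrative):

```shell
# User-initiated and internal events, in long format
# with user name, hostname, and zone:
zpool history -il tank
```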

      zpool get "all" | property[,...] pool ...

      Retrieves the given list of properties (or all properties if “all” is used) for the specified storage pool(s). These properties are displayed with the following fields:


              name          Name of storage pool
              property      Property name
              value         Property value
              source        Property source, either 'default' or 'local'.

      See the “Properties” section for more information on the available pool properties.

      zpool set property=value pool

      Sets the given property on the specified pool. See the “Properties” section for more information on what properties can be set and acceptable values.
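A set is typically paired with a get to confirm the change. A sketch; the autoreplace property is an illustrative choice, assumed to be listed in the “Properties” section:

```shell
# Set a pool property, then read it back.
zpool set autoreplace=on tank
zpool get autoreplace tank
```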

Examples


    Example 1 Creating a RAID-Z Storage Pool

    The following command creates a pool with a single raidz root vdev that consists of six disks.


    # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
    


    Example 2 Creating a Mirrored Storage Pool

    The following command creates a pool with two mirrors, where each mirror contains two disks.


    # zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
    


    Example 3 Creating a ZFS Storage Pool by Using Slices

    The following command creates an unmirrored pool using two disk slices.


    # zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4
    


    Example 4 Creating a ZFS Storage Pool by Using Files

    The following command creates an unmirrored pool using files. While not recommended, a pool based on files can be useful for experimental purposes.


    # zpool create tank /path/to/file/a /path/to/file/b
    


    Example 5 Adding a Mirror to a ZFS Storage Pool

    The following command adds two mirrored disks to the pool tank, assuming the pool is already made up of two-way mirrors. The additional space is immediately available to any datasets within the pool.


    # zpool add tank mirror c1t0d0 c1t1d0
    


    Example 6 Listing Available ZFS Storage Pools

    The following command lists all available pools on the system. In this case, the pool zion is faulted due to a missing device.

    The results from this command are similar to the following:


    # zpool list
         NAME              SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
         pool             67.5G   2.92M   67.5G     0%  ONLINE     -
         tank             67.5G   2.92M   67.5G     0%  ONLINE     -
         zion                 -       -       -     0%  FAULTED    -


    Example 7 Destroying a ZFS Storage Pool

    The following command destroys the pool tank and any datasets contained within.


    # zpool destroy -f tank
    


    Example 8 Exporting a ZFS Storage Pool

    The following command exports the devices in pool tank so that they can be relocated or later imported.


    # zpool export tank
    


    Example 9 Importing a ZFS Storage Pool

    The following command displays available pools, and then imports the pool tank for use on the system.

    The results from this command are similar to the following:


    # zpool import
      pool: tank
        id: 15451357997522795478
     state: ONLINE
    action: The pool can be imported using its name or numeric identifier.
    config:
    
            tank        ONLINE
              mirror    ONLINE
                c1t2d0  ONLINE
                c1t3d0  ONLINE
    
    # zpool import tank
    


    Example 10 Upgrading All ZFS Storage Pools to the Current Version

    The following command upgrades all ZFS Storage pools to the current version of the software.


    # zpool upgrade -a
    This system is currently running ZFS version 2.


    Example 11 Managing Hot Spares

    The following command creates a new pool with an available hot spare:


    # zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
    

    If one of the disks were to fail, the pool would be reduced to the degraded state. The failed device can be replaced using the following command:


    # zpool replace tank c0t0d0 c0t3d0
    

    Once the data has been resilvered, the spare is automatically removed and is made available should another device fail. The hot spare can be permanently removed from the pool using the following command:


    # zpool remove tank c0t2d0
    


    Example 12 Creating a ZFS Pool with Mirrored Separate Intent Logs

    The following command creates a ZFS storage pool consisting of two two-way mirrors and mirrored log devices:


    # zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \
       c4d0 c5d0
    


    Example 13 Adding Cache Devices to a ZFS Pool

    The following command adds two disks for use as cache devices to a ZFS storage pool:


    # zpool add pool cache c2d0 c3d0
    

    Once added, the cache devices gradually fill with content from main memory. Depending on the size of your cache devices, it could take over an hour for them to fill. Capacity and reads can be monitored using the iostat subcommand as follows:


    # zpool iostat -v pool 5
    

Exit Status

    The following exit values are returned:

    0

    Successful completion.

    1

    An error occurred.

    2

    Invalid command line options were specified.

Attributes

    See attributes(5) for descriptions of the following attributes:

    ATTRIBUTE TYPE

    ATTRIBUTE VALUE

    Availability

    SUNWzfsu

    Interface Stability

    Evolving

See Also


2010-09-05 00:31:44

Name

    inetd – Solaris Management Facility delegated restarter for inet services

Synopsis

    inetd [configuration-file] start | stop | refresh
     svc:/network/inetd:default

Description

    inetd is the delegated restarter for internet services for the Service Management Facility (SMF). Its basic responsibilities are to manage service states in response to administrative requests, system failures, and service failures; and, when appropriate, to listen for network requests for services.

    Services are no longer managed by editing the inetd configuration file, inetd.conf(4). Instead, you use inetconv(1M) to convert the configuration file content into SMF format services, then manage these services using inetadm(1M) and svcadm(1M). Once a service has been converted by inetconv, any changes to the legacy data in the inetd config file will not become effective. However, inetd does alert the administrator when it notices a change in the configuration file. See the start description under the “inetd Methods” section for further information.

    Also note that the current inetd cannot be run from outside the SMF. This means it cannot be run from the command line, as was supported by the previous inetd. If you attempt to do this, a message is sent to stderr displaying mappings between the options supported by the previous inetd and the SMF version of inetd.

    inetd listens for connections on behalf of all services that are in either the online or degraded state. A service enters one of these states when the service is enabled by the user and inetd manages to listen on its behalf. A listen attempt can fail if another server (whether standalone or a third-party internet service) is already listening on the same port. When this occurs, inetd logs this condition and continues trying to bind to the port at configured intervals a configured number of times. See the property bind_fail_max under “Service Properties,” below, for more details.

    The configuration of all inetd's managed SMF services is read when it is started. It is reread when inetd is refreshed, which occurs in response to an SMF request, or when it receives a SIGHUP signal. See the refresh description under “inetd Methods” for the behavior on configuration refresh.

    You can use the inetadm(1M) or svccfg(1M) utilities to make configuration changes to Internet services within the SMF repository. inetadm has the advantage over svccfg in that it provides an Internet/RPC service context.

    Service States

      As part of its service management duties, inetd implements a state machine for each of its managed services. The states in this machine are made up of the smf(5) set of states. The semantics of these states are as follows:

      uninitialized

      inetd has yet to process this service.

      online

      The service is handling new network requests and might have existing connections active.

      degraded

      The service has entered this state because it was able to listen and process requests for some, but not all, of the protocols specified for the service, having exhausted its listen retries. Existing network connections might be active.

      offline

      Connections might be active, but no new requests are being handled. This is a transient state. A service might be offline for any of the following reasons:

      • The service's dependencies are unmet. When its dependencies become met the service's state will be re-evaluated.

      • The service has exceeded its configured connection rate limit, max_con_rate. The service's state is re-evaluated when its connection offline timer, con_rate_offline, expires.

      • The service has reached its allowed number of active connections, max_copies. The service's state is re-evaluated when the number of active connections drops below max_copies.

      • inetd failed to listen on behalf of the service on all its protocols. As mentioned above, inetd retries up to a configured maximum number of times, at configured intervals. The service's state is re-evaluated when either a listen attempt is successful or the retry limit is reached.

      disabled

      The service has been turned off by an administrator, is not accepting new connections, and has none active. Administrator intervention is required to exit this state.

      maintenance

      A service is in this state because it is either malfunctioning and needs administrator attention or because an administrator has requested it.

      Events constituting malfunctioning include: inetd's inability to listen on behalf of any of the service's protocols before exceeding the service's bind retry limit, non-start methods returning with non-success return values, and the service exceeding its failure rate.

      You request the maintenance state to perform maintenance on the service, such as applying a patch. No new requests are handled in this state, but existing connections might be active. Administrator intervention is required to exit this state.

      Use inetadm(1M) to obtain the current state of a managed service.

    Service Methods

      As part of certain state transitions inetd will execute, if supplied, one of a set of methods provided by the service. The set of supported methods are:

      inetd_start

      Executed to handle a request for an online or degraded service. Since there is no separate state to distinguish a service with active connections, this method is not executed as part of a state transition.

      inetd_offline

      Executed when a service is taken from the online or degraded state to the offline state. For a wait-type service that at the time of execution is performing its own listening, this method should result in it ceasing listening. This method will be executed before the disable method in the case an online/degraded service is disabled. This method is required to be implemented for a wait-type service.

      inetd_online

      Executed when a service transitions from the offline state to the online state. This method allows a service author to carry out some preparation prior to a service starting to handle requests.

      inetd_disable

      Executed when a service transitions from the offline state to the disabled state. It should result in any active connections for a service being terminated.

      inetd_refresh

      Executed when both of the following conditions are met:

      • inetd is refreshed, by means of the framework or a SIGHUP, or a request comes in to refresh the service, and

      • the service is currently in the online state and there are no configuration changes that would result in the service needing to be taken offline and brought back again.

      The only compulsory method is the inetd_start method. In the absence of any of the others, inetd runs no method but behaves as if one was run successfully.

    Service Properties

      Configuration for SMF–managed services is stored in the SMF repository. The configuration is made up of the basic configuration of a service, the configuration for each of the service's methods, and the default configuration applicable to all inetd-managed services.

      For details on viewing and modifying the configuration of a service and the defaults, refer to inetadm(1M).

      The basic configuration of a service is stored in a property group named inetd in the service. The properties comprising the basic configuration are as follows:

      bind_addr

      The address of the network interface to which the service should be bound. An empty string value causes the service to accept connections on any network interface.

      bind_fail_interval

      The time interval in seconds between a failed bind attempt and a retry. The values 0 and -1 specify that no retries are attempted and the first failure is handled the same as exceeding bind_fail_max.

      bind_fail_max

      The maximum number of times inetd retries binding to a service's associated port before giving up. The value -1 specifies that no retry limit is imposed. If none of the service's protocols were bound to before any imposed limit is reached, the service goes to the maintenance state; otherwise, if not all of the protocols were bound to, the service goes to the degraded state.

      con_rate_offline

      The time in seconds a service will remain offline if it exceeds its configured maximum connection rate, max_con_rate. The values 0 and -1 specify that connection rate limiting is disabled.

      connection_backlog

      The backlog queue size. Represents a limit on the number of incoming client requests that can be queued at the listening endpoints for servers.

      endpoint_type

      The type of the socket used by the service or the value tli to signify a TLI-based service. Valid socket type values are: stream, dgram, raw, seqpacket.

      failrate_cnt

      The count portion of the service's failure rate limit. The failure rate limit applies to wait-type services and is reached when count instances of the service are started within a given time. Exceeding the rate results in the service being transitioned to the maintenance state. This is different from the behavior of the previous inetd, which continued to retry every 10 minutes, indefinitely. The failrate_cnt check accounts for badly behaving servers that fail before consuming the service request and which would otherwise be continually restarted, taxing system resources. Failure rate is equivalent to the -r option of the previous inetd. The values 0 and -1 specify that this feature is disabled.

      failrate_interval

      The time portion in seconds of the service's failure rate. The values 0 and -1 specify that the failure rate limit feature is disabled.

      inherit_env

      If true, pass inetd's environment on to the service's start method. Regardless of this setting, inetd will set the variables SMF_FMRI, SMF_METHOD, and SMF_RESTARTER in the start method's environment, as well as any environment variables set in the method context. These variables are described in smf_method(5).

      isrpc

      If true, this is an RPC service.

      max_con_rate

      The maximum allowed connection rate, in connections per second, for a nowait-type service. The values 0 and -1 specify that connection rate limiting is disabled.

      max_copies

      The maximum number of copies of a nowait service that can run concurrently. The values 0 and -1 specify that copies limiting is disabled.

      name

      Can be set to one of the following values:

      proto

      In the case of socket-based services, this is a list of protocols supported by the service. Valid protocols are: tcp, tcp6, tcp6only, udp, udp6, and udp6only. In the case of TLI services, this is a list of netids recognized by getnetconfigent(3NSL) supported by the service, plus the values tcp6only and udp6only. RPC/TLI services also support nettypes in this list, and inetd first tries to interpret the list member as a nettype for these service types. The values tcp6only and udp6only are new to inetd; these values request that inetd listen only for and pass on true IPv6 requests (not IPv4 mapped ones). See “Configuring Protocols for Sockets-Based Services,” below.

      rpc_low_version

      Lowest supported RPC version. Required when isrpc is set to true.

      rpc_high_version

      Highest supported RPC version. Required when isrpc is set to true.

      tcp_trace

      If true, and this is a nowait-type service, inetd logs the client's IP address and TCP port number, along with the name of the service, for each incoming connection, using the syslog(3C) facility. inetd uses the syslog facility code daemon and notice priority level. See syslog.conf(4) for a description of syslog codes and severity levels. This logging is separate from the logging done by the TCP wrappers facility.

      tcp_trace is equivalent to the previous inetd's -t option (and the /etc/default/inetd property ENABLE_CONNECTION_LOGGING).

      tcp_wrappers

      If true, enable TCP wrappers access control. This applies only to services with endpoint_type set to stream and wait set to false. The syslog facility code daemon is used to log allowed connections (using the notice severity level) and denied traffic (using the warning severity level). See syslog.conf(4) for a description of syslog codes and severity levels. The stability level of the TCP wrappers facility and its configuration files is External. As the TCP wrappers facility is not controlled by Sun, intra-release incompatibilities are not uncommon. See attributes(5).

      For more information about configuring TCP wrappers, you can refer to the tcpd(1M) and hosts_access(4) man pages, which are delivered as part of the Solaris operating system at /usr/sfw/man. These pages are not part of the standard Solaris man pages, available at /usr/man.

      tcp_wrappers is equivalent to the previous inetd's /etc/default/inetd property ENABLE_TCPWRAPPERS.

      wait

      If true this is a wait-type service, otherwise it is a nowait-type service. A wait-type service has the following characteristics:

      • Its inetd_start method will take over listening duties on the service's bound endpoint when it is executed.

      • inetd will wait for it to exit after it is executed before it resumes listening duties.

      Datagram servers must be configured as being of type wait, as they are always invoked with the original datagram endpoint that will participate in delivering the service bound to the specified service. They do not have separate “listening” and “accepting” sockets. Connection-oriented services, such as TCP stream services, can be designed to be either of type wait or nowait.

      A number of the basic properties are optional for a service. In their absence, their values are taken from the set of default values present in the defaults property group in the inetd service. These properties, with their seed values, are listed below. Note that these values are configurable through inetadm(1M).

      bind_fail_interval  -1
      bind_fail_max       -1
      con_rate_offline    -1
      connection_backlog  10
      failrate_cnt        40
      failrate_interval   60
      inherit_env         true
      max_con_rate        -1
      max_copies          -1
      tcp_trace           false
      tcp_wrappers        false
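These seed values, and per-service overrides, are viewed and modified with inetadm(1M). A hedged sketch; the service FMRI and property values are illustrative assumptions:

```shell
# List the inetd-wide default property values:
inetadm -p
# Raise a default for every service that does not override it:
inetadm -M connection_backlog=20
# Override a property for a single service:
inetadm -m svc:/network/telnet:default tcp_trace=true
```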

      Each method specified for a service will have its configuration stored in the SMF repository, within a property group of the same name as the method. The set of properties allowable for these methods includes those specified for the services managed by svc.startd(1M). (See svc.startd(1M) for further details.) Additionally, for the inetd_start method, you can set the arg0 property.

      The arg0 property allows external wrapper programs to be used with inetd services. Specifically, it allows the first argument, argv[0], of the service's start method to be something other than the path of the server program.

      In the case where you want to use an external wrapper program and pass arguments to the service's daemon, the arguments should be incorporated as arguments to the wrapper program in the exec property. For example:

      exec='/path/to/wrapper/prog service_daemon_args'
      arg0='/path/to/service/daemon'

      In addition to the special method tokens mentioned in smf_method(5), inetd also supports the :kill_process token for wait-type services. This results in behavior identical to that if the :kill token were supplied, except that the kill signal is sent only to the parent process of the wait-type service's start method, not to all members of its encompassing process contract (see process(4)).

    Configuring Protocols for Sockets-Based Services

      When configuring inetd for a sockets-based service, you have the choice, depending on what is supported by the service, of the alternatives described under the proto property, above. The following are guidelines for which proto values to use:

      • For a service that supports only IPv4: tcp and udp

      • For a service that supports only IPv6: tcp6only and udp6only

      • For a service that supports both IPv4 and IPv6:

        • Obsolete and not recommended: tcp6 and udp6

        • Recommended: use two separate entries that differ only in the proto field: one entry with tcp and the other with tcp6only (or, for datagram services, udp and udp6only, respectively).

      See EXAMPLES for an example of a configuration of a service that supports both IPv4 and IPv6.

    inetd Methods

      inetd provides the methods listed below for consumption by the master restarter, svc.startd(1M).

      start

      Causes inetd to start providing service. This results in inetd beginning to handle smf requests for its managed services and network requests for those services that are in either the online or degraded state.

      In addition, inetd checks whether the inetd.conf(4)-format configuration file it is monitoring has changed since the last inetconv(1M) conversion was carried out. If it has, a message telling the administrator to re-run inetconv to apply the changes is logged in syslog.

      stop

      Causes inetd to stop providing service. At this point, inetd transitions each of its services that are not in either the maintenance or disabled states to the offline state, running any appropriate methods in the process.

      refresh

      Results in a refresh being performed for each of its managed services and the inetd.conf(4) format configuration file being checked for change, as in the start method. When a service is refreshed, its behavior depends on its current state:

      • if it is in the maintenance or disabled state, no action is performed, because the configuration will be read and consumed when the service leaves that state;

      • if it is in the offline state, the configuration will be read and any changes consumed immediately;

      • if it is in the online or degraded state and the configuration has changed such that a re-binding is necessary to conform to it, then the service will be transitioned to the offline state and back again, using the new configuration for the bind;

      • if it is in the online state and a re-binding is not necessary, then the inetd_refresh method of the service, if provided, will be run to allow online wait-type services to consume any other changes.

Options

    No options are supported.

Operands

    configuration-file

    Specifies an alternate location for the legacy service file (inetd.conf(4)).

    start|stop|refresh

    Specifies which of inetd's methods should be run.

Examples


    Example 1 Configuring a Service that Supports Both IPv4 and IPv6

    The following commands show the instances of a service that supports both IPv4 and IPv6 and assign the appropriate proto property to each instance.


    example# svcs -a | grep mysvc
    online         15:48:29 svc:/network/rpc/mysvc:dgram4
    online         15:48:29 svc:/network/rpc/mysvc:dgram6
    online         15:51:47 svc:/network/rpc/mysvc:stream4
    online         15:52:10 svc:/network/rpc/mysvc:stream6

    # inetadm -M network/rpc/mysvc:dgram4 proto=udp
    # inetadm -M network/rpc/mysvc:dgram6 proto=udp6only
    # inetadm -M network/rpc/mysvc:stream4 proto=tcp
    # inetadm -M network/rpc/mysvc:stream6 proto=tcp6only
    

    See svcs(1) and inetadm(1M) for descriptions of those commands.


Attributes

    See attributes(5) for descriptions of the following attributes:

    ATTRIBUTE TYPE

    ATTRIBUTE VALUE

    Availability

    SUNWcsu

    Interface Stability

    Evolving

See Also

Notes

    The inetd daemon performs the same function as, but is implemented significantly differently from, the daemon of the same name in Solaris 9 and prior Solaris operating system releases. In the current Solaris release, inetd is part of the Solaris Service Management Facility (see smf(5)) and will run only within that facility.

    The /etc/default/inetd file has been deprecated. The functionality represented by the properties ENABLE_CONNECTION_LOGGING and ENABLE_TCP_WRAPPERS is now available as the tcp_trace and tcp_wrappers properties, respectively. These properties are described above, under “Service Properties”.


2010-09-05 00:30:16

Name

    syslogd– log system messages

Synopsis

    /usr/sbin/syslogd [-d] [-f configfile] [-m markinterval] 
         [-p path] [-t | -T]

Description

    syslogd reads and forwards system messages to the appropriate log files or users, depending upon the priority of a message and the system facility from which it originates. The configuration file /etc/syslog.conf (see syslog.conf(4)) controls where messages are forwarded. syslogd logs a mark (timestamp) message every markinterval minutes (default 20) at priority LOG_INFO to the facility whose name is given as mark in the syslog.conf file.

    A system message consists of a single line of text, which may be prefixed with a priority code number enclosed in angle-brackets (< >); priorities are defined in <sys/syslog.h>.
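
    For instance, the numeric priority code factors as facility * 8 + severity; a minimal sh sketch (the code 34 is an illustrative value, decoding per <sys/syslog.h> to facility 4, LOG_AUTH, and severity 2, LOG_CRIT):

    ```shell
    # Decode a syslog priority code: code = facility * 8 + severity.
    # A message prefixed with <34> has facility 4 (auth), severity 2 (crit).
    code=34
    facility=$((code / 8))
    severity=$((code % 8))
    echo "facility=$facility severity=$severity"
    ```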

    syslogd reads from the STREAMS log driver, /dev/log, and from any transport provider specified in /etc/netconfig, /etc/net/transport/hosts, and /etc/net/transport/services.

    syslogd reads the configuration file when it starts up, and again whenever it receives a HUP signal (see signal.h(3HEAD)), at which time it also closes all files it has open, re-reads its configuration file, and then opens only the log files that are listed in that file. syslogd exits when it receives a TERM signal.

    As it starts up, syslogd creates the file /var/run/syslog.pid, if possible, containing its process identifier (PID).

    If message ID generation is enabled (see log(7D)), each message will be preceded by an identifier in the following format: [ID msgid facility.priority]. msgid is the message's numeric identifier described in msgid(1M). facility and priority are described in syslog.conf(4). [ID 123456 kern.notice] is an example of an identifier when message ID generation is enabled.

    If the message originated in a loadable kernel module or driver, the kernel module's name (for example, ufs) will be displayed instead of unix. See EXAMPLES for sample output from syslogd with and without message ID generation enabled.

    In an effort to reduce visual clutter, message IDs are not displayed when writing to the console; message IDs are only written to the log file. See Examples.
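
    As a sketch, the "[ID msgid facility.priority]" identifier can be pulled out of a logged line with sed; the sample line here matches the Example 2 output shown under EXAMPLES:

    ```shell
    # Extract the numeric message ID and the facility.priority pair from a
    # syslogd log-file line that has message ID generation enabled.
    line='Sep 29 21:41:18 cathy ufs: [ID 845546 kern.notice] alloc /: file system full'
    msgid=$(printf '%s\n' "$line" | sed -n 's/.*\[ID \([0-9]*\) \([a-z0-9.]*\)\].*/\1/p')
    facpri=$(printf '%s\n' "$line" | sed -n 's/.*\[ID \([0-9]*\) \([a-z0-9.]*\)\].*/\2/p')
    echo "msgid=$msgid facility.priority=$facpri"
    ```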

    The /etc/default/syslogd file contains the following default parameter settings, which are in effect if neither the -t nor -T option is selected. See FILES.

    The recommended way to allow or disallow message logging is through the use of the service management facility (smf(5)) property:

    svc:/system/system-log/config/log_from_remote

    This property specifies whether remote messages are logged. log_from_remote=true is equivalent to the -T command-line option and false is equivalent to the -t command-line option. The default value for log_from_remote is false. See NOTES, below.

    LOG_FROM_REMOTE

    Specifies whether remote messages are logged. LOG_FROM_REMOTE=NO is equivalent to the -t command-line option. The default value for LOG_FROM_REMOTE is YES.

Options

    The following options are supported:

    -d

    Turn on debugging. This option should only be used interactively in a root shell once the system is in multi-user mode. It should not be used in the system start-up scripts, as this will cause the system to hang at the point where syslogd is started.

    -f configfile

    Specify an alternate configuration file.

    -m markinterval

    Specify an interval, in minutes, between mark messages.

    -p path

    Specify an alternative log device name. The default is /dev/log.

    -T

    Enable the syslogd UDP port to turn on logging of remote messages. This is the default behavior. See Files.

    -t

    Disable the syslogd UDP port to turn off logging of remote messages. See Files.

Examples


    Example 1 syslogd Output Without Message ID Generation Enabled

    The following example shows the output from syslogd when message ID generation is not enabled:


    Sep 29 21:41:18 cathy unix: alloc /: file system full


    Example 2 syslogd Output with ID Generation Enabled

    The following example shows the output from syslogd when message ID generation is enabled. The message ID is displayed when writing to the log file /var/adm/messages.


    Sep 29 21:41:18 cathy ufs: [ID 845546 kern.notice] alloc /: file system full


    Example 3 syslogd Output with ID Generation Enabled

    The following example shows the output from syslogd when message ID generation is enabled when writing to the console. Even though message ID is enabled, the message ID is not displayed at the console.


    Sep 29 21:41:18 cathy ufs: alloc /: file system full


    Example 4 Enabling Acceptance of UDP Messages from Remote Systems

    The following commands enable syslogd to accept entries from remote systems.


    # svccfg -s svc:/system/system-log setprop config/log_from_remote = true
    # svcadm restart svc:/system/system-log
    

Files

    /etc/syslog.conf

    Configuration file

    /var/run/syslog.pid

    Process ID

    /etc/default/syslogd

    Contains default settings. You can override some of the settings by command-line options.

    /dev/log

    STREAMS log driver

    /etc/netconfig

    Transport providers available on the system

    /etc/net/transport/hosts

    Network hosts for each transport

    /etc/net/transport/services

    Network services for each transport

Attributes

    See attributes(5) for descriptions of the following attributes:

    ATTRIBUTE TYPE

    ATTRIBUTE VALUE

    Availability

    SUNWcsu

See Also

Notes

    The mark message is a system time stamp, and so it is only defined for the system on which syslogd is running. It cannot be forwarded to other systems.

    When syslogd receives a HUP signal, it attempts to complete outputting pending messages, and close all log files to which it is currently logging messages. If, for some reason, one (or more) of these files does not close within a generous grace period, syslogd discards the pending messages, forcibly closes these files, and starts reconfiguration. If this shutdown procedure is disturbed by an unexpected error and syslogd cannot complete reconfiguration, syslogd sends a mail message to the superuser on the current system stating that it has shut down, and exits.

    Care should be taken to ensure that each window displaying messages forwarded by syslogd (especially console windows) is run in the system default locale (which is syslogd's locale). If this advice is not followed, it is possible for a syslog message to alter the terminal settings for that window, possibly even allowing remote execution of arbitrary commands from that window.

    The syslogd service is managed by the service management facility, smf(5), under the service identifier:


     svc:/system/system-log:default

    Administrative actions on this service, such as enabling, disabling, or requesting restart, can be performed using svcadm(1M). The service's status can be queried using the svcs(1) command.

    When syslogd is started by means of svcadm(1M), if a value is specified for LOG_FROM_REMOTE in the /etc/default/syslogd file, the SMF property svc:/system/system-log/config/log_from_remote is set to correspond to the LOG_FROM_REMOTE value and the /etc/default/syslogd file is modified to replace the LOG_FROM_REMOTE specification with the following comment:

    # LOG_FROM_REMOTE is now set using svccfg(1m), see syslogd(1m).

    If neither LOG_FROM_REMOTE nor svc:/system/system-log/config/log_from_remote are defined, the default is to log remote messages.

    On installation, the initial value of svc:/system/system-log/config/log_from_remote is false.


2010-09-05 00:29:04

Name

    mdlogd– Solaris Volume Manager daemon

Synopsis

    mdlogd 
    

Description

    mdlogd implements a simple daemon that watches the system console looking for messages written by the Solaris Volume Manager. When a Solaris Volume Manager message is detected, mdlogd sends a generic SNMP trap.

    To enable traps, you must configure mdlogd into the SNMP framework. See Solaris Volume Manager Administration Guide.

Usage

    mdlogd implements the following SNMP MIB:

    SOLARIS-VOLUME-MGR-MIB DEFINITIONS ::= BEGIN
            IMPORTS
                     enterprises FROM RFC1155-SMI
                     DisplayString FROM SNMPv2-TC;
    
            -- Sun Private MIB for Solaris Volume Manager
    
    
            sun       OBJECT IDENTIFIER ::= { enterprises 42 }
            sunSVM       OBJECT IDENTIFIER ::= { sun 104 }
    
            -- this is actually just the string from /dev/log that
            -- matches the md: regular expressions.
            -- This is an interim SNMP trap generator to provide
            -- information until a more complete version is available.
    
            -- this definition is a formalization of the old
            -- Solaris DiskSuite mdlogd trap mib.
    
            svmOldTrapString OBJECT-TYPE
                            SYNTAX DisplayString (SIZE (0..255))
                            ACCESS read-only
                            STATUS mandatory
                            DESCRIPTION
                            "This is the matched string that
                             was obtained from /dev/log."
            ::= { sunSVM 1 }
    
            -- SVM Compatibility ( error trap )
    
            svmNoticeTrap   TRAP-TYPE
                            ENTERPRISE sunSVM
                            VARIABLES { svmOldTrapString }
                            DESCRIPTION
                                    "SVM error log trap for NOTICE.
                                     This matches 'NOTICE: md:'"
            ::= 1
    
            svmWarningTrap  TRAP-TYPE
                            ENTERPRISE sunSVM
                            VARIABLES { svmOldTrapString }
                            DESCRIPTION
                                    "SVM error log trap for WARNING.
                                     This matches 'WARNING: md:'"
            ::= 2
    
            svmPanicTrap    TRAP-TYPE
                            ENTERPRISE sunSVM
                            VARIABLES { svmOldTrapString }
                            DESCRIPTION
                                    "SVM error log trap for PANIC.
                                    This matches 'PANIC: md:'"
            ::= 3
    END
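
    As an illustrative sketch, the mapping from console-message prefix to the trap numbers defined by this MIB can be expressed in sh; the message text below is hypothetical, not mdlogd output:

    ```shell
    # Map an md: console message to the SNMP trap number defined in the
    # SOLARIS-VOLUME-MGR-MIB above. The sample message text is hypothetical.
    msg='WARNING: md: d0: needs maintenance'
    case "$msg" in
        "NOTICE: md:"*)  trap_num=1 ;;  # svmNoticeTrap
        "WARNING: md:"*) trap_num=2 ;;  # svmWarningTrap
        "PANIC: md:"*)   trap_num=3 ;;  # svmPanicTrap
        *)               trap_num=0 ;;  # not a Solaris Volume Manager message
    esac
    echo "trap=$trap_num"
    ```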

Attributes

    See attributes(5) for descriptions of the following attributes:

    ATTRIBUTE TYPE

    ATTRIBUTE VALUE

    Availability

    SUNWlvma, SUNWlvmr

    Interface Stability

    Obsolete

See Also