2010-09-05 00:34:26

Name

    zonecfg – set up zone configuration

Synopsis

    zonecfg -z zonename
    
    zonecfg -z zonename subcommand
    
    zonecfg -z zonename -f command_file
    
    zonecfg help

Description

    The zonecfg utility creates and modifies the configuration of a zone. Zone configuration consists of a number of resources and properties.

    To simplify the user interface, zonecfg uses the concept of a scope. The default scope is global.

    The following synopsis of the zonecfg command is for interactive usage:


    zonecfg -z zonename subcommand
    

    Parameters changed through zonecfg do not affect a running zone. The zone must be rebooted for the changes to take effect.

    In addition to creating and modifying a zone, the zonecfg utility can also be used to persistently specify the resource management settings for the global zone.

    In the following text, “rctl” is used as an abbreviation for “resource control”. See resource_controls(5).

    Types of Non-Global Zones

      In the administration of zones, it is useful to distinguish between the global zone and non-global zones. Within non-global zones, there are two types of zone root file system models: sparse and whole root. The sparse root zone model optimizes the sharing of objects. The whole root zone model provides the maximum configurability.

      Sparse Root Zones

        Non-global zones that have inherit-pkg-dir resources (described under “Resources”, below) are called sparse root zones.

        The sparse root zone model optimizes the sharing of objects in the following ways:

        • Only a subset of the packages installed in the global zone are installed directly into the non-global zone.

        • Read-only loopback file systems, identified as inherit-pkg-dir resources, are used to gain access to other files.

        In this model, all packages appear to be installed in the non-global zone. Packages that do not deliver content into read-only loopback mount file systems are fully installed. There is no need to install content delivered into read-only loopback mounted file systems since that content is inherited (and visible) from the global zone.

        • As a general guideline, each zone requires about 100 megabytes of free disk space when the global zone has been installed with all of the standard Solaris packages.

        • By default, any additional packages installed in the global zone also populate the non-global zones. The amount of disk space required might be increased accordingly, depending on whether the additional packages deliver files that reside in the inherit-pkg-dir resource space.

        An additional 40 megabytes of RAM per zone are suggested, but not required on a machine with sufficient swap space.

        A sparse zone inherits the following directories:


        /lib
        /platform
        /sbin
        /usr

        Although zonecfg allows you to remove one of these as an inherited directory, you should not do so. You should either follow the whole-root model or the sparse model; a subset of the sparse model is not tested and you might encounter unexpected problems.

        Adding an additional inherit-pkg-dir directory, such as /opt, to a sparse root zone is acceptable.
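
        For instance, a minimal zonecfg sketch that adds /opt as an extra inherited directory (the zone name myzone is hypothetical):


        zonecfg:myzone> add inherit-pkg-dir
        zonecfg:myzone:inherit-pkg-dir> set dir=/opt
        zonecfg:myzone:inherit-pkg-dir> end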

      Whole Root Zones

        The whole root zone model provides the maximum configurability. All of the required and any selected optional Solaris packages are installed into the private file systems of the zone. The advantages of this model include the capability for global administrators to customize their zones' file system layout. This would be done, for example, to add arbitrary unbundled or third-party packages.

        The disk requirements for this model are determined by the disk space used by the packages currently installed in the global zone.


        Note –

        If you create a sparse root zone that contains the following inherit-pkg-dir directories, you must remove these directories from the non-global zone's configuration before the zone is installed in order to have a whole root zone (a sketch follows this list):

        • /lib

        • /platform

        • /sbin

        • /usr
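
        A minimal sketch of that removal, run before the zone is installed (the zone name and zonepath are hypothetical; "create -b", which starts from a blank configuration, reaches the same end state):


        example# zonecfg -z wholezone
        zonecfg:wholezone> create
        zonecfg:wholezone> set zonepath=/export/zones/wholezone
        zonecfg:wholezone> remove inherit-pkg-dir dir=/lib
        zonecfg:wholezone> remove inherit-pkg-dir dir=/platform
        zonecfg:wholezone> remove inherit-pkg-dir dir=/sbin
        zonecfg:wholezone> remove inherit-pkg-dir dir=/usr
        zonecfg:wholezone> commit
        zonecfg:wholezone> exit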


    Resources

      The following resource types are supported:

      attr

      Generic attribute.

      capped-cpu

      Limits for CPU usage.

      capped-memory

      Limits for physical, swap, and locked memory.

      dataset

      ZFS dataset.

      dedicated-cpu

      Subset of the system's processors dedicated to this zone while it is running.

      device

      Device.

      fs

      File system.

      inherit-pkg-dir

      Directory inherited from the global zone. Software packages whose contents have been transferred into that directory are inherited in read-only mode by the non-global zone and the non-global zone's packaging database is updated to reflect those packages. Such resources are not modifiable or removable once a zone has been installed with zoneadm.

      net

      Network interface.

      rctl

      Resource control.

    Properties

      Each resource type has one or more properties. There are also some global properties, that is, properties of the configuration as a whole, rather than of some particular resource.

      The following properties are supported:

      (global)          zonename
      (global)          zonepath
      (global)          autoboot
      (global)          bootargs
      (global)          pool
      (global)          limitpriv
      (global)          brand
      (global)          ip-type
      (global)          cpu-shares
      (global)          max-lwps
      (global)          max-msg-ids
      (global)          max-sem-ids
      (global)          max-shm-ids
      (global)          max-shm-memory
      (global)          scheduling-class
      fs                dir, special, raw, type, options
      inherit-pkg-dir   dir
      net               address, physical, defrouter
      device            match
      rctl              name, value
      attr              name, type, value
      dataset           name
      dedicated-cpu     ncpus, importance
      capped-memory     physical, swap, locked
      capped-cpu        ncpus

      The property values paired with these names are either simple, complex, or lists. The type allowed is property-specific. Simple values are strings, optionally enclosed within quotation marks. Complex values have the syntax:


      (<name>=<value>,<name>=<value>,...)

      where each <value> is simple, and the <name> strings are unique within a given property. Lists have the syntax:


      [<value>,...]

      where each <value> is either simple or complex. A list of a single value (either simple or complex) is equivalent to specifying that value without the list syntax. That is, “foo” is equivalent to “[foo]”. A list can be empty (denoted by “[]”).
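
      Both forms appear in the EXAMPLES below: the fs resource's options property takes a list of simple values, while the rctl resource's value property takes a complex value (the prompts here are illustrative):


      zonecfg:myzone:fs> add options [ro,nodevices]
      zonecfg:myzone:rctl> add value (priv=privileged,limit=100,action=deny)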

      In interpreting property values, zonecfg accepts regular expressions as specified in fnmatch(5). See EXAMPLES.

      The property types are described as follows:

      global: zonename

      The name of the zone.

      global: zonepath

      Path to zone's file system.

      global: autoboot

      Boolean indicating that a zone should be booted automatically at system boot. Note that if the zones service is disabled, the zone will not autoboot, regardless of the setting of this property. You enable the zones service with a svcadm command, such as:


      # svcadm enable svc:/system/zones:default
      

      Replace enable with disable to disable the zones service. See svcadm(1M).

      global: bootargs

      Arguments (options) to be passed to the zone bootup, unless options are supplied to the “zoneadm boot” command, in which case those take precedence. The valid arguments are described in zoneadm(1M).

      global: pool

      Name of the resource pool that this zone must be bound to when booted. This property is incompatible with the dedicated-cpu resource.

      global: limitpriv

      The maximum set of privileges any process in this zone can obtain. The property should consist of a comma-separated privilege set specification as described in priv_str_to_set(3C). Privileges can be excluded from the resulting set by preceding their names with a dash (-) or an exclamation point (!). The special privilege string “zone” is not supported in this context. If the special string “default” occurs as the first token in the property, it expands into a safe set of privileges that preserve the resource and security isolation described in zones(5). A missing or empty property is equivalent to this same set of safe privileges.
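
      For example, a hypothetical zone could start from the safe default set and exclude a single privilege:


      zonecfg:myzone> set limitpriv="default,!sys_time"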

      The system administrator must take extreme care when configuring privileges for a zone. Some privileges cannot be excluded through this mechanism as they are required in order to boot a zone. In addition, there are certain privileges which cannot be given to a zone as doing so would allow processes inside a zone to unduly affect processes in other zones. zoneadm(1M) indicates when an invalid privilege has been added or removed from a zone's privilege set when an attempt is made to either “boot” or “ready” the zone.

      See privileges(5) for a description of privileges. The command “ppriv -l” (see ppriv(1)) produces a list of all Solaris privileges. You can specify privileges as they are displayed by ppriv. In privileges(5), privileges are listed in the form PRIV_privilege_name. For example, the privilege sys_time, as you would specify it in this property, is listed in privileges(5) as PRIV_SYS_TIME.

      global: brand

      The zone's brand type. A zone that is not assigned a brand is considered a “native” zone.

      global: ip-type

      A zone can either share the IP instance with the global zone, which is the default, or have its own exclusive instance of IP.

      This property takes the values shared and exclusive.

      fs: dir, special, raw, type, options

      Values needed to determine how, where, and so forth to mount file systems. See mount(1M), mount(2), fsck(1M), and vfstab(4).

      inherit-pkg-dir: dir

      The directory path.

      net: address, physical, defrouter

      The network address and physical interface name of the network interface. The network address is one of:

      • a valid IPv4 address, optionally followed by “/” and a prefix length;

      • a valid IPv6 address, which must be followed by “/” and a prefix length;

      • a host name which resolves to an IPv4 address.

      Note that host names that resolve to IPv6 addresses are not supported.

      The physical interface name is the network interface name.

      The default router is specified similarly to the network address except that it must not be followed by a / (slash) and a network prefix length.

      A zone can be configured to be either exclusive-IP or shared-IP. For a shared-IP zone, you must set both the physical and address properties; setting the default router is optional. The interface specified in the physical property must be plumbed in the global zone prior to booting the non-global zone. However, if the interface is not used by the global zone, it should be configured down in the global zone, and the default router for the interface should be specified here.

      For an exclusive-IP zone, the physical property must be set and the address and default router properties cannot be set.

      device: match

      Device name to match.

      rctl: name, value

      The name and priv/limit/action triple of a resource control. See prctl(1) and rctladm(1M). The preferred way to set rctl values is to use the global property name associated with a specific rctl.
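
      As a sketch, here are two equivalent ways to cap a hypothetical zone at 500 LWPs; the first form, using the global property, is the preferred one:


      zonecfg:myzone> set max-lwps=500

      zonecfg:myzone> add rctl
      zonecfg:myzone:rctl> set name=zone.max-lwps
      zonecfg:myzone:rctl> add value (priv=privileged,limit=500,action=deny)
      zonecfg:myzone:rctl> end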

      attr: name, type, value

      The name, type and value of a generic attribute. The type must be one of int, uint, boolean, or string, and the value must be of that type. uint means unsigned, that is, a non-negative integer.

      dataset: name

      The name of a ZFS dataset to be accessed from within the zone. See zfs(1M).
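
      A sketch of delegating a hypothetical ZFS dataset to the zone:


      zonecfg:myzone> add dataset
      zonecfg:myzone:dataset> set name=tank/zones/myzone
      zonecfg:myzone:dataset> end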

      global: cpu-shares

      The number of Fair Share Scheduler (FSS) shares to allocate to this zone. This property is incompatible with the dedicated-cpu resource. This property is the preferred way to set the zone.cpu-shares rctl.

      global: max-lwps

      The maximum number of LWPs simultaneously available to this zone. This property is the preferred way to set the zone.max-lwps rctl.

      global: max-msg-ids

      The maximum number of message queue IDs allowed for this zone. This property is the preferred way to set the zone.max-msg-ids rctl.

      global: max-sem-ids

      The maximum number of semaphore IDs allowed for this zone. This property is the preferred way to set the zone.max-sem-ids rctl.

      global: max-shm-ids

      The maximum number of shared memory IDs allowed for this zone. This property is the preferred way to set the zone.max-shm-ids rctl.

      global: max-shm-memory

      The maximum amount of shared memory allowed for this zone. This property is the preferred way to set the zone.max-shm-memory rctl. A scale (K, M, G, T) can be applied to the value for this number (for example, 1M is one megabyte).

      global: scheduling-class

      Specifies the scheduling class used for processes running in a zone. When this property is not specified, the scheduling class is established as follows (a sketch of setting the class explicitly follows this list):

      • If the cpu-shares property or equivalent rctl is set, the scheduling class FSS is used.

      • If neither cpu-shares nor the equivalent rctl is set and the zone's pool property references a pool that has a default scheduling class, that class is used.

      • Under any other conditions, the system default scheduling class is used.
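
      To pin the class rather than rely on the defaults above (hypothetical zone):


      zonecfg:myzone> set scheduling-class=FSS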

      dedicated-cpu: ncpus, importance

      The number of CPUs that should be assigned for this zone's exclusive use. The zone will create a pool and processor set when it boots. See pooladm(1M) and poolcfg(1M) for more information on resource pools. The ncpus property can specify a single value or a range (for example, 1-4) of processors. The importance property is optional; if set, it will specify the pset.importance value for use by poold(1M). If this resource is used, there must be enough free processors to allocate to this zone when it boots or the zone will not boot. The processors assigned to this zone will not be available for the use of the global zone or other zones. This resource is incompatible with both the pool and cpu-shares properties. Only a single instance of this resource can be added to the zone.
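
      A sketch that reserves between one and four processors for a hypothetical zone, with an illustrative importance value:


      zonecfg:myzone> add dedicated-cpu
      zonecfg:myzone:dedicated-cpu> set ncpus=1-4
      zonecfg:myzone:dedicated-cpu> set importance=10
      zonecfg:myzone:dedicated-cpu> end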

      capped-memory: physical, swap, locked

      The caps on the memory that can be used by this zone. A scale (K, M, G, T) can be applied to the value for each of these numbers (for example, 1M is one megabyte). Each of these properties is optional but at least one property must be set when adding this resource. Only a single instance of this resource can be added to the zone. The physical property sets the max-rss for this zone. This will be enforced by rcapd(1M) running in the global zone. The swap property is the preferred way to set the zone.max-swap rctl. The locked property is the preferred way to set the zone.max-locked-memory rctl.

      capped-cpu: ncpus

      Sets a limit on the amount of CPU time that can be used by a zone. The unit used translates to the percentage of a single CPU that can be used by all user threads in a zone, expressed as a fraction (for example, .75) or a mixed number (whole number and fraction, for example, 1.25). An ncpus value of 1 means 100% of a CPU, a value of 1.25 means 125%, .75 means 75%, and so forth. When projects within a capped zone have their own caps, the minimum value takes precedence.

      The capped-cpu resource is an alias for the zone.cpu-cap resource control. See resource_controls(5).

      The following table summarizes resources, property-names, and types:


      resource          property-name     type
      (global)          zonename          simple
      (global)          zonepath          simple
      (global)          autoboot          simple
      (global)          bootargs          simple
      (global)          pool              simple
      (global)          limitpriv         simple
      (global)          brand             simple
      (global)          ip-type           simple
      (global)          cpu-shares        simple
      (global)          max-lwps          simple
      (global)          max-msg-ids       simple
      (global)          max-sem-ids       simple
      (global)          max-shm-ids       simple
      (global)          max-shm-memory    simple
      (global)          scheduling-class  simple
      fs                dir               simple
                        special           simple
                        raw               simple
                        type              simple
                        options           list of simple
      inherit-pkg-dir   dir               simple
      net               address           simple
                        physical          simple
                        defrouter         simple
      device            match             simple
      rctl              name              simple
                        value             list of complex
      attr              name              simple
                        type              simple
                        value             simple
      dataset           name              simple
      dedicated-cpu     ncpus             simple or range
                        importance        simple
      capped-memory     physical          simple with scale
                        swap              simple with scale
                        locked            simple with scale
      capped-cpu        ncpus             simple
      To be more specific, the complex “value” property of the “rctl” resource type consists of three name/value pairs, the names being “priv”, “limit”, and “action”, each of which takes a simple value. The “name” property of an “attr” resource is syntactically restricted in a fashion similar but not identical to zone names: it must begin with an alphanumeric, and can contain alphanumerics plus the hyphen (-), underscore (_), and dot (.) characters. Attribute names beginning with “zone” are reserved for use by the system. Finally, the “autoboot” global property must have a value of “true” or “false”.
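
      For example, a generic attribute recording a hypothetical comment:


      zonecfg:myzone> add attr
      zonecfg:myzone:attr> set name=comment
      zonecfg:myzone:attr> set type=string
      zonecfg:myzone:attr> set value="application zone for the web tier"
      zonecfg:myzone:attr> end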

    Using Kernel Statistics to Monitor CPU Caps

      Using the kernel statistics (kstat(3KSTAT)) module caps, the system maintains information for all capped projects and zones. You can access this information by reading kernel statistics (kstat(3KSTAT)), specifying caps as the kstat module name. The following command displays kernel statistics for all active CPU caps:


      # kstat caps::'/cpucaps/'
      

      A kstat(1M) command running in a zone displays only CPU caps relevant for that zone and for projects in that zone. See EXAMPLES.

      The following are cap-related arguments for use with kstat(1M):

      caps

      The kstat module.

      project_caps or zone_caps

      kstat class, for use with the kstat -c option.

      cpucaps_project_id or cpucaps_zone_id

      kstat name, for use with the kstat -n option. id is the project or zone identifier.
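
      For instance, combining these arguments (the zone ID 1 is hypothetical):


      # kstat -m caps -c zone_caps
      # kstat -m caps -n cpucaps_zone_1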

      The following fields are displayed in response to a kstat(1M) command requesting statistics for all CPU caps.

      module

      In this usage of kstat, this field will have the value caps.

      name

      As described above, cpucaps_project_id or cpucaps_zone_id.

      above_sec

      Total time, in seconds, spent above the cap.

      below_sec

      Total time, in seconds, spent below the cap.

      maxusage

      Maximum observed CPU usage.

      nwait

      Number of threads on cap wait queue.

      usage

      Current aggregated CPU usage for all threads belonging to a capped project or zone, in terms of a percentage of a single CPU.

      value

      The cap value, in terms of a percentage of a single CPU.

      zonename

      Name of the zone for which statistics are displayed.

      See EXAMPLES for sample output from a kstat command.

Options

    The following options are supported:

    -f command_file

    Specify the name of a zonecfg command file. command_file is a text file of zonecfg subcommands, one per line.

    -z zonename

    Specify the name of a zone. Zone names are case sensitive. Zone names must begin with an alphanumeric character and can contain alphanumeric characters, the underscore (_), the hyphen (-), and the dot (.). The name global and all names beginning with SUNW are reserved and cannot be used.

Subcommands

    You can use the add and select subcommands to select a specific resource, at which point the scope changes to that resource. The end and cancel subcommands are used to complete the resource specification, at which time the scope is reverted back to global. Certain subcommands, such as add, remove and set, have different semantics in each scope.

    Subcommands which can result in destructive actions or loss of work have an -F option to force the action. If such a command is given without the -F option, the user is prompted when appropriate if input is from a terminal device; otherwise, the action is disallowed, with a diagnostic message written to standard error.

    The following subcommands are supported:

    add resource-type (global scope)
    add property-name property-value (resource scope)

    In the global scope, begin the specification for a given resource type. The scope is changed to that resource type.

    In the resource scope, add a property of the given name with the given value. The syntax for property values varies with different property types. In general, it is a simple value or a list of simple values enclosed in square brackets, separated by commas ([foo,bar,baz]). See PROPERTIES.

    cancel

    End the resource specification and reset scope to global. Abandons any partially specified resources. cancel is only applicable in the resource scope.

    clear property-name

    Clear the value for the property.

    commit

    Commit the current configuration from memory to stable storage. The configuration must be committed to be used by zoneadm. Until the in-memory configuration is committed, you can remove changes with the revert subcommand. The commit operation is attempted automatically upon completion of a zonecfg session. Since a configuration must be correct to be committed, this operation automatically does a verify.

    create [-F] [-a path | -b | -t template]

    Create an in-memory configuration for the specified zone. Use create to begin to configure a new zone. See commit for saving this to stable storage.

    If you are overwriting an existing configuration, specify the -F option to force the action. Specify the -t template option to create a configuration identical to template, where template is the name of a configured zone.

    Use the -a path option to facilitate configuring a detached zone on a new host. The path parameter is the zonepath location of a detached zone that has been moved on to this new host. Once the detached zone is configured, it should be installed using the “zoneadm attach” command (see zoneadm(1M)). All validation of the new zone happens during the attach process, not during zone configuration.
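
    A sketch of that flow on the new host (the path and zone name are hypothetical):


    example# zonecfg -z myzone
    zonecfg:myzone> create -a /export/zones/myzone
    zonecfg:myzone> commit
    zonecfg:myzone> exit
    example# zoneadm -z myzone attach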

    Use the -b option to create a blank configuration. Without arguments, create applies the Sun default settings.

    delete [-F]

    Delete the specified configuration from memory and stable storage. This action is instantaneous; no commit is necessary. A deleted configuration cannot be reverted.

    Specify the -F option to force the action.

    end

    End the resource specification. This subcommand is only applicable in the resource scope. zonecfg checks to make sure the current resource is completely specified. If so, it is added to the in-memory configuration (see commit for saving this to stable storage) and the scope reverts to global. If the specification is incomplete, it issues an appropriate error message.

    export [-f output-file]

    Print configuration to standard output. Use the -f option to print the configuration to output-file. This option produces output in a form suitable for use in a command file.
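
    A common use is cloning a configuration: export it, edit the file (at least the zonepath must differ), then replay it with the -f option (file names hypothetical):


    example# zonecfg -z myzone export -f /tmp/myzone.cfg
    example# vi /tmp/myzone.cfg
    example# zonecfg -z myzone2 -f /tmp/myzone.cfg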

    help [usage] [subcommand] [syntax] [command-name]

    Print general help or help about given topic.

    info zonename | zonepath | autoboot | brand | pool | limitpriv
    info [resource-type [property-name=property-value]*]

    Display information about the current configuration. If resource-type is specified, displays only information about resources of the relevant type. If any property-name value pairs are specified, displays only information about resources meeting the given criteria. In the resource scope, any arguments are ignored, and info displays information about the resource which is currently being added or modified.
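
    For example, in a hypothetical session, info alone prints the entire configuration, while a resource type or a name-value pair narrows the output:


    zonecfg:myzone> info
    zonecfg:myzone> info net
    zonecfg:myzone> info net physical=eri0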

    remove resource-type [property-name=property-value]* (global scope)

    In the global scope, removes the specified resource. The [] syntax means 0 or more of whatever is inside the square braces. If you want only to remove a single instance of the resource, you must specify enough property name-value pairs for the resource to be uniquely identified. If no property name-value pairs are specified, all instances will be removed. If more than one instance matches, a confirmation is required, unless you use the -F option.

    select resource-type {property-name=property-value}

    Select the resource of the given type which matches the given property-name property-value pair criteria, for modification. This subcommand is applicable only in the global scope. The scope is changed to that resource type. The {} syntax means 1 or more of whatever is inside the curly braces. You must specify enough property-name property-value pairs for the resource to be uniquely identified.
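
    For instance, to modify one of several net resources in a hypothetical configuration:


    zonecfg:myzone> select net address=192.168.0.1/24
    zonecfg:myzone:net> set physical=bge0
    zonecfg:myzone:net> end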

    set property-name=property-value

    Set a given property name to the given value. Some properties (for example, zonename and zonepath) are global while others are resource-specific. This subcommand is applicable in both the global and resource scopes.

    verify

    Verify the current configuration for correctness:

    • All resources have all of their required properties specified.

    • A zonepath is specified.

    revert [-F]

    Revert the configuration back to the last committed state. The -F option can be used to force the action.

    exit [-F]

    Exit the zonecfg session. A commit is automatically attempted if needed. You can also use an EOF character to exit zonecfg. The -F option can be used to force the action.

Examples


    Example 1 Creating the Environment for a New Zone

    In the following example, zonecfg creates the environment for a new zone. The global zone directory /opt/local is loopback mounted into the zone at /usr/local, /opt/sfw is loopback mounted from the global zone, three logical network interfaces are added, a limit on the number of fair-share scheduler (FSS) CPU shares for the zone is set with the cpu-shares global property, and a memory cap is added.


    example# zonecfg -z myzone3
    myzone3: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:myzone3> create
    zonecfg:myzone3> set zonepath=/export/home/myzone3
    zonecfg:myzone3> set autoboot=true
    zonecfg:myzone3> add fs
    zonecfg:myzone3:fs> set dir=/usr/local
    zonecfg:myzone3:fs> set special=/opt/local
    zonecfg:myzone3:fs> set type=lofs
    zonecfg:myzone3:fs> add options [ro,nodevices]
    zonecfg:myzone3:fs> end
    zonecfg:myzone3> add fs
    zonecfg:myzone3:fs> set dir=/mnt
    zonecfg:myzone3:fs> set special=/dev/dsk/c0t0d0s7
    zonecfg:myzone3:fs> set raw=/dev/rdsk/c0t0d0s7
    zonecfg:myzone3:fs> set type=ufs
    zonecfg:myzone3:fs> end
    zonecfg:myzone3> add inherit-pkg-dir
    zonecfg:myzone3:inherit-pkg-dir> set dir=/opt/sfw
    zonecfg:myzone3:inherit-pkg-dir> end
    zonecfg:myzone3> add net
    zonecfg:myzone3:net> set address=192.168.0.1/24
    zonecfg:myzone3:net> set physical=eri0
    zonecfg:myzone3:net> end
    zonecfg:myzone3> add net
    zonecfg:myzone3:net> set address=192.168.1.2/24
    zonecfg:myzone3:net> set physical=eri0
    zonecfg:myzone3:net> end
    zonecfg:myzone3> add net
    zonecfg:myzone3:net> set address=192.168.2.3/24
    zonecfg:myzone3:net> set physical=eri0
    zonecfg:myzone3:net> end
    zonecfg:myzone3> set cpu-shares=5
    zonecfg:myzone3> add capped-memory
    zonecfg:myzone3:capped-memory> set physical=50m
    zonecfg:myzone3:capped-memory> set swap=100m
    zonecfg:myzone3:capped-memory> end
    zonecfg:myzone3> exit
    


    Example 2 Creating a Non-Native Zone

    The following example creates a new Linux zone:


    example# zonecfg -z lxzone
    lxzone: No such zone configured
    Use 'create' to begin configuring a new zone
    zonecfg:lxzone> create -t SUNWlx
    zonecfg:lxzone> set zonepath=/export/zones/lxzone
    zonecfg:lxzone> set autoboot=true
    zonecfg:lxzone> exit
    


    Example 3 Creating an Exclusive-IP Zone

    The following example creates a zone that is granted exclusive access to bge1 and bge33000 and that is isolated at the IP layer from the other zones configured on the system.

    The IP addresses and routing are configured inside the new zone using sysidtool(1M).


    example# zonecfg -z excl
    excl: No such zone configured
    Use 'create' to begin configuring a new zone
    zonecfg:excl> create
    zonecfg:excl> set zonepath=/export/zones/excl
    zonecfg:excl> set ip-type=exclusive
    zonecfg:excl> add net
    zonecfg:excl:net> set physical=bge1
    zonecfg:excl:net> end
    zonecfg:excl> add net
    zonecfg:excl:net> set physical=bge33000
    zonecfg:excl:net> end
    zonecfg:excl> exit
    


    Example 4 Associating a Zone with a Resource Pool

    The following example shows how to associate an existing zone with an existing resource pool:


    example# zonecfg -z myzone
    zonecfg:myzone> set pool=mypool
    zonecfg:myzone> exit
    

    For more information about resource pools, see pooladm(1M) and poolcfg(1M).



    Example 5 Changing the Name of a Zone

    The following example shows how to change the name of an existing zone:


    example# zonecfg -z myzone
    zonecfg:myzone> set zonename=myzone2
    zonecfg:myzone2> exit
    


    Example 6 Changing the Privilege Set of a Zone

    The following example shows how to change the set of privileges an existing zone's processes will be limited to the next time the zone is booted. In this particular case, the privilege set will be the standard safe set of privileges a zone normally has along with the privilege to change the system date and time:


    example# zonecfg -z myzone
    zonecfg:myzone> set limitpriv="default,sys_time"
    zonecfg:myzone> exit
    


    Example 7 Setting the zone.cpu-shares Property for the Global Zone

    The following command sets the zone.cpu-shares property for the global zone:


    example# zonecfg -z global
    zonecfg:global> set cpu-shares=5
    zonecfg:global> exit
    


    Example 8 Using Pattern Matching

    The following commands illustrate zonecfg support for pattern matching. In the zone flexlm, enter:


    zonecfg:flexlm> add device
    zonecfg:flexlm:device> set match="/dev/cua/a00[2-5]"
    zonecfg:flexlm:device> end
    

    In the global zone, enter:


    global# ls /dev/cua
    a     a000  a001  a002  a003  a004  a005  a006  a007  b

    In the zone flexlm, enter:


    flexlm# ls /dev/cua
    a002  a003  a004  a005


    Example 9 Setting a Cap for a Zone to Three CPUs

    The following sequence uses the zonecfg command to set the CPU cap for a zone to three CPUs.


    zonecfg:myzone> add capped-cpu
    zonecfg:myzone:capped-cpu> set ncpus=3
    zonecfg:myzone:capped-cpu> end
    

    The preceding sequence, which uses the capped-cpu property, is equivalent to the following sequence, which makes use of the zone.cpu-cap resource control.


    zonecfg:myzone> add rctl
    zonecfg:myzone:rctl> set name=zone.cpu-cap
    zonecfg:myzone:rctl> add value (priv=privileged,limit=300,action=none)
    zonecfg:myzone:rctl> end
    


    Example 10 Using kstat to Monitor CPU Caps

    The following command displays information about all CPU caps.


    # kstat -n /cpucaps/
    module: caps                            instance: 0     
    name:   cpucaps_project_0               class:    project_caps
            above_sec                       0
            below_sec                       2157
            crtime                          821.048183159
            maxusage                        2
            nwait                           0
            snaptime                        235885.637253027
            usage                           0
            value                           18446743151372347932
            zonename                        global
    
    module: caps                            instance: 0     
    name:   cpucaps_project_1               class:    project_caps
            above_sec                       0
            below_sec                       0
            crtime                          225339.192787265
            maxusage                        5
            nwait                           0
            snaptime                        235885.637591677
            usage                           5
            value                           18446743151372347932
            zonename                        global
    
    module: caps                            instance: 0     
    name:   cpucaps_project_201             class:    project_caps
            above_sec                       0
            below_sec                       235105
            crtime                          780.37961782
            maxusage                        100
            nwait                           0
            snaptime                        235885.637789687
            usage                           43
            value                           100
            zonename                        global
    
    module: caps                            instance: 0     
    name:   cpucaps_project_202             class:    project_caps
            above_sec                       0
            below_sec                       235094
            crtime                          791.72983782
            maxusage                        100
            nwait                           0
            snaptime                        235885.637967512
            usage                           48
            value                           100
            zonename                        global
    
    module: caps                            instance: 0     
    name:   cpucaps_project_203             class:    project_caps
            above_sec                       0
            below_sec                       235034
            crtime                          852.104401481
            maxusage                        75
            nwait                           0
            snaptime                        235885.638144304
            usage                           47
            value                           100
            zonename                        global
    
    module: caps                            instance: 0     
    name:   cpucaps_project_86710           class:    project_caps
            above_sec                       22
            below_sec                       235166
            crtime                          698.441717859
            maxusage                        101
            nwait                           0
            snaptime                        235885.638319871
            usage                           54
            value                           100
            zonename                        global
    
    module: caps                            instance: 0     
    name:   cpucaps_zone_0                  class:    zone_caps
            above_sec                       100733
            below_sec                       134332
            crtime                          821.048177123
            maxusage                        207
            nwait                           2
            snaptime                        235885.638497731
            usage                           199
            value                           200
            zonename                        global
    
    module: caps                            instance: 1     
    name:   cpucaps_project_0               class:    project_caps
            above_sec                       0
            below_sec                       0
            crtime                          225360.256448422
            maxusage                        7
            nwait                           0
            snaptime                        235885.638714404
            usage                           7
            value                           18446743151372347932
            zonename                        test_001
    
    module: caps                            instance: 1     
    name:   cpucaps_zone_1                  class:    zone_caps
            above_sec                       2
            below_sec                       10524
            crtime                          225360.256440278
            maxusage                        106
            nwait                           0
            snaptime                        235885.638896443
            usage                           7
            value                           100
            zonename                        test_001


    Example 11 Displaying CPU Caps for a Specific Zone or Project

    Using the kstat -c and -i options, you can display CPU caps for a specific zone or project, as below. The first command produces a display for a specific project, the second for the same project within zone 1.


    # kstat -c project_caps
    
    # kstat -c project_caps -i 1
    

Exit Status

    The following exit values are returned:

    0

    Successful completion.

    1

    An error occurred.

    2

    Invalid usage.

Attributes

    See attributes(5) for descriptions of the following attributes:

    ATTRIBUTE TYPE

    ATTRIBUTE VALUE

    Availability

    SUNWzoneu

    Interface Stability

    Volatile

Notes

    All character data used by zonecfg must be in US-ASCII encoding.


2010-09-05 00:29:04

Name

    mdlogd – Solaris Volume Manager daemon

Synopsis

    mdlogd 
    

Description

    mdlogd implements a simple daemon that watches the system console looking for messages written by the Solaris Volume Manager. When a Solaris Volume Manager message is detected, mdlogd sends a generic SNMP trap.

    To enable traps, you must configure mdlogd into the SNMP framework. See Solaris Volume Manager Administration Guide.

Usage

    mdlogd implements the following SNMP MIB:

    SOLARIS-VOLUME-MGR-MIB DEFINITIONS ::= BEGIN
            IMPORTS
                     enterprises FROM RFC1155-SMI
                     DisplayString FROM SNMPv2-TC;
    
            -- Sun Private MIB for Solaris Volume Manager
    
    
            sun       OBJECT IDENTIFIER ::= { enterprises 42 }
            sunSVM       OBJECT IDENTIFIER ::= { sun 104 }
    
            -- this is actually just the string from /dev/log that
            -- matches the md: regular expressions.
            -- This is an interim SNMP trap generator to provide
            -- information until a more complete version is available.
    
            -- this definition is a formalization of the old
            -- Solaris DiskSuite mdlogd trap mib.
    
            svmOldTrapString OBJECT-TYPE
                            SYNTAX DisplayString (SIZE (0..255))
                            ACCESS read-only
                            STATUS mandatory
                            DESCRIPTION
                            "This is the matched string that
                             was obtained from /dev/log."
            ::= { sunSVM 1 }
    
            -- SVM Compatibility ( error trap )
    
            svmNoticeTrap   TRAP-TYPE
                            ENTERPRISE sunSVM
                            VARIABLES { svmOldTrapString }
                            DESCRIPTION
                                    "SVM error log trap for NOTICE.
                                     This matches 'NOTICE: md:'"
            ::= 1
    
            svmWarningTrap  TRAP-TYPE
                            ENTERPRISE sunSVM
                            VARIABLES { svmOldTrapString }
                            DESCRIPTION
                                    "SVM error log trap for WARNING..
                                     This matches 'WARNING: md:'"
            ::= 2
    
            svmPanicTrap    TRAP-TYPE
                            ENTERPRISE sunSVM
                            VARIABLES { svmOldTrapString }
                            DESCRIPTION
                                    "SVM error log traps for PANIC..
                                    This matches 'PANIC: md:'"
            ::= 3
    END

Attributes

    See attributes(5) for descriptions of the following attributes:

    ATTRIBUTE TYPE

    ATTRIBUTE VALUE

    Availability

    SUNWlvma, SUNWlvmr

    Interface Stability

    Obsolete


2010-09-05 00:27:35

Name

    zdump – time zone dumper

Synopsis

    zdump [--version] [-v] [-c [loyear,]hiyear] [zonename]...

Description

    The zdump command prints the current time for each time zone (zonename) listed on the command line. Specify zonename as the name of the time zone database file relative to /usr/share/lib/zoneinfo.

    Specifying an invalid time zone (zonename) to zdump does not return an error; rather, zdump uses GMT0. This is consistent with the behavior of the library calls; zdump reflects the same behavior as the time routines in libc. See ctime(3C) and mktime(3C).
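
    For example (the output is illustrative):


    example$ zdump US/Eastern Asia/Seoul
    US/Eastern  Sat Sep  4 11:34:26 2010 EDT
    Asia/Seoul  Sun Sep  5 00:34:26 2010 KST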

Options

    The following options are supported:

    --version

    Outputs version information and exits.

    -v

    Displays the entire contents of the time zone database file for zonename. Prints the time at the lowest possible time value; the time one day after the lowest possible time value; the times both one second before and exactly at each time at which the rules for computing local time change; the time at the highest possible time value; and the time at one day less than the highest possible time value. See mktime(3C) and ctime(3C) for information regarding time value (time_t). Each line of output ends with isdst=1 if the given time is Daylight Saving Time, or isdst=0 otherwise.

    -c [loyear,]hiyear

    Cuts off the verbose output near the start of the given year(s). By default, the program cuts off verbose output near the start of the years -500 and 2500.
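
    A sketch combining -v and -c; the two lines shown are an abridged, illustrative excerpt around a daylight saving transition:


    example$ zdump -v -c 2010,2011 US/Eastern
    US/Eastern  Sun Mar 14 06:59:59 2010 UTC = Sun Mar 14 01:59:59 2010 EST isdst=0
    US/Eastern  Sun Mar 14 07:00:00 2010 UTC = Sun Mar 14 03:00:00 2010 EDT isdst=1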

Exit Status

    The following exit values are returned:

    0

    Successful completion.

    1

    An error occurred.

Files

    /usr/share/lib/zoneinfo

    Standard zone information directory

Attributes

    See attributes(5) for descriptions of the following attributes:

    ATTRIBUTE TYPE

    ATTRIBUTE VALUE

    Availability

    SUNWcsu

    Interface Stability

    Stable


2010-09-05 00:26:23

Name

    snoop – capture and inspect network packets

Synopsis

    snoop [-aqrCDNPSvV] [-t [r | a | d]] [-c maxcount]
     [-d device] [-i filename] [-n filename] [-o filename]
     [-p first [,last]] [-s snaplen] [-x offset [,length]]
     [expression]

Description

    snoop captures packets from the network and displays their contents. snoop uses both the network packet filter and streams buffer modules to provide efficient capture of packets from the network. Captured packets can be displayed as they are received, or saved to a file (which is RFC 1761–compliant) for later inspection.

    snoop can display packets in a single-line summary form or in verbose multi-line forms. In summary form, with the exception of certain VLAN packets, only the data pertaining to the highest level protocol is displayed. If a packet has a VLAN header and its VLAN ID is non-zero, then snoop will show that the packet is VLAN tagged. For example, an NFS packet will have only NFS information displayed. Except for VLAN information under the condition just described, the underlying RPC, UDP, IP, and Ethernet frame information is suppressed, but can be displayed if either of the verbose options is chosen.

    In the absence of a name service, such as LDAP or NIS, snoop displays host names as numeric IP addresses.

    snoop requires an interactive interface.

Options

    -C

    List the code generated from the filter expression for either the kernel packet filter, or snoop's own filter.

    -D

    Display number of packets dropped during capture on the summary line.

    -N

    Create an IP address-to-name file from a capture file. This must be set together with the -i option that names a capture file. The address-to-name file has the same name as the capture file with .names appended. This file records the IP address to hostname mapping at the capture site and increases the portability of the capture file. Generate a .names file if the capture file is to be analyzed elsewhere. Packets are not displayed when this flag is used.
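
    A sketch of the round trip (file names hypothetical): capture to a file, generate /tmp/pkts.cap.names, then redisplay; the .names file is loaded automatically by -i:


    example# snoop -o /tmp/pkts.cap
    example# snoop -i /tmp/pkts.cap -N
    example# snoop -i /tmp/pkts.cap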

    -P

    Capture packets in non-promiscuous mode. Only broadcast, multicast, or packets addressed to the host machine will be seen.

    -S

    Display size of the entire link layer frame in bytes on the summary line.

    -V

    Verbose summary mode. This is halfway between summary mode and verbose mode in degree of verbosity. Instead of displaying just the summary line for the highest level protocol in a packet, it displays a summary line for each protocol layer in the packet. For instance, for an NFS packet it will display a line each for the ETHER, IP, UDP, RPC and NFS layers. Verbose summary mode output may be easily piped through grep to extract packets of interest. For example, to view only RPC summary lines, enter the following:

    example# snoop -i rpc.cap -V | grep RPC

    -a

    Listen to packets on /dev/audio (warning: can be noisy).

    -c maxcount

    Quit after capturing maxcount packets. Otherwise keep capturing until there is no disk space left or until interrupted with Control-C.

    -d device

    Receive packets from the network using the interface specified by device, for example, eri0 or hme0. The program netstat(1M), when invoked with the -i flag, lists all the interfaces that a machine has. Normally, snoop will automatically choose the first non-loopback interface it finds.

    -i filename

    Display packets previously captured in filename. Without this option, snoop reads packets from the network interface. If a filename.names file is present, it is automatically loaded into the snoop IP address-to-name mapping table (See -N flag).

    -n filename

    Use filename as an IP address-to-name mapping table. This file must have the same format as the /etc/hosts file (IP address followed by the hostname).

    -o filename

    Save captured packets in filename as they are captured. (This filename is referred to as the “capture file”.) The format of the capture file is RFC 1761–compliant. During packet capture, a count of the number of packets saved in the file is displayed. If you wish just to count packets without saving to a file, name the file /dev/null.

    -p first [ , last ]

    Select one or more packets to be displayed from a capture file. The first packet in the file is packet number 1.

    -q

    When capturing network packets into a file, do not display the packet count. This can improve packet capturing performance.

    -r

    Do not resolve the IP address to the symbolic name. This prevents snoop from generating network traffic while capturing and displaying packets. However, if the -n option is used, and an address is found in the mapping file, its corresponding name will be used.

    -s snaplen

    Truncate each packet after snaplen bytes. Usually the whole packet is captured. This option is useful if only certain packet header information is required. The packet truncation is done within the kernel giving better utilization of the streams packet buffer. This means less chance of dropped packets due to buffer overflow during periods of high traffic. It also saves disk space when capturing large traces to a capture file. To capture only IP headers (no options) use a snaplen of 34. For UDP use 42, and for TCP use 54. You can capture RPC headers with a snaplen of 80 bytes. NFS headers can be captured in 120 bytes.
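
    For example, to capture only TCP headers to a hypothetical file:


    example# snoop -s 54 -o /tmp/tcp-hdrs.cap tcp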

    -t [ r | a | d ]

    Time-stamp presentation. Time-stamps are accurate to within 4 microseconds. The default is for times to be presented in d (delta) format (the time since receiving the previous packet). Option a (absolute) gives wall-clock time. Option r (relative) gives time relative to the first packet displayed. This can be used with the -p option to display time relative to any selected packet.

    -v

    Verbose mode. Print packet headers in lots of detail. This display consumes many lines per packet and should be used only on selected packets.

    -x offset [ , length ]

    Display packet data in hexadecimal and ASCII format. The offset and length values select a portion of the packet to be displayed. To display the whole packet, use an offset of 0. If a length value is not provided, the rest of the packet is displayed.

Operands

    expression

    Select packets either from the network or from a capture file. Only packets for which the expression is true will be selected. If no expression is provided it is assumed to be true.

    Given a filter expression, snoop generates code for either the kernel packet filter or for its own internal filter. If capturing packets with the network interface, code for the kernel packet filter is generated. This filter is implemented as a streams module, upstream of the buffer module. The buffer module accumulates packets until it becomes full and passes the packets on to snoop. The kernel packet filter is very efficient, since it rejects unwanted packets in the kernel before they reach the packet buffer or snoop. The kernel packet filter has some limitations in its implementation; it is possible to construct filter expressions that it cannot handle. In this event, snoop tries to split the filter and do as much filtering in the kernel as possible. The remaining filtering is done by the packet filter for snoop. The -C flag can be used to view generated code for either the packet filter for the kernel or the packet filter for snoop. If packets are read from a capture file using the -i option, only the packet filter for snoop is used.

    A filter expression consists of a series of one or more boolean primitives that may be combined with boolean operators (AND, OR, and NOT). Normal precedence rules for boolean operators apply. Order of evaluation of these operators may be controlled with parentheses. Since parentheses and other filter expression characters are known to the shell, it is often necessary to enclose the filter expression in quotes. Refer to Example 2 for information about setting up more efficient filters.
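
    For instance, quoting keeps the shell from interpreting the parentheses in a filter such as the following (host name hypothetical):


    example# snoop 'host pinky and (port 2049 or port 111) and not icmp'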

    The primitives are:

    host hostname

    True if the source or destination address is that of hostname. The hostname argument may be a literal address. The keyword host may be omitted if the name does not conflict with the name of another expression primitive. For example, “pinky” selects packets transmitted to or received from the host pinky, whereas “pinky and dinky” selects packets exchanged between hosts pinky AND dinky.

    The type of address used depends on the primitive which precedes the host primitive. The possible qualifiers are inet, inet6, and ether; these primitives are discussed below. Having none of them present is equivalent to “inet host hostname or inet6 host hostname”. In other words, snoop tries to filter on all IP addresses associated with hostname.

    inet or inet6

    A qualifier that modifies the host primitive that follows. If it is inet, then snoop tries to filter on all IPv4 addresses returned from a name lookup. If it is inet6, snoop tries to filter on all IPv6 addresses returned from a name lookup.

    ipaddr, atalkaddr, or etheraddr

    Literal addresses, IP dotted, AppleTalk dotted, and Ethernet colon are recognized. For example,

    • “172.16.40.13” matches all packets with that IP address as source or destination;

    • “2::9255:a00:20ff:fe73:6e35” matches all packets with that IPv6 address as source or destination;

    • “65281.13” matches all packets with that AppleTalk address;

    • “8:0:20:f:b1:51” matches all packets with that Ethernet address as source or destination.

    An Ethernet address beginning with a letter is interpreted as a hostname. To avoid this, prepend a zero when specifying the address. For example, if the Ethernet address is aa:0:45:23:52:44, specify it by adding a leading zero to make it 0aa:0:45:23:52:44.

    from or src

    A qualifier that modifies the following host, net, ipaddr, atalkaddr, etheraddr, port or rpc primitive to match just the source address, port, or RPC reply.

    to or dst

    A qualifier that modifies the following host, net, ipaddr, atalkaddr, etheraddr, port or rpc primitive to match just the destination address, port, or RPC call.

    ether

    A qualifier that modifies the following host primitive to resolve a name to an Ethernet address. Normally, IP address matching is performed. This option is not supported on media such as IPoIB (IP over InfiniBand).

    ethertype number

    True if the Ethernet type field has value number. If number is not 0x8100 (VLAN) and the packet is VLAN tagged, then the expression will match the encapsulated Ethernet type.

    ip, ip6, arp, rarp, pppoed, pppoes

    True if the packet is of the appropriate ethertype.

    vlan

    True if the packet has ethertype VLAN and the VLAN ID is not zero.

    vlan-id id

    True for packets of ethertype VLAN with the id id.

    pppoe

    True if the ethertype of the packet is either pppoed or pppoes.

    broadcast

    True if the packet is a broadcast packet. Equivalent to ether[2:4] = 0xffffffff for Ethernet. This option is not supported on media such as IPoIB (IP over InfiniBand).

    multicast

    True if the packet is a multicast packet. Equivalent to “ether[0] & 1 = 1” on Ethernet. This option is not supported on media such as IPoIB (IP over InfiniBand).

    bootp, dhcp

    True if the packet is an unfragmented IPv4 UDP packet with either a source port of BOOTPS (67) and a destination port of BOOTPC (68), or a source port of BOOTPC (68) and a destination port of BOOTPS (67).

    dhcp6

    True if the packet is an unfragmented IPv6 UDP packet with either a source port of DHCPV6-SERVER (547) and a destination port of DHCPV6-CLIENT (546), or a source port of DHCPV6-CLIENT (546) and a destination port of DHCPV6-SERVER (547).

    apple

    True if the packet is an Apple Ethertalk packet. Equivalent to “ethertype 0x809b or ethertype 0x80f3”.

    decnet

    True if the packet is a DECNET packet.

    greater length

    True if the packet is longer than length.

    less length

    True if the packet is shorter than length.

    udp, tcp, icmp, icmp6, ah, esp

    True if the IP or IPv6 protocol is of the appropriate type.

    net net

    True if either the IP source or destination address has a network number of net. The from or to qualifier may be used to select packets for which the network number occurs only in the source or destination address.

    port port

    True if either the source or destination port is port. The port may be either a port number or name from /etc/services. The tcp or udp primitives may be used to select TCP or UDP ports only. The from or to qualifier may be used to select packets for which the port occurs only as the source or destination.

    rpc prog [ , vers [ , proc ] ]

    True if the packet is an RPC call or reply packet for the protocol identified by prog. The prog may be either the name of an RPC protocol from /etc/rpc or a program number. The vers and proc may be used to further qualify the program version and procedure number, for example, rpc nfs,2,0 selects all calls and replies for the NFS null procedure. The to or from qualifier may be used to select either call or reply packets only.

    ldap

    True if the packet is an LDAP packet on port 389.

    gateway host

    True if the packet used host as a gateway, that is, the Ethernet source or destination address was for host but not the IP address. Equivalent to “ether host host and not host host”.

    nofrag

    True if the packet is unfragmented or is the first in a series of IP fragments. Equivalent to ip[6:2] & 0x1fff = 0.

    expr relop expr

    True if the relation holds, where relop is one of >, <, >=, <=, =, !=, and expr is an arithmetic expression composed of numbers, packet field selectors, the length primitive, and the arithmetic operators +, -, *, &, |, ^, and %. The arithmetic operators within expr are evaluated before the relational operator, and normal precedence rules apply between the arithmetic operators, such as multiplication before addition. Parentheses may be used to control the order of evaluation. To use the value of a field in the packet, use the following syntax:


    base[expr [: size ] ]

    where expr gives the value of an offset into the packet, relative to a base offset that may be ether, ip, ip6, udp, tcp, or icmp. The size value specifies the size of the field. If it is not given, 1 is assumed; the other legal values are 2 and 4. For example,

    ether[0] & 1 = 1

    is equivalent to multicast

    ether[2:4] = 0xffffffff

    is equivalent to broadcast.

    ip[ip[0] & 0xf * 4 : 2] = 2049

    is equivalent to udp[0:2] = 2049

    ip[0] & 0xf > 5

    selects IP packets with options.

    ip[6:2] & 0x1fff = 0

    eliminates IP fragments.

    udp and ip[6:2]&0x1fff = 0 and udp[6:2] != 0

    finds all packets with UDP checksums.

    The length primitive may be used to obtain the length of the packet. For instance “length > 60” is equivalent to “greater 60”, and “ether[length - 1]” obtains the value of the last byte in a packet.

    and

    Perform a logical AND operation between two boolean values. The AND operation is implied by the juxtaposition of two boolean expressions, for example “dinky pinky” is the same as “dinky AND pinky”.

    or or ,

    Perform a logical OR operation between two boolean values. A comma may be used instead, for example, “dinky,pinky” is the same as “dinky OR pinky”.

    not or !

    Perform a logical NOT operation on the following boolean value. This operator is evaluated before AND or OR.

    slp

    True if the packet is an SLP packet.

    sctp

    True if the packet is an SCTP packet.

    ospf

    True if the packet is an OSPF packet.

Examples


    Example 1 Using the snoop Command

    Capture all packets and display them as they are received:


    example# snoop
    

    Capture packets with host funky as either the source or destination and display them as they are received:


    example# snoop funky
    

    Capture packets between funky and pinky and save them to a file. Then inspect the packets using times (in seconds) relative to the first captured packet:


    example# snoop -o cap funky pinky
    example# snoop -i cap -t r | more
    

    To look at selected packets in another capture file:


    example# snoop -i pkts -p 99,108
     99   0.0027   boutique -> sunroof    NFS C GETATTR FH=8E6C
    100   0.0046   sunroof -> boutique    NFS R GETATTR OK
    101   0.0080   boutique -> sunroof    NFS C RENAME FH=8E6C MTra00192 to .nfs08
    102   0.0102   marmot -> viper        NFS C LOOKUP FH=561E screen.r.13.i386
    103   0.0072   viper -> marmot        NFS R LOOKUP No such file or directory
    104   0.0085   bugbomb -> sunroof     RLOGIN C PORT=1023 h
    105   0.0005   kandinsky -> sparky    RSTAT C Get Statistics
    106   0.0004   beeblebrox -> sunroof  NFS C GETATTR FH=0307
    107   0.0021   sparky -> kandinsky    RSTAT R
    108   0.0073   office -> jeremiah     NFS C READ FH=2584 at 40960 for 8192

    To look at packet 101 in more detail:


    example# snoop -i pkts -v -p101
    ETHER:  ----- Ether Header -----
    ETHER:
    ETHER:  Packet 101 arrived at 16:09:53.59
    ETHER:  Packet size ="210" bytes
    ETHER:  Destination ="8:0:20:1:3d:94," Sun
    ETHER:  Source      ="8:0:69:1:5f:e," Silicon Graphics
    ETHER:  Ethertype ="0800" (IP)
    ETHER:
    IP:   ----- IP Header -----
    IP:
    IP:   Version ="4," header length ="20" bytes
    IP:   Type of service ="00
    IP:" ..0. .... ="routine
    IP:" ...0 .... ="normal" delay
    IP:         .... 0... ="normal" throughput
    IP:         .... .0.. ="normal" reliability
    IP:   Total length ="196" bytes
    IP:   Identification 19846
    IP:   Flags ="0X
    IP:" .0.. .... ="may" fragment
    IP:   ..0. .... ="more" fragments
    IP:   Fragment offset ="0" bytes
    IP:   Time to live ="255" seconds/hops
    IP:   Protocol ="17" (UDP)
    IP:   Header checksum ="18DC
    IP:" Source address ="172.16.40.222," boutique
    IP:   Destination address ="172.16.40.200," sunroof
    IP:
    UDP:  ----- UDP Header -----
    UDP:
    UDP:  Source port ="1023
    UDP:" Destination port ="2049" (Sun RPC)
    UDP:  Length ="176
    UDP:" Checksum ="0
    UDP:
    RPC:" ----- SUN RPC Header -----
    RPC:
    RPC:  Transaction id ="665905
    RPC:" Type ="0" (Call)
    RPC:  RPC version ="2
    RPC:" Program ="100003" (NFS), version ="2," procedure ="1
    RPC:" Credentials: Flavor ="1" (Unix), len ="32" bytes
    RPC:     Time ="06-Mar-90" 07:26:58
    RPC:     Hostname ="boutique
    RPC:" Uid ="0," Gid ="1
    RPC:" Groups ="1
    RPC:" Verifier   : Flavor ="0" (None), len ="0" bytes
    RPC:
    NFS:  ----- SUN NFS -----
    NFS:
    NFS:  Proc ="11" (Rename)
    NFS:  File handle ="000016430000000100080000305A1C47
    NFS:" 597A0000000800002046314AFC450000
    NFS:  File name ="MTra00192
    NFS:" File handle ="000016430000000100080000305A1C47
    NFS:" 597A0000000800002046314AFC450000
    NFS:  File name =".nfs08
    NFS:" 

    To view just the NFS packets between sunroof and boutique:


    example# snoop -i pkts rpc nfs and sunroof and boutique
    1   0.0000   boutique -> sunroof    NFS C GETATTR FH=8E6C
    2   0.0046   sunroof -> boutique    NFS R GETATTR OK
    3   0.0080   boutique -> sunroof    NFS C RENAME FH=8E6C MTra00192 to .nfs08

    To save these packets to a new capture file:


    example# snoop -i pkts -o pkts.nfs rpc nfs sunroof boutique
    

    When encapsulated packets are viewed, an indicator of the encapsulation is displayed:


    example# snoop ip-in-ip
    sunroof -> boutique ICMP Echo request    (1 encap)

    If -V is used on an encapsulated packet:


    example# snoop -V ip-in-ip
    sunroof -> boutique  ETHER Type=0800 (IP), size = 118 bytes
    sunroof -> boutique  IP D=172.16.40.222 S=172.16.40.200 LEN=104, ID=27497
    sunroof -> boutique  IP D=10.1.1.2 S=10.1.1.1 LEN=84, ID=27497
    sunroof -> boutique  ICMP Echo request


    Example 2 Setting Up A More Efficient Filter

    To set up a more efficient filter, the following filters should be used toward the end of the expression, so that the first part of the expression can be set up in the kernel: greater, less, port, rpc, nofrag, and relop. The presence of OR makes it difficult to split the filtering when using these primitives that cannot be set in the kernel. Instead, use parentheses to enforce the primitives that should be OR'd.

    To capture packets between funky and pinky of type tcp or udp on port 80:


    example# snoop funky and pinky and port 80 and tcp or udp
    

    Since the primitive port cannot be handled by the kernel filter, and there is also an OR in the expression, a more efficient way to filter is to move the OR to the end of the expression and to use parentheses to enforce the OR between tcp and udp:


    example# snoop funky and pinky and (tcp or udp) and port 80
    

Exit Status

    0

    Successful completion.

    1

    An error occurred.

Files

    /dev/audio

    Symbolic link to the system's primary audio device.

    /dev/null

    The null file.

    /etc/hosts

    Host name database.

    /etc/rpc

    RPC program number data base.

    /etc/services

    Internet services and aliases.

Attributes

    See attributes(5) for descriptions of the following attributes:

    ATTRIBUTE TYPE     ATTRIBUTE VALUE
    Availability       SUNWrcmdc

Warnings

    The processing overhead is much higher for realtime packet interpretation. Consequently, the packet drop count may be higher. For more reliable capture, output raw packets to a file using the -o option and analyze the packets off-line.

    Unfiltered packet capture imposes a heavy processing load on the host computer, particularly if the captured packets are interpreted realtime. This processing load further increases if verbose options are used. Since heavy use of snoop may deny computing resources to other processes, it should not be used on production servers. Heavy use of snoop should be restricted to a dedicated computer.

    snoop does not reassemble IP fragments. Interpretation of higher level protocol halts at the end of the first IP fragment.

    snoop may generate extra packets as a side-effect of its use. For example it may use a network name service (NIS or NIS+) to convert IP addresses to host names for display. Capturing into a file for later display can be used to postpone the address-to-name mapping until after the capture session is complete. Capturing into an NFS-mounted file may also generate extra packets.

    Setting the snaplen (-s option) to small values may remove header information that is needed to interpret higher level protocols. The exact cutoff value depends on the network and protocols being used. For NFS Version 2 traffic using UDP on 10 Mb/s Ethernet, do not set snaplen less than 150 bytes. For NFS Version 3 traffic using TCP on 100 Mb/s Ethernet, snaplen should be 250 bytes or more.
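
    For example, a capture honoring that guideline for NFS Version 3 traffic might look like the following; the interface name hme0 and the file name nfs3.cap are only illustrative:


    example# snoop -d hme0 -s 250 -o nfs3.cap port 2049   # hme0 and nfs3.cap are illustrative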

    snoop requires information from an RPC request to fully interpret an RPC reply. If an RPC reply in a capture file or packet range does not have a request preceding it, then only the RPC reply header will be displayed.


'Device & Language > Solaris 10' 카테고리의 다른 글

mdlogd– Solaris Volume Manager daemon  (0) 2010.09.05
zdump– time zone dumper  (0) 2010.09.05
ypserv, ypxfrd– NIS server and binder processes  (0) 2010.09.05
ypbind– NIS binder process  (0) 2010.09.05
mkdir– make directories  (0) 2010.09.05
2010-09-05 00:23:39

Name

    ypbind– NIS binder process

Synopsis

    /usr/lib/netsvc/yp/ypbind [-broadcast | -ypset | -ypsetme]

Description

    NIS provides a simple network lookup service consisting of databases and processes. The databases are stored at the machine that runs an NIS server process. The programmatic interface to NIS is described in ypclnt(3NSL). Administrative tools are described in ypinit(1M), ypwhich(1), and ypset(1M). Tools to see the contents of NIS maps are described in ypcat(1), and ypmatch(1).

    ypbind is a daemon process that is activated at system startup time from the svc:/network/nis/client:default service. By default, it is invoked as ypbind -broadcast. ypbind runs on all client machines that are set up to use NIS. See sysidtool(1M). The function of ypbind is to remember information that lets all NIS client processes on a node communicate with some NIS server process. ypbind must run on every machine that has NIS client processes. The NIS server may or may not be running on the same node, but must be running somewhere on the network. If the NIS server is a NIS+ server in NIS (YP) compatibility mode, see the NOTES section of the ypfiles(4) man page for more information.

    The information ypbind remembers is called a binding: the association of a domain name with an NIS server. The process of binding is driven by client requests. As a request for an unbound domain comes in, the ypbind process, if started with the -broadcast option, broadcasts on the net trying to find an NIS server: either a ypserv process serving the domain, or an rpc.nisd process in “YP-compatibility mode” serving an NIS+ directory whose name is the same (case sensitive) as the domain in the client request. Since the binding is established by broadcasting, there must be at least one NIS server on the net. If started without the -broadcast option, the ypbind process steps through the list of NIS servers that was created by ypinit -c for the requested domain. There must be an NIS server process on at least one of the hosts in the NIS servers file. All the hosts in the NIS servers file must be listed in the /etc/hosts file along with their IP addresses. Once a domain is bound by ypbind, that same binding is given to every client process on the node. The ypbind process on the local node or a remote node may be queried for the binding of a particular domain by using the ypwhich(1) command.

    If ypbind is unable to speak to the NIS server process it is bound to, it marks the domain as unbound, tells the client process that the domain is unbound, and tries to bind the domain once again. Requests received for an unbound domain wait until the requested domain is bound. In general, a bound domain is marked as unbound when the node running the NIS server crashes or gets overloaded. In such a case, ypbind tries to bind to another NIS server using the process described above.

    ypbind also accepts requests to set its binding for a particular domain. Such a request is usually generated by the ypset(1M) command. In order for ypset to work, ypbind must have been invoked with the -ypset or -ypsetme flag.

Options

    -broadcast

    Send a broadcast datagram using UDP/IP that requests the information needed to bind to a specific NIS server. This option is analogous to ypbind with no options in earlier Sun releases and is recommended for ease of use.

    -ypset

    Allow users from any remote machine to change the binding by means of the ypset command. By default, no one can change the binding. This option is insecure.

    -ypsetme

    Only allow root on the local machine to change the binding to a desired server by means of the ypset command. ypbind can verify the caller is indeed a root user by accepting such requests only on the loopback transport. By default, no external process can change the binding.
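
    For example, a local root user could point the client at a chosen server and verify the binding as follows; the server name nisserver1 is only illustrative:


    example# /usr/lib/netsvc/yp/ypbind -ypsetme
    example# ypset nisserver1   # nisserver1 is an illustrative host name
    example# ypwhich
    nisserver1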

Files

    /var/yp/binding/ypdomain/ypservers

    /etc/inet/hosts

Attributes

    See attributes(5) for descriptions of the following attributes:

    ATTRIBUTE TYPE     ATTRIBUTE VALUE
    Availability       SUNWnisu

Notes

    ypbind supports multiple domains. The ypbind process can maintain bindings to several domains and their servers; the default domain is the one specified by the domainname(1M) command at startup time.

    The -broadcast option works only on the UDP transport. It is insecure since it trusts “any” machine on the net that responds to the broadcast request and poses as an NIS server.

    The ypbind service is managed by the service management facility, smf(5), under the service identifier:


    svc:/network/nis/client:default

    Administrative actions on this service, such as enabling, disabling, or requesting restart, can be performed using svcadm(1M). The service's status can be queried using the svcs(1) command.
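
    For example:


    example# svcadm restart svc:/network/nis/client:default
    example# svcs nis/client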


2010-09-05 00:22:11

NAME

    mkdir– make directories

SYNOPSIS

    mkdir [-m mode] [-p] dir

DESCRIPTION

    The mkdir command creates the named directories in mode 777 (possibly altered by the file mode creation mask umask(1)).

    Standard entries in a directory (for instance, the files “.”, for the directory itself, and “..”, for its parent) are made automatically. mkdir cannot create these entries by name. Creation of a directory requires write permission in the parent directory.

    The owner-ID and group-ID of the new directories are set to the process's effective user-ID and group-ID, respectively. mkdir calls the mkdir(2) system call.

    setgid and mkdir

      To change the setgid bit on a newly created directory, you must use chmod g+s or chmod g-s after executing mkdir.

      The setgid bit setting is inherited from the parent directory.
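
      For example (the directory name proj is only illustrative):


      example% mkdir proj
      example% chmod g+s proj   # proj is an illustrative directory name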

OPTIONS

    The following options are supported:

    -m mode

    This option allows users to specify the mode to be used for new directories. Choices for modes can be found in chmod(1).

    -p

    With this option, mkdir creates dir by creating all the non-existing parent directories first. The mode given to intermediate directories will be the difference between 777 and the bits set in the file mode creation mask. The difference, however, must be at least 300 (write and execute permission for the user).
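
    For example, with a file mode creation mask of 022, the intermediate directories created below receive mode 755 (777 minus 022); the path names are only illustrative:


    example% umask 022
    example% mkdir -p docs/2010/sep   # illustrative path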

OPERANDS

    The following operand is supported:

    dir

    A path name of a directory to be created.

USAGE

    See largefile(5) for the description of the behavior of mkdir when encountering files greater than or equal to 2 Gbyte (2^31 bytes).

EXAMPLES


    Example 1 Using mkdir

    The following example:


    example% mkdir -p ltr/jd/jan


    creates the subdirectory structure ltr/jd/jan.


ENVIRONMENT VARIABLES

    See environ(5) for descriptions of the following environment variables that affect the execution of mkdir: LC_CTYPE, LC_MESSAGES, and NLSPATH.

EXIT STATUS

    The following exit values are returned:

    0

    All the specified directories were created successfully or the -p option was specified and all the specified directories now exist.

    >0

    An error occurred.

ATTRIBUTES

    See attributes(5) for descriptions of the following attributes:

    ATTRIBUTE TYPE     ATTRIBUTE VALUE
    Availability       SUNWcsu
    CSI                enabled

2010-09-05 00:20:58

Name

    zfs– configures ZFS file systems

Synopsis

    zfs [-?]
    zfs create [-p] [-o property=value] ... filesystem
    
    zfs create [-ps] [-b blocksize] [-o property=value] ... -V size volume
    
    zfs destroy [-rRf] filesystem|volume|snapshot
    
    zfs snapshot [-r] [-o property=value]... 
          filesystem@snapname|volume@snapname
    
    zfs rollback [-rRf] snapshot
    
    zfs clone [-p] [-o property=value] ... snapshot filesystem|volume
    
    zfs promote clone-filesystem
    
    zfs rename filesystem|volume|snapshot 
         filesystem|volume|snapshot
    
    zfs rename [-p] filesystem|volume filesystem|volume
    
    zfs rename -r snapshot snapshot
    
    zfs list [-r|-d depth] [-H] [-o property[,...]] [-t type[,...]]
         [-s property] ... [-S property] ... [filesystem|volume|snapshot] ...
    
    zfs set property=value filesystem|volume|snapshot ...
    
    zfs get [-r|-d depth] [-Hp] [-o field[,...]] [-s source[,...]]
          "all" | property[,...] filesystem|volume|snapshot ...
    
    zfs inherit [-r] property filesystem|volume|snapshot ...
    
    zfs upgrade [-v]
    
    zfs upgrade [-r] [-V version] -a | filesystem
    
    zfs userspace [-niHp] [-o field[,...]] [-sS field] ...
         [-t type [,...]] filesystem|snapshot
    
    zfs groupspace [-niHp] [-o field[,...]] [-sS field] ...
         [-t type [,...]] filesystem|snapshot
    
    zfs mount 
    
    zfs mount [-vO] [-o options] -a | filesystem
    
    zfs unmount [-f] -a | filesystem|mountpoint
    
    zfs share -a | filesystem
    
    zfs unshare -a | filesystem|mountpoint
    
    zfs send [-vR] [-[iI] snapshot] snapshot
    
    zfs receive [-vnFu] filesystem|volume|snapshot
    
    zfs receive [-vnFu] -d filesystem
    
    zfs allow filesystem|volume
    
    zfs allow [-ldug] "everyone"|user|group[,...] perm|@setname[,...] 
         filesystem|volume
    
    zfs allow [-ld] -e perm|@setname[,...] filesystem|volume
    
    zfs allow -c perm|@setname[,...] filesystem|volume
    
    zfs allow -s @setname perm|@setname[,...] filesystem|volume
    
    zfs unallow [-rldug] "everyone"|user|group[,...] [perm|@setname[,... ]] 
         filesystem|volume
    
    zfs unallow [-rld] -e [perm|@setname[,... ]] filesystem|volume
    
    zfs unallow [-r] -c [perm|@setname[ ... ]] filesystem|volume
    
    zfs unallow [-r] -s @setname [perm|@setname[,... ]] filesystem|volume
    

Description

    The zfs command configures ZFS datasets within a ZFS storage pool, as described in zpool(1M). A dataset is identified by a unique path within the ZFS namespace. For example:


    pool/{filesystem,volume,snapshot}

    where the maximum length of a dataset name is MAXNAMELEN (256 bytes).

    A dataset can be one of the following:

    file system

    A ZFS dataset of type filesystem can be mounted within the standard system namespace and behaves like other file systems. While ZFS file systems are designed to be POSIX compliant, known issues exist that prevent compliance in some cases. Applications that depend on standards conformance might fail due to nonstandard behavior when checking file system free space.

    volume

    A logical volume exported as a raw or block device. This type of dataset should only be used under special circumstances. File systems are typically used in most environments.

    snapshot

    A read-only version of a file system or volume at a given point in time. It is specified as filesystem@name or volume@name.

    ZFS File System Hierarchy

      A ZFS storage pool is a logical collection of devices that provide space for datasets. A storage pool is also the root of the ZFS file system hierarchy.

      The root of the pool can be accessed as a file system, such as mounting and unmounting, taking snapshots, and setting properties. The physical storage characteristics, however, are managed by the zpool(1M) command.

      See zpool(1M) for more information on creating and administering pools.

    Snapshots

      A snapshot is a read-only copy of a file system or volume. Snapshots can be created extremely quickly, and initially consume no additional space within the pool. As data within the active dataset changes, the snapshot consumes more data than would otherwise be shared with the active dataset.

      Snapshots can have arbitrary names. Snapshots of volumes can be cloned or rolled back, but cannot be accessed independently.

      File system snapshots can be accessed under the .zfs/snapshot directory in the root of the file system. Snapshots are automatically mounted on demand and may be unmounted at regular intervals. The visibility of the .zfs directory can be controlled by the snapdir property.

    Clones

      A clone is a writable volume or file system whose initial contents are the same as another dataset. As with snapshots, creating a clone is nearly instantaneous, and initially consumes no additional space.

      Clones can only be created from a snapshot. When a snapshot is cloned, it creates an implicit dependency between the parent and child. Even though the clone is created somewhere else in the dataset hierarchy, the original snapshot cannot be destroyed as long as a clone exists. The origin property exposes this dependency, and the destroy command lists any such dependencies, if they exist.

      The clone parent-child dependency relationship can be reversed by using the promote subcommand. This causes the “origin” file system to become a clone of the specified file system, which makes it possible to destroy the file system that the clone was created from.

    Mount Points

      Creating a ZFS file system is a simple operation, so the number of file systems per system is likely to be numerous. To cope with this, ZFS automatically manages mounting and unmounting file systems without the need to edit the /etc/vfstab file. All automatically managed file systems are mounted by ZFS at boot time.

      By default, file systems are mounted under /path, where path is the name of the file system in the ZFS namespace. Directories are created and destroyed as needed.

      A file system can also have a mount point set in the mountpoint property. This directory is created as needed, and ZFS automatically mounts the file system when the zfs mount -a command is invoked (without editing /etc/vfstab). The mountpoint property can be inherited, so if pool/home has a mount point of /export/stuff, then pool/home/user automatically inherits a mount point of /export/stuff/user.
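
      A minimal sketch of this inheritance, reusing the pool/home example above:


      example# zfs set mountpoint=/export/stuff pool/home   # pool/home as in the example above
      example# zfs get -r mountpoint pool/home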

      A file system mountpoint property of none prevents the file system from being mounted.

      If needed, ZFS file systems can also be managed with traditional tools (mount, umount, /etc/vfstab). If a file system's mount point is set to legacy, ZFS makes no attempt to manage the file system, and the administrator is responsible for mounting and unmounting the file system.

    Zones

      A ZFS file system can be added to a non-global zone by using the zonecfg add fs subcommand. A ZFS file system that is added to a non-global zone must have its mountpoint property set to legacy.

      The physical properties of an added file system are controlled by the global administrator. However, the zone administrator can create, modify, or destroy files within the added file system, depending on how the file system is mounted.

      A dataset can also be delegated to a non-global zone by using the zonecfg add dataset subcommand. You cannot delegate a dataset to one zone and the children of the same dataset to another zone. The zone administrator can change properties of the dataset or any of its children. However, the “quota” property is controlled by the global administrator.

      A ZFS volume can be added as a device to a non-global zone by using the zonecfg add device subcommand. However, its physical properties can only be modified by the global administrator.

      For more information about zonecfg syntax, see zonecfg(1M).
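
      For example, delegating a dataset to a zone might look like the following; the zone name myzone and the dataset tank/zonedata are only illustrative:


      example# zonecfg -z myzone   # myzone and tank/zonedata are illustrative names
      zonecfg:myzone> add dataset
      zonecfg:myzone:dataset> set name=tank/zonedata
      zonecfg:myzone:dataset> end
      zonecfg:myzone> commit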

      After a dataset is delegated to a non-global zone, the zoned property is automatically set. A zoned file system cannot be mounted in the global zone, since the zone administrator might have to set the mount point to an unacceptable value.

      The global administrator can forcibly clear the zoned property, though this should be done with extreme care. The global administrator should verify that all the mount points are acceptable before clearing the property.

    Native Properties

      Properties are divided into two types, native properties and user defined (or “user”) properties. Native properties either export internal statistics or control ZFS behavior. In addition, native properties are either editable or read-only. User properties have no effect on ZFS behavior, but you can use them to annotate datasets in a way that is meaningful in your environment. For more information about user properties, see the “User Properties” section.

      Every dataset has a set of properties that export statistics about the dataset as well as control various behavior. Properties are inherited from the parent unless overridden by the child. Some properties apply only to certain types of datasets (file systems, volumes, or snapshots).

      The values of numeric properties can be specified using the following human-readable suffixes (for example, k, KB, M, GB, and so forth, up to z for zettabyte). The following are all valid (and equal) specifications:


      1536M, 1.5g, 1.50GB

      The values of non-numeric properties are case sensitive and must be lowercase, except for mountpoint, sharenfs and sharesmb.

      The following native properties consist of read-only statistics about the dataset. These properties cannot be set, nor are they inherited. Native properties apply to all dataset types unless otherwise noted.

      available

      The amount of space available to the dataset and all its children, assuming that there is no other activity in the pool. Because space is shared within a pool, availability can be limited by any number of factors, including physical pool size, quotas, reservations, or other datasets within the pool.

      This property can also be referred to by its shortened column name, avail.

      compressratio

      The compression ratio achieved for this dataset, expressed as a multiplier. Compression can be turned on by running zfs set compression=on dataset. The default value is off.

      creation

      The time this dataset was created.

      mounted

      For file systems, indicates whether the file system is currently mounted. This property can be either yes or no.

      origin

      For cloned file systems or volumes, the snapshot from which the clone was created. The origin cannot be destroyed (even with the -r or -f options) so long as a clone exists.

      referenced

      The amount of data that is accessible by this dataset, which may or may not be shared with other datasets in the pool. When a snapshot or clone is created, it initially references the same amount of space as the file system or snapshot it was created from, since its contents are identical.

      This property can also be referred to by its shortened column name, refer.

      type

      The type of dataset: filesystem, volume, snapshot, or clone.

      used

      The amount of space consumed by this dataset and all its descendents. This is the value that is checked against this dataset's quota and reservation. The space used does not include this dataset's reservation, but does take into account the reservations of any descendent datasets. The amount of space that a dataset consumes from its parent, as well as the amount of space that would be freed if this dataset is recursively destroyed, is the greater of its space used and its reservation.

      When snapshots (see the “Snapshots” section) are created, their space is initially shared between the snapshot and the file system, and possibly with previous snapshots. As the file system changes, space that was previously shared becomes unique to the snapshot, and counted in the snapshot's space used. Additionally, deleting snapshots can increase the amount of space unique to (and used by) other snapshots.

      The amount of space used, available, or referenced does not take into account pending changes. Pending changes are generally accounted for within a few seconds. Committing a change to a disk using fsync(3C) or O_SYNC does not necessarily guarantee that the space usage information is updated immediately.

      usedby*

      The usedby* properties decompose the used property into the various reasons that space is used. Specifically, used = usedbychildren + usedbydataset + usedbyrefreservation + usedbysnapshots. These properties are only available for datasets created on zpool “version 13” pools.

      usedbychildren

      The amount of space used by children of this dataset, which would be freed if all the dataset's children were destroyed.

      usedbydataset

      The amount of space used by this dataset itself, which would be freed if the dataset were destroyed (after first removing any refreservation and destroying any necessary snapshots or descendents).

      usedbyrefreservation

      The amount of space used by a refreservation set on this dataset, which would be freed if the refreservation was removed.

      usedbysnapshots

      The amount of space consumed by snapshots of this dataset. In particular, it is the amount of space that would be freed if all of this dataset's snapshots were destroyed. Note that this is not simply the sum of the snapshots' used properties because space can be shared by multiple snapshots.

      userused@user

      The amount of space consumed by the specified user in this dataset. Space is charged to the owner of each file, as displayed by ls -l. The amount of space charged is displayed by du and ls -s. See the zfs userspace subcommand for more information.

      Unprivileged users can access only their own space usage. The root user, or a user who has been granted the userused privilege with zfs allow, can access everyone's usage.

      The userused@... properties are not displayed by zfs get all. The user's name must be appended after the @ symbol, using one of the following forms:

      • POSIX name (for example, joe)

      • POSIX numeric ID (for example, 789)

      • SID name (for example, joe.smith@mydomain)

      • SID numeric ID (for example, S-1-123-456-789)

      groupused@group

      The amount of space consumed by the specified group in this dataset. Space is charged to the group of each file, as displayed by ls -l. See the userused@user property for more information.

      Unprivileged users can only access their own groups' space usage. The root user, or a user who has been granted the groupused privilege with zfs allow, can access all groups' usage.
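
      For example, these properties can be queried with zfs get; the dataset tank/home, user joe, and group staff are only illustrative:


      example# zfs get userused@joe tank/home   # illustrative names
      example# zfs get groupused@staff tank/home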

      volblocksize=blocksize

      For volumes, specifies the block size of the volume. The blocksize cannot be changed once the volume has been written, so it should be set at volume creation time. The default blocksize for volumes is 8 Kbytes. Any power of 2 from 512 bytes to 128 Kbytes is valid.

      This property can also be referred to by its shortened column name, volblock.

      The following native properties can be used to change the behavior of a ZFS dataset.

      aclinherit=discard | noallow | restricted | passthrough | passthrough-x

      Controls how ACL entries are inherited when files and directories are created. A file system with an aclinherit property of discard does not inherit any ACL entries. A file system with an aclinherit property value of noallow only inherits inheritable ACL entries that specify “deny” permissions. The property value restricted (the default) removes the write_acl and write_owner permissions when the ACL entry is inherited. A file system with an aclinherit property value of passthrough inherits all inheritable ACL entries without any modifications made to the ACL entries when they are inherited. A file system with an aclinherit property value of passthrough-x has the same meaning as passthrough, except that the owner@, group@, and everyone@ ACEs inherit the execute permission only if the file creation mode also requests the execute bit.

      When the property value is set to passthrough, files are created with a mode determined by the inheritable ACEs. If no inheritable ACEs exist that affect the mode, then the mode is set in accordance with the requested mode from the application.

      aclmode=discard | groupmask | passthrough

      Controls how an ACL is modified during chmod(2). A file system with an aclmode property of discard deletes all ACL entries that do not represent the mode of the file. An aclmode property of groupmask (the default) reduces user or group permissions. The permissions are reduced, such that they are no greater than the group permission bits, unless it is a user entry that has the same UID as the owner of the file or directory. In this case, the ACL permissions are reduced so that they are no greater than owner permission bits. A file system with an aclmode property of passthrough indicates that no changes are made to the ACL other than generating the necessary ACL entries to represent the new mode of the file or directory.

      atime=on | off

      Controls whether the access time for files is updated when they are read. Turning this property off avoids producing write traffic when reading files and can result in significant performance gains, though it might confuse mailers and other similar utilities. The default value is on.

      canmount=on | off | noauto

      If this property is set to off, the file system cannot be mounted, and is ignored by zfs mount -a. Setting this property to off is similar to setting the mountpoint property to none, except that the dataset still has a normal mountpoint property, which can be inherited. Setting this property to off allows datasets to be used solely as a mechanism to inherit properties. One example of setting canmount=off is to have two datasets with the same mountpoint, so that the children of both datasets appear in the same directory, but might have different inherited characteristics.

      When the noauto option is set, a dataset can only be mounted and unmounted explicitly. The dataset is not mounted automatically when the dataset is created or imported, nor is it mounted by the zfs mount -a command or unmounted by the zfs unmount -a command.

      This property is not inherited.
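
      A minimal sketch of using canmount=off purely as a mechanism for property inheritance; the dataset names are only illustrative:


      example# zfs create -o canmount=off -o mountpoint=/export/apps tank/apps   # illustrative names
      example# zfs create tank/apps/tool1

      Here tank/apps is never mounted itself, but tank/apps/tool1 inherits the mount point /export/apps/tool1.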

      checksum=on | off | fletcher2 | fletcher4 | sha256

      Controls the checksum used to verify data integrity. The default value is on, which automatically selects an appropriate algorithm (currently, fletcher2, but this may change in future releases). The value off disables integrity checking on user data. Disabling checksums is NOT a recommended practice.

      compression=on | off | lzjb | gzip | gzip-N

      Controls the compression algorithm used for this dataset. The lzjb compression algorithm is optimized for performance while providing decent data compression. Setting compression to on uses the lzjb compression algorithm. The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)).

      This property can also be referred to by its shortened column name compress.

      copies=1 | 2 | 3

      Controls the number of copies of data stored for this dataset. These copies are in addition to any redundancy provided by the pool, for example, mirroring or RAID-Z. The copies are stored on different disks, if possible. The space used by multiple copies is charged to the associated file and dataset, changing the used property and counting against quotas and reservations.

      Changing this property only affects newly-written data. Therefore, set this property at file system creation time by using the -o copies=N option.

      devices=on | off

      Controls whether device nodes can be opened on this file system. The default value is on.

      exec=on | off

      Controls whether processes can be executed from within this file system. The default value is on.

      mountpoint=path | none | legacy

      Controls the mount point used for this file system. See the “Mount Points” section for more information on how this property is used.

      When the mountpoint property is changed for a file system, the file system and any children that inherit the mount point are unmounted. If the new value is legacy, then they remain unmounted. Otherwise, they are automatically remounted in the new location if the property was previously legacy or none, or if they were mounted before the property was changed. In addition, any shared file systems are unshared and shared in the new location.

      nbmand=on | off

      Controls whether the file system should be mounted with nbmand (Non-Blocking mandatory locks). This is used for CIFS clients. Changes to this property only take effect when the file system is unmounted and remounted. See mount(1M) for more information on nbmand mounts.

      primarycache=all | none | metadata

      Controls what is cached in the primary cache (ARC). If this property is set to all, then both user data and metadata is cached. If this property is set to none, then neither user data nor metadata is cached. If this property is set to metadata, then only metadata is cached. The default value is all.

      quota=size | none

      Limits the amount of space a dataset and its descendents can consume. This property enforces a hard limit on the amount of space used. This includes all space consumed by descendents, including file systems and snapshots. Setting a quota on a descendent of a dataset that already has a quota does not override the ancestor's quota, but rather imposes an additional limit.

      Quotas cannot be set on volumes, as the volsize property acts as an implicit quota.
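
      For example (the dataset name tank/home/joe is only illustrative):


      example# zfs set quota=10G tank/home/joe   # illustrative dataset name
      example# zfs get quota tank/home/joe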

      userquota@user=size | none

      Limits the amount of space consumed by the specified user. User space consumption is identified by the userused@user property.

      Enforcement of user quotas may be delayed by several seconds. This delay means that a user might exceed their quota before the system notices that they are over quota and begins to refuse additional writes with the EDQUOT error message. See the zfs userspace subcommand for more information.

      Unprivileged users can only access their own quota. The root user, or a user who has been granted the userquota privilege with zfs allow, can get and set everyone's quota.

      This property is not available on volumes, on file systems before version 4, or on pools before version 15. The userquota@... properties are not displayed by zfs get all. The user's name must be appended after the @ symbol, using one of the following forms:

      • POSIX name (for example, joe)

      • POSIX numeric ID (for example, 789)

      • SID name (for example, joe.smith@mydomain)

      • SID numeric ID (for example, S-1-123-456-789)

      groupquota@group=size | none

      Limits the amount of space consumed by the specified group. Group space consumption is identified by the groupused@group property.

      Unprivileged users can access only their own groups' space usage. The root user, or a user who has been granted the groupquota privilege with zfs allow, can get and set all groups' quotas.
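
      For example, both kinds of quota are set with zfs set; the dataset, user, and group names are only illustrative:


      example# zfs set userquota@joe=5G tank/home   # illustrative names
      example# zfs set groupquota@staff=20G tank/home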

      readonly=on | off

      Controls whether this dataset can be modified. The default value is off.

      This property can also be referred to by its shortened column name, rdonly.

      recordsize=size

      Specifies a suggested block size for files in the file system. This property is designed solely for use with database workloads that access files in fixed-size records. ZFS automatically tunes block sizes according to internal algorithms optimized for typical access patterns.

      For databases that create very large files but access them in small random chunks, these algorithms may be suboptimal. Specifying a recordsize greater than or equal to the record size of the database can result in significant performance gains. Use of this property for general purpose file systems is strongly discouraged, and may adversely affect performance.

      The size specified must be a power of two greater than or equal to 512 and less than or equal to 128 Kbytes.

      Changing the file system's recordsize only affects files created afterward; existing files are unaffected.

      This property can also be referred to by its shortened column name, recsize.
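
      For example, a file system for a database that uses 8-Kbyte records could be created as follows; the dataset name tank/db is only illustrative:


      example# zfs create -o recordsize=8K tank/db   # illustrative dataset name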

      refquota=size | none

      Limits the amount of space a dataset can consume. This property enforces a hard limit on the amount of space used. This hard limit does not include space used by descendents, including file systems and snapshots.

      refreservation=size | none

      The minimum amount of space guaranteed to a dataset, not including its descendents. When the amount of space used is below this value, the dataset is treated as if it were taking up the amount of space specified by refreservation. The refreservation reservation is accounted for in the parent datasets' space used, and counts against the parent datasets' quotas and reservations.

      If refreservation is set, a snapshot is only allowed if there is enough free pool space outside of this reservation to accommodate the current number of “referenced” bytes in the dataset.

      This property can also be referred to by its shortened column name, refreserv.

      reservation=size | none

      The minimum amount of space guaranteed to a dataset and its descendents. When the amount of space used is below this value, the dataset is treated as if it were taking up the amount of space specified by its reservation. Reservations are accounted for in the parent datasets' space used, and count against the parent datasets' quotas and reservations.

      This property can also be referred to by its shortened column name, reserv.

      secondarycache=all | none | metadata

      Controls what is cached in the secondary cache (L2ARC). If this property is set to all, then both user data and metadata is cached. If this property is set to none, then neither user data nor metadata is cached. If this property is set to metadata, then only metadata is cached. The default value is all.

      setuid=on | off

      Controls whether the set-UID bit is respected for the file system. The default value is on.

      shareiscsi=on | off

      Like the sharenfs property, shareiscsi indicates whether a ZFS volume is exported as an iSCSI target. The acceptable values for this property are on, off, and type=disk. The default value is off. In the future, other target types (for example, tape) might be supported.

      You might want to set shareiscsi=on for a file system so that all ZFS volumes within the file system are shared by default. Setting this property on a file system has no direct effect, however.

      sharesmb=on | off | opts

      Controls whether the file system is shared by using the Solaris CIFS service, and what options are to be used. A file system with the sharesmb property set to off is managed through traditional UNIX tools such as share(1M). Otherwise, the file system is automatically shared and unshared with the zfs share and zfs unshare commands. If the property is set to on, the share(1M) command is invoked with no options. Otherwise, the share(1M) command is invoked with options equivalent to the contents of this property.

      Because SMB shares require a resource name, a unique resource name is constructed from the dataset name. The constructed name is a copy of the dataset name except that the characters in the dataset name that would be illegal in the resource name are replaced with underscore (_) characters. A pseudo property “name” is also supported, which allows you to replace the dataset name with a specified name. The specified name is then used to replace the prefix dataset in the case of inheritance. For example, if the dataset data/home/john is set to name=john, then data/home/john has a resource name of john, and a child dataset data/home/john/backups has a resource name of john_backups.

      When SMB shares are created, the SMB share name appears as an entry in the .zfs/shares directory. You can use the ls or chmod command to display the share-level ACLs on the entries in this directory.

      When the sharesmb property is changed for a dataset, the dataset and any children inheriting the property are re-shared with the new options, only if the property was previously set to off, or if they were shared before the property was changed. If the new property is set to off, the file systems are unshared.

      sharenfs=on | off | opts

      Controls whether the file system is shared via NFS, and what options are used. A file system with a sharenfs property of off is managed through traditional tools such as share(1M), unshare(1M), and dfstab(4). Otherwise, the file system is automatically shared and unshared with the zfs share and zfs unshare commands. If the property is set to on, the share(1M) command is invoked with no options. Otherwise, the share(1M) command is invoked with options equivalent to the contents of this property.

      When the sharenfs property is changed for a dataset, the dataset and any children inheriting the property are re-shared with the new options, only if the property was previously off, or if they were shared before the property was changed. If the new property is off, the file systems are unshared.

      snapdir=hidden | visible

      Controls whether the .zfs directory is hidden or visible in the root of the file system as discussed in the “Snapshots” section. The default value is hidden.

      version=1|2|current

      The on-disk version of this file system, which is independent of the pool version. This property can only be set to later supported versions. See zfs upgrade.

      volsize=size

      For volumes, specifies the logical size of the volume. By default, creating a volume establishes a reservation of equal size. For storage pools with a version number of 9 or higher, a refreservation is set instead. Any changes to volsize are reflected in an equivalent change to the reservation (or refreservation). The volsize can only be set to a multiple of volblocksize, and cannot be zero.

      The reservation is kept equal to the volume's logical size to prevent unexpected behavior for consumers. Without the reservation, the volume could run out of space, resulting in undefined behavior or data corruption, depending on how the volume is used. These effects can also occur when the volume size is changed while it is in use (particularly when shrinking the size). Extreme care should be used when adjusting the volume size.

      Though not recommended, a “sparse volume” (also known as “thin provisioning”) can be created by specifying the -s option to the zfs create -V command, or by changing the reservation after the volume has been created. A “sparse volume” is a volume where the reservation is less than the volume size. Consequently, writes to a sparse volume can fail with ENOSPC when the pool is low on space. For a sparse volume, changes to volsize are not reflected in the reservation.
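
      For example, an illustrative sparse volume could be created and inspected as follows:


      example# zfs create -s -V 100G tank/thinvol   # tank/thinvol is an illustrative name
      example# zfs get volsize,refreservation tank/thinvol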

      vscan=on|off

      Controls whether regular files should be scanned for viruses when a file is opened and closed. In addition to enabling this property, the virus scan service must also be enabled for virus scanning to occur. The default value is off.

      xattr=on | off

      Controls whether extended attributes are enabled for this file system. The default value is on.

      zoned=on | off

      Controls whether the dataset is managed from a non-global zone. See the “Zones” section for more information. The default value is off.

      The following three properties cannot be changed after the file system is created, and therefore, should be set when the file system is created. If the properties are not set with the zfs create or zpool create commands, these properties are inherited from the parent dataset. If the parent dataset lacks these properties due to having been created prior to these features being supported, the new file system will have the default values for these properties.

      casesensitivity=sensitive | insensitive | mixed

      Indicates whether the file name matching algorithm used by the file system should be case-sensitive, case-insensitive, or allow a combination of both styles of matching. The default value for the casesensitivity property is sensitive. Traditionally, UNIX and POSIX file systems have case-sensitive file names.

      The mixed value for the casesensitivity property indicates that the file system can support requests for both case-sensitive and case-insensitive matching behavior. Currently, case-insensitive matching behavior on a file system that supports mixed behavior is limited to the Solaris CIFS server product. For more information about the mixed value behavior, see the ZFS Administration Guide.

      normalization=none | formD | formKCf

      Indicates whether the file system should perform a unicode normalization of file names whenever two file names are compared, and which normalization algorithm should be used. File names are always stored unmodified; names are normalized as part of any comparison process. If this property is set to a legal value other than none, and the utf8only property was left unspecified, the utf8only property is automatically set to on. The default value of the normalization property is none. This property cannot be changed after the file system is created.

      utf8only=on | off

      Indicates whether the file system should reject file names that include characters that are not present in the UTF-8 character code set. If this property is explicitly set to off, the normalization property must either not be explicitly set or be set to none. The default value for the utf8only property is off. This property cannot be changed after the file system is created.

      The casesensitivity, normalization, and utf8only properties are also new permissions that can be assigned to non-privileged users by using the ZFS delegated administration feature.

    Temporary Mount Point Properties

      When a file system is mounted, either through mount(1M) for legacy mounts or the zfs mount command for normal file systems, its mount options are set according to its properties. The correlation between properties and mount options is as follows:


          PROPERTY                MOUNT OPTION
          devices                 devices/nodevices
          exec                    exec/noexec
          readonly                ro/rw
          setuid                  setuid/nosetuid
          xattr                   xattr/noxattr

      In addition, these options can be set on a per-mount basis using the -o option, without affecting the property that is stored on disk. The values specified on the command line override the values stored in the dataset. The -nosuid option is an alias for nodevices,nosetuid. These properties are reported as temporary by the zfs get command. If the properties are changed while the dataset is mounted, the new setting overrides any temporary settings.
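
      For example, to mount a file system read-only for one session without changing its stored readonly property (the dataset name is only illustrative):


      example# zfs mount -o ro tank/fs   # illustrative dataset name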

    User Properties

      In addition to the standard native properties, ZFS supports arbitrary user properties. User properties have no effect on ZFS behavior, but applications or administrators can use them to annotate datasets (file systems, volumes, and snapshots).

      User property names must contain a colon (:) character, to distinguish them from native properties. They might contain lowercase letters, numbers, and the following punctuation characters: colon (:), dash (-), period (.), and underscore (_). The expected convention is that the property name is divided into two portions such as module:property, but this namespace is not enforced by ZFS. User property names can be at most 256 characters, and cannot begin with a dash (-).

      When making programmatic use of user properties, it is strongly suggested to use a reversed DNS domain name for the module component of property names to reduce the chance that two independently-developed packages use the same property name for different purposes. Property names beginning with com.sun. are reserved for use by Sun Microsystems.

      The values of user properties are arbitrary strings, are always inherited, and are never validated. All of the commands that operate on properties (zfs list, zfs get, zfs set, and so forth) can be used to manipulate both native properties and user properties. Use the zfs inherit command to clear a user property. If the property is not defined in any parent dataset, it is removed entirely. Property values are limited to 1024 characters.
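
      For example, following the reversed-DNS convention with an illustrative module name com.example:


      example# zfs set com.example:backup=weekly tank/home   # illustrative property and dataset
      example# zfs get com.example:backup tank/home
      example# zfs inherit com.example:backup tank/home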

    Volumes as Swap or Dump Devices

      During an initial installation or a live upgrade from a UFS file system, a swap device and dump device are created on ZFS volumes in the ZFS root pool. By default, the swap area size is based on 1/2 the size of physical memory up to 2 Gbytes. The size of the dump device depends on the kernel's requirements at installation time. Separate ZFS volumes must be used for the swap area and dump devices. Do not swap to a file on a ZFS file system. A ZFS swap file configuration is not supported.

      If you need to change your swap area or dump device after the system is installed or upgraded, use the swap(1M) and dumpadm(1M) commands. If you need to change the size of your swap area or dump device, see the Solaris ZFS Administration Guide.
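
      For example, adding a second swap volume might look like the following; the volume name and size are only illustrative:


      example# zfs create -V 2G rpool/swap2   # illustrative volume name and size
      example# swap -a /dev/zvol/dsk/rpool/swap2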

SUBCOMMANDS

    All subcommands that modify state are logged persistently to the pool in their original form.

    zfs -?

    Displays a help message.

    zfs create [-p] [-o property=value] ... filesystem

    Creates a new ZFS file system. The file system is automatically mounted according to the mountpoint property inherited from the parent.

    -p

    Creates all the non-existing parent datasets. Datasets created in this manner are automatically mounted according to the mountpoint property inherited from their parent. Any property specified on the command line using the -o option is ignored. If the target filesystem already exists, the operation completes successfully.

    -o property=value

    Sets the specified property as if zfs set property=value was invoked at the same time the dataset was created. Any editable ZFS property can also be set at creation time. Multiple -o options can be specified. An error results if the same property is specified in multiple -o options.

    zfs create [-ps] [-b blocksize] [-o property=value] ... -V size volume

    Creates a volume of the given size. The volume is exported as a block device in /dev/zvol/{dsk,rdsk}/path, where path is the name of the volume in the ZFS namespace. The size represents the logical size as exported by the device. By default, a reservation of equal size is created.

    size is automatically rounded up to the nearest 128 Kbytes to ensure that the volume has an integral number of blocks regardless of blocksize.

    -p

    Creates all the non-existing parent datasets. Datasets created in this manner are automatically mounted according to the “mountpoint” property inherited from their parent. Any property specified on the command line using the -o option is ignored. If the target filesystem already exists, the operation completes successfully.

    -s

    Creates a sparse volume with no reservation. See volsize in the Native Properties section for more information about sparse volumes.

    -o property=value

    Sets the specified property as if zfs set property=value was invoked at the same time the dataset was created. Any editable ZFS property can also be set at creation time. Multiple -o options can be specified. An error results if the same property is specified in multiple -o options.

    -b blocksize

    Equivalent to -o volblocksize=blocksize. If this option is specified in conjunction with -o volblocksize, the resulting behavior is undefined.
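
    For example, the following sketch combines these options to create a sparse 10-Gbyte volume with an 8-Kbyte block size (the pool and volume names are illustrative):

    # zfs create -s -b 8k -V 10g pool/volumes/sparsevol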

    zfs destroy [-rRf] filesystem|volume|snapshot

    Destroys the given dataset. By default, the command unshares any file systems that are currently shared, unmounts any file systems that are currently mounted, and refuses to destroy a dataset that has active dependents (children, snapshots, clones).

    -r

    Recursively destroy all children. If a snapshot is specified, destroy all snapshots with this name in descendent file systems.

    -R

    Recursively destroy all dependents, including cloned file systems outside the target hierarchy. If a snapshot is specified, destroy all snapshots with this name in descendent file systems.

    -f

    Force an unmount of any file systems using the zfs unmount -f command. This option has no effect on non-file systems or unmounted file systems.

    Extreme care should be taken when applying either the -r or the -f options, as they can destroy large portions of a pool and cause unexpected behavior for mounted file systems in use.

    zfs snapshot [-r] [-o property=value] ... filesystem@snapname|volume@snapname

    Creates a snapshot with the given name. All previous modifications by successful system calls to the file system are part of the snapshot. See the “Snapshots” section for details.

    -r

    Recursively create snapshots of all descendent datasets. Snapshots are taken atomically, so that all recursive snapshots correspond to the same moment in time.

    -o property=value

    Sets the specified property; see zfs create for details.

    zfs rollback [-rRf] snapshot

    Roll back the given dataset to a previous snapshot. When a dataset is rolled back, all data that has changed since the snapshot is discarded, and the dataset reverts to the state at the time of the snapshot. By default, the command refuses to roll back to a snapshot other than the most recent one. In order to do so, all intermediate snapshots must be destroyed by specifying the -r option.

    -r

    Recursively destroy any snapshots more recent than the one specified.

    -R

    Recursively destroy any more recent snapshots, as well as any clones of those snapshots.

    -f

    Used with the -R option to force an unmount of any clone file systems that are to be destroyed.

    zfs clone [-p] [-o property=value] ... snapshot filesystem|volume

    Creates a clone of the given snapshot. See the “Clones” section for details. The target dataset can be located anywhere in the ZFS hierarchy, and is created as the same type as the original.

    -p

    Creates all the non-existing parent datasets. Datasets created in this manner are automatically mounted according to the mountpoint property inherited from their parent. If the target filesystem or volume already exists, the operation completes successfully.

    -o property=value

    Sets the specified property; see zfs create for details.

    zfs promote clone-filesystem

    Promotes a clone file system to no longer be dependent on its “origin” snapshot. This makes it possible to destroy the file system that the clone was created from. The clone parent-child dependency relationship is reversed, so that the origin file system becomes a clone of the specified file system.

    The snapshot that was cloned, and any snapshots previous to this snapshot, are now owned by the promoted clone. The space they use moves from the “origin” file system to the promoted clone, so enough space must be available to accommodate these snapshots. No new space is consumed by this operation, but the space accounting is adjusted. The promoted clone must not have any conflicting snapshot names of its own. The rename subcommand can be used to rename any conflicting snapshots.

    zfs rename filesystem|volume|snapshot filesystem|volume|snapshot
    zfs rename [-p] filesystem|volume filesystem|volume

    Renames the given dataset. The new target can be located anywhere in the ZFS hierarchy, with the exception of snapshots. Snapshots can only be renamed within the parent file system or volume. When renaming a snapshot, the parent file system of the snapshot does not need to be specified as part of the second argument. Renamed file systems can inherit new mount points, in which case they are unmounted and remounted at the new mount point.

    -p

    Creates all the non-existing parent datasets. Datasets created in this manner are automatically mounted according to the mountpoint property inherited from their parent.

    zfs rename -r snapshot snapshot

    Recursively rename the snapshots of all descendent datasets. Snapshots are the only dataset that can be renamed recursively.

    zfs list [-r|-d depth] [-H] [-o property[,...]] [-t type[,...]]
    [-s property] ... [-S property] ... [filesystem|volume|snapshot] ...

    Lists the property information for the given datasets in tabular form. If specified, you can list property information by the absolute pathname or the relative pathname. By default, all file systems and volumes are displayed. Snapshots are displayed if the listsnaps property is on (the default is off). The following fields are displayed: name, used, available, referenced, mountpoint.

    -H

    Used for scripting mode. Do not print headers and separate fields by a single tab instead of arbitrary white space.

    -r

    Recursively display any children of the dataset on the command line.

    -d depth

    Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.

    -o property

    A comma-separated list of properties to display. The property must be:

    • One of the properties described in the “Native Properties” section

    • A user property

    • The value name to display the dataset name

    • The value space to display space usage properties on file systems and volumes. This is a shortcut for specifying the -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t filesystem,volume syntax.


    -s property

    A property for sorting the output by column in ascending order based on the value of the property. The property must be one of the properties described in the “Properties” section, or the special value “name” to sort by the dataset name. Multiple properties can be specified at one time using multiple -s property options. Multiple -s options are evaluated from left to right in decreasing order of importance.

    The following is a list of sorting criteria:

    • Numeric types sort in numeric order.

    • String types sort in alphabetical order.

    • Types inappropriate for a row sort that row to the literal bottom, regardless of the specified ordering.

    • If no sorting options are specified, the existing behavior of zfs list is preserved.

    -S property

    Same as the -s option, but sorts by property in descending order.

    -t type

    A comma-separated list of types to display, where type is one of filesystem, snapshot, volume, or all. For example, specifying -t snapshot displays only snapshots.
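
    For example, the following sketch combines these options to list all snapshots under a hypothetical pool/home in scripting form, sorted by the used property:

    # zfs list -H -r -t snapshot -o name,used -s used pool/home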

    zfs set property=value filesystem|volume|snapshot ...

    Sets the property to the given value for each dataset. Only some properties can be edited. See the “Properties” section for more information on what properties can be set and acceptable values. Numeric values can be specified as exact values, or in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or zettabytes, respectively). User properties can be set on snapshots. For more information, see the “User Properties” section.

    zfs get [-r|-d depth] [-Hp] [-o field[,...]] [-s source[,...]] “all” | property[,...] filesystem|volume|snapshot ...

    Displays properties for the given datasets. If no datasets are specified, then the command displays properties for all datasets on the system. For each property, the following columns are displayed:


        name      Dataset name
        property  Property name
        value     Property value
        source    Property source. Can either be local, default,
                  temporary, inherited, or none (-).

    All columns are displayed by default, though this can be controlled by using the -o option. This command takes a comma-separated list of properties as described in the “Native Properties” and “User Properties” sections.

    The special value all can be used to display all properties that apply to the given dataset's type (filesystem, volume, or snapshot).

    -r

    Recursively display properties for any children.

    -d depth

    Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.

    -H

    Display output in a form more easily parsed by scripts. Any headers are omitted, and fields are explicitly separated by a single tab instead of an arbitrary amount of space.

    -o field

    A comma-separated list of columns to display. name,property,value,source is the default value.

    -s source

    A comma-separated list of sources to display. Those properties coming from a source other than those in this list are ignored. Each source must be one of the following: local,default,inherited,temporary,none. The default value is all sources.

    -p

    Display numbers in parsable (exact) values.
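
    For example, the following sketch combines -H and -p to retrieve a single exact value for use in a script (the dataset name is illustrative):

    # zfs get -Hp -o value used pool/home/bob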

    zfs inherit [-r] property filesystem|volume|snapshot ...

    Clears the specified property, causing it to be inherited from an ancestor. If no ancestor has the property set, then the default value is used. See the “Properties” section for a listing of default values, and details on which properties can be inherited.

    -r

    Recursively inherit the given property for all children.

    zfs upgrade [-v]

    Displays a list of file systems that are not the most recent version.

    zfs upgrade [-r] [-V version] [-a | filesystem]

    Upgrades file systems to a new on-disk version. Once this is done, the file systems will no longer be accessible on systems running older versions of the software. zfs send streams generated from new snapshots of these file systems cannot be accessed on systems running older versions of the software.

    The file system version is independent of the pool version (see zpool(1M) for information on the zpool upgrade command).

    The file system version does not have to be upgraded when the pool version is upgraded, and vice versa.

    -a

    Upgrade all file systems on all imported pools.

    filesystem

    Upgrade the specified file system.

    -r

    Upgrade the specified file system and all descendent file systems.

    -V version

    Upgrade to the specified version. If the -V flag is not specified, this command upgrades to the most recent version. This option can only be used to increase the version number, and only up to the most recent version supported by this software.
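
    For example, the following sketch upgrades a hypothetical file system pool/home and all of its descendents to version 3 rather than to the most recent version:

    # zfs upgrade -r -V 3 pool/home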

    zfs userspace [-niHp] [-o field[,...]] [-sS field]... [-t type[,...]] filesystem|snapshot

    Displays space consumed by, and quotas on, each user in the specified filesystem or snapshot. This corresponds to the userused@user and userquota@user properties.

    -n

    Print numeric ID instead of user/group name.

    -H

    Do not print headers; use tab-delimited output.

    -p

    Use exact (parseable) numeric output.

    -o field[,...]

    Display only the specified fields from the following set: type, name, used, quota. The default is to display all fields.

    -s field

    Sort output by this field. The -s and -S flags may be specified multiple times to sort first by one field, then by another. The default is -s type -s name.

    -S field

    Sort by this field in reverse order. See -s.

    -t type[,...]

    Print only the specified types from the following set: all, posixuser, smbuser, posixgroup, smbgroup. The default is -t posixuser,smbuser; the default can be changed to include group types.

    -i

    Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping exists. Normal POSIX interfaces (for example, stat(2), ls -l) perform this translation, so the -i option allows the output from zfs userspace to be compared directly with those utilities. However, -i may lead to confusion if some files were created by an SMB user before an SMB-to-POSIX name mapping was established. In such a case, some files are owned by the SMB entity and some by the POSIX entity, yet the -i option will report that the POSIX entity has the total usage and quota for both.
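
    For example, the following sketch displays per-user usage for a hypothetical tank/home, sorted by space consumed in descending order, with exact numeric output:

    # zfs userspace -p -S used tank/home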

    zfs groupspace [-niHp] [-o field[,...]] [-sS field]... [-t type[,...]] filesystem|snapshot

    Displays space consumed by, and quotas on, each group in the specified filesystem or snapshot. This subcommand is identical to zfs userspace, except that the default types to display are -t posixgroup,smbgroup.


    zfs mount

    Displays all ZFS file systems currently mounted.

    zfs mount [-vO] [-o options] -a | filesystem

    Mounts ZFS file systems. Invoked automatically as part of the boot process.

    -o options

    An optional comma-separated list of mount options to use temporarily for the duration of the mount. See the “Temporary Mount Point Properties” section for details.

    -O

    Perform an overlay mount. See mount(1M) for more information.

    -v

    Report mount progress.

    -a

    Mount all available ZFS file systems. Invoked automatically as part of the boot process.

    filesystem

    Mount the specified filesystem.

    zfs unmount [-f] -a | filesystem|mountpoint

    Unmounts currently mounted ZFS file systems. Invoked automatically as part of the shutdown process.

    -f

    Forcefully unmount the file system, even if it is currently in use.

    -a

    Unmount all available ZFS file systems. Invoked automatically as part of the shutdown process.

    filesystem|mountpoint

    Unmount the specified filesystem. The command can also be given a path to a ZFS file system mount point on the system.

    zfs share -a | filesystem

    Shares available ZFS file systems.

    -a

    Share all available ZFS file systems. Invoked automatically as part of the boot process.

    filesystem

    Share the specified filesystem according to the sharenfs and sharesmb properties. File systems are shared when the sharenfs or sharesmb property is set.

    zfs unshare -a | filesystem|mountpoint

    Unshares currently shared ZFS file systems. This is invoked automatically as part of the shutdown process.

    -a

    Unshare all available ZFS file systems. Invoked automatically as part of the shutdown process.

    filesystem|mountpoint

    Unshare the specified filesystem. The command can also be given a path to a ZFS file system shared on the system.

    zfs send [-vR] [-[iI] snapshot] snapshot

    Creates a stream representation of the second snapshot, which is written to standard output. The output can be redirected to a file or to a different system (for example, using ssh(1)). By default, a full stream is generated.

    -i snapshot

    Generate an incremental stream from the first snapshot to the second snapshot. The incremental source (the first snapshot) can be specified as the last component of the snapshot name (for example, the part after the @), and it is assumed to be from the same file system as the second snapshot.

    If the destination is a clone, the source may be the origin snapshot, which must be fully specified (for example, pool/fs@origin, not just @origin).

    -I snapshot

    Generate a stream package that sends all intermediary snapshots from the first snapshot to the second snapshot. For example, -I @a fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The incremental source snapshot may be specified as with the -i option.

    -R

    Generate a replication stream package, which will replicate the specified filesystem, and all descendent file systems, up to the named snapshot. When received, all properties, snapshots, descendent file systems, and clones are preserved.

    If the -i or -I flags are used in conjunction with the -R flag, an incremental replication stream is generated. The current values of properties, and current snapshot and file system names are set when the stream is received. If the -F flag is specified when this stream is received, snapshots and file systems that do not exist on the sending side are destroyed.

    -v

    Print verbose information about the stream package generated.

    The format of the stream is committed. You will be able to receive your streams on future versions of ZFS.
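
    As a usage sketch for the -I option described above, the following sends every intermediary snapshot between two hypothetical snapshots @a and @d to a remote host, assuming the destination file system already exists and holds snapshot @a:

    # zfs send -I pool/fs@a pool/fs@d | ssh host zfs receive poolB/fs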

    zfs receive [-vnFu] filesystem|volume|snapshot
    zfs receive [-vnFu] -d filesystem

    Creates a snapshot whose contents are as specified in the stream provided on standard input. If a full stream is received, then a new file system is created as well. Streams are created using the zfs send subcommand, which by default creates a full stream. zfs recv can be used as an alias for zfs receive.

    If an incremental stream is received, then the destination file system must already exist, and its most recent snapshot must match the incremental stream's source. For zvols, the destination device link is destroyed and recreated, which means the zvol cannot be accessed during the receive operation.

    The name of the snapshot (and file system, if a full stream is received) that this subcommand creates depends on the argument type and the -d option.

    If the argument is a snapshot name, the specified snapshot is created. If the argument is a file system or volume name, a snapshot with the same name as the sent snapshot is created within the specified filesystem or volume. If the -d option is specified, the snapshot name is determined by appending the sent snapshot's name to the specified filesystem. If the -d option is specified, any required file systems within the specified one are created.

    -d

    Use the name of the sent snapshot to determine the name of the new snapshot as described in the paragraph above.

    -u

    The file system that is associated with the received stream is not mounted.

    -v

    Print verbose information about the stream and the time required to perform the receive operation.

    -n

    Do not actually receive the stream. This can be useful in conjunction with the -v option to verify the name the receive operation would use.

    -F

    Force a rollback of the file system to the most recent snapshot before performing the receive operation. If receiving an incremental replication stream (for example, one generated by zfs send -R -[iI]), destroy snapshots and file systems that do not exist on the sending side.

    zfs allow [-ldug] “everyone”|user|group[,...] perm|@setname[,...] filesystem|volume
    zfs allow [-ld] -e perm|@setname[,...] filesystem|volume

    Delegates ZFS administration permission for the file systems to non-privileged users.

    [-ug] “everyone”|user|group[,...]

    Specifies to whom the permissions are delegated. Multiple entities can be specified as a comma-separated list. If neither of the -ug options are specified, then the argument is interpreted preferentially as the keyword “everyone”, then as a user name, and lastly as a group name. To specify a user or group named “everyone”, use the -u or -g option. To specify a group with the same name as a user, use the -g option.

    [-e] perm|@setname[,...]

    Specifies that the permissions be delegated to “everyone.” Multiple permissions may be specified as a comma-separated list. Permission names are the same as ZFS subcommand and property names. See the property list below. Property set names, which begin with an at sign (@), may be specified. See the -s form below for details.

    [-ld] filesystem|volume

    Specifies where the permissions are delegated. If neither of the -ld options are specified, or both are, then the permissions are allowed for the file system or volume, and all of its descendents. If only the -l option is used, then the permissions are allowed “locally” only for the specified file system. If only the -d option is used, then the permissions are allowed only for the descendent file systems.

    Permissions are generally the ability to use a ZFS subcommand or change a ZFS property. The following permissions are available:


    NAME             TYPE           NOTES
    allow            subcommand     Must also have the permission that is being
                                    allowed
    clone            subcommand     Must also have the 'create' ability and 'mount'
                                    ability in the origin file system
    create           subcommand     Must also have the 'mount' ability
    destroy          subcommand     Must also have the 'mount' ability
    mount            subcommand     Allows mount/umount of ZFS datasets
    promote          subcommand     Must also have the 'mount'
                                    and 'promote' ability in the origin file system
    receive          subcommand     Must also have the 'mount' and 'create' ability
    rename           subcommand     Must also have the 'mount' and 'create'
                                    ability in the new parent
    rollback         subcommand     Must also have the 'mount' ability
    send             subcommand     
    share            subcommand     Allows sharing file systems over NFS or SMB
                                    protocols
    snapshot         subcommand     Must also have the 'mount' ability
    groupquota       other          Allows accessing any groupquota@... property
    groupused        other          Allows reading any groupused@... property
    userprop         other          Allows changing any user property
    userquota        other          Allows accessing any userquota@... property
    userused         other          Allows reading any userused@... property
    
    aclinherit       property       
    aclmode          property       
    atime            property       
    canmount         property       
    casesensitivity  property       
    checksum         property       
    compression      property       
    copies           property       
    devices          property       
    exec             property       
    mountpoint       property       
    nbmand           property       
    normalization    property       
    primarycache     property       
    quota            property       
    readonly         property       
    recordsize       property       
    refquota         property       
    refreservation   property       
    reservation      property       
    secondarycache   property       
    setuid           property       
    shareiscsi       property       
    sharenfs         property       
    sharesmb         property       
    snapdir          property       
    utf8only         property       
    version          property       
    volblocksize     property       
    volsize          property       
    vscan            property       
    xattr            property       
    zoned            property

    zfs allow -c perm|@setname[,...] filesystem|volume

    Sets “create time” permissions. These permissions are granted (locally) to the creator of any newly-created descendent file system.

    zfs allow -s @setname perm|@setname[,...] filesystem|volume

    Defines or adds permissions to a permission set. The set can be used by other zfs allow commands for the specified file system and its descendents. Sets are evaluated dynamically, so changes to a set are immediately reflected. Permission sets follow the same naming restrictions as ZFS file systems, but the name must begin with an “at sign” (@), and can be no more than 64 characters long.

    zfs unallow [-rldug] “everyone”|user|group[,...] [perm|@setname[, ...]] filesystem|volume
    zfs unallow [-rld] -e [perm|@setname [,...]] filesystem|volume
    zfs unallow [-r] -c [perm|@setname[,...]]
    filesystem|volume

    Removes permissions that were granted with the zfs allow command. No permissions are explicitly denied, so other permissions granted are still in effect (for example, if the permission is granted by an ancestor). If no permissions are specified, then all permissions for the specified user, group, or everyone are removed. Specifying “everyone” (or using the -e option) only removes the permissions that were granted to “everyone”, not all permissions for every user and group. See the zfs allow command for a description of the -ldugec options.

    -r

    Recursively remove the permissions from this file system and all descendents.

    zfs unallow [-r] -s @setname [perm|@setname[,...]]
    filesystem|volume

    Removes permissions from a permission set. If no permissions are specified, then all permissions are removed, thus removing the set entirely.

Examples


    Example 1 Creating a ZFS File System Hierarchy

    The following commands create a file system named pool/home and a file system named pool/home/bob. The mount point /export/home is set for the parent file system, and is automatically inherited by the child file system.


    # zfs create pool/home
    # zfs set mountpoint=/export/home pool/home
    # zfs create pool/home/bob
    


    Example 2 Creating a ZFS Snapshot

    The following command creates a snapshot named yesterday. This snapshot is mounted on demand in the .zfs/snapshot directory at the root of the pool/home/bob file system.


    # zfs snapshot pool/home/bob@yesterday
    


    Example 3 Creating and Destroying Multiple Snapshots

    The following command creates snapshots named yesterday of pool/home and all of its descendent file systems. Each snapshot is mounted on demand in the .zfs/snapshot directory at the root of its file system. The second command destroys the newly created snapshots.


    # zfs snapshot -r pool/home@yesterday
    # zfs destroy -r pool/home@yesterday
    


    Example 4 Disabling and Enabling File System Compression

    The following command disables the compression property for all file systems under pool/home. The next command explicitly enables compression for pool/home/anne.


    # zfs set compression=off pool/home
    # zfs set compression=on pool/home/anne
    


    Example 5 Listing ZFS Datasets

    The following command lists all active file systems and volumes in the system. Snapshots are displayed if the listsnaps property is on. The default is off. See zpool(1M) for more information on pool properties.


    # zfs list
       NAME                      USED  AVAIL  REFER  MOUNTPOINT
       pool                      450K   457G    18K  /pool
       pool/home                 315K   457G    21K  /export/home
       pool/home/anne             18K   457G    18K  /export/home/anne
       pool/home/bob             276K   457G   276K  /export/home/bob


    Example 6 Setting a Quota on a ZFS File System

    The following command sets a quota of 50 Gbytes for pool/home/bob.


    # zfs set quota=50G pool/home/bob
    


    Example 7 Listing ZFS Properties

    The following command lists all properties for pool/home/bob.


    # zfs get all pool/home/bob
    NAME           PROPERTY              VALUE                  SOURCE
    pool/home/bob  type                  filesystem             -
    pool/home/bob  creation              Tue Jul 21 15:53 2009  -
    pool/home/bob  used                  21K                    -
    pool/home/bob  available             20.0G                  -
    pool/home/bob  referenced            21K                    -
    pool/home/bob  compressratio         1.00x                  -
    pool/home/bob  mounted               yes                    -
    pool/home/bob  quota                 20G                    local
    pool/home/bob  reservation           none                   default
    pool/home/bob  recordsize            128K                   default
    pool/home/bob  mountpoint            /pool/home/bob         default
    pool/home/bob  sharenfs              off                    default
    pool/home/bob  checksum              on                     default
    pool/home/bob  compression           on                     local
    pool/home/bob  atime                 on                     default
    pool/home/bob  devices               on                     default
    pool/home/bob  exec                  on                     default
    pool/home/bob  setuid                on                     default
    pool/home/bob  readonly              off                    default
    pool/home/bob  zoned                 off                    default
    pool/home/bob  snapdir               hidden                 default
    pool/home/bob  aclmode               groupmask              default
    pool/home/bob  aclinherit            restricted             default
    pool/home/bob  canmount              on                     default
    pool/home/bob  shareiscsi            off                    default
    pool/home/bob  xattr                 on                     default
    pool/home/bob  copies                1                      default
    pool/home/bob  version               4                      -
    pool/home/bob  utf8only              off                    -
    pool/home/bob  normalization         none                   -
    pool/home/bob  casesensitivity       sensitive              -
    pool/home/bob  vscan                 off                    default
    pool/home/bob  nbmand                off                    default
    pool/home/bob  sharesmb              off                    default
    pool/home/bob  refquota              none                   default
    pool/home/bob  refreservation        none                   default
    pool/home/bob  primarycache          all                    default
    pool/home/bob  secondarycache        all                    default
    pool/home/bob  usedbysnapshots       0                      -
    pool/home/bob  usedbydataset         21K                    -
    pool/home/bob  usedbychildren        0                      -
    pool/home/bob  usedbyrefreservation  0                      -

    The following command gets a single property value.


    # zfs get -H -o value compression pool/home/bob
    on

    The following command lists all properties with local settings for pool/home/bob.


    # zfs get -r -s local -o name,property,value all pool/home/bob
    NAME           PROPERTY              VALUE
    pool/home/bob  quota                 20G
    pool/home/bob  compression           on


    Example 8 Rolling Back a ZFS File System

    The following command reverts the contents of pool/home/anne to the snapshot named yesterday, deleting all intermediate snapshots.


    # zfs rollback -r pool/home/anne@yesterday
    


    Example 9 Creating a ZFS Clone

    The following command creates a writable file system whose initial contents are the same as pool/home/bob@yesterday.


    # zfs clone pool/home/bob@yesterday pool/clone
    


    Example 10 Promoting a ZFS Clone

    The following commands illustrate how to test out changes to a file system, and then replace the original file system with the changed one, using clones, clone promotion, and renaming:


    # zfs create pool/project/production
      populate /pool/project/production with data
    # zfs snapshot pool/project/production@today
    # zfs clone pool/project/production@today pool/project/beta
      make changes to /pool/project/beta and test them
    # zfs promote pool/project/beta
    # zfs rename pool/project/production pool/project/legacy
    # zfs rename pool/project/beta pool/project/production
      once the legacy version is no longer needed, it can be
      destroyed
    # zfs destroy pool/project/legacy
    


    Example 11 Inheriting ZFS Properties

    The following command causes pool/home/bob and pool/home/anne to inherit the checksum property from their parent.


    # zfs inherit checksum pool/home/bob pool/home/anne
    


    Example 12 Remotely Replicating ZFS Data

    The following commands send a full stream and then an incremental stream to a remote machine, restoring them into poolB/received/fs@a and poolB/received/fs@b, respectively. poolB must contain the file system poolB/received, and must not initially contain poolB/received/fs.


    # zfs send pool/fs@a | \
       ssh host zfs receive poolB/received/fs@a
    # zfs send -i a pool/fs@b | ssh host \
       zfs receive poolB/received/fs
    


    Example 13 Using the zfs receive -d Option

    The following command sends a full stream of poolA/fsA/fsB@snap to a remote machine, receiving it into poolB/received/fsA/fsB@snap. The fsA/fsB@snap portion of the received snapshot's name is determined from the name of the sent snapshot. poolB must contain the file system poolB/received. If poolB/received/fsA does not exist, it is created as an empty file system.


    # zfs send poolA/fsA/fsB@snap | \
       ssh host zfs receive -d poolB/received
    


    Example 14 Setting User Properties

    The following example sets the user-defined com.example:department property for a dataset.


    # zfs set com.example:department=12345 tank/accounting
    


    Example 15 Creating a ZFS Volume as an iSCSI Target Device

    The following example shows how to create a ZFS volume as an iSCSI target.


    # zfs create -V 2g pool/volumes/vol1
    # zfs set shareiscsi=on pool/volumes/vol1
    # iscsitadm list target
    Target: pool/volumes/vol1
        iSCSI Name:
        iqn.1986-03.com.sun:02:7b4b02a6-3277-eb1b-e686-a24762c52a8c
        Connections: 0

    After the iSCSI target is created, set up the iSCSI initiator. For more information about the Solaris iSCSI initiator, see iscsitadm(1M).


    Example 16 Performing a Rolling Snapshot

    The following example shows how to maintain a history of snapshots with a consistent naming scheme. To keep a week's worth of snapshots, the user destroys the oldest snapshot, renames the remaining snapshots, and then creates a new snapshot, as follows:


    # zfs destroy -r pool/users@7daysago
    # zfs rename -r pool/users@6daysago @7daysago
    # zfs rename -r pool/users@5daysago @6daysago
    ...
    # zfs rename -r pool/users@yesterday @2daysago
    # zfs rename -r pool/users@today @yesterday
    # zfs snapshot -r pool/users@today
    


    Example 17 Setting sharenfs Property Options on a ZFS File System

    The following commands show how to set sharenfs property options to enable rw access for a set of IP addresses and to enable root access for system neo on the tank/home file system.


    # zfs set sharenfs='rw=@123.123.0.0/16,root=neo' tank/home
    

    If you are using DNS for host name resolution, specify the fully qualified hostname.



    Example 18 Delegating ZFS Administration Permissions on a ZFS Dataset

    The following example shows how to set permissions so that user cindys can create, destroy, mount and take snapshots on tank/cindys. The permissions on tank/cindys are also displayed.


    # zfs allow cindys create,destroy,mount,snapshot tank/cindys
    # zfs allow tank/cindys
    -------------------------------------------------------------
    Local+Descendent permissions on (tank/cindys)
              user cindys create,destroy,mount,snapshot
    -------------------------------------------------------------

    Because the tank/cindys mount point permission is set to 755 by default, user cindys will be unable to mount file systems under tank/cindys. Set an ACL similar to the following syntax to provide mount point access:


    # chmod A+user:cindys:add_subdirectory:allow /tank/cindys
    

    Example 19 Delegating Create Time Permissions on a ZFS Dataset

    The following example shows how to grant anyone in the group staff permission to create file systems in tank/users. This syntax also allows staff members to destroy their own file systems, but not to destroy anyone else's file system. The permissions on tank/users are also displayed.


    # zfs allow staff create,mount tank/users
    # zfs allow -c destroy tank/users
    # zfs allow tank/users
    -------------------------------------------------------------
    Create time permissions on (tank/users)
              create,destroy
    Local+Descendent permissions on (tank/users)
              group staff create,mount
    ------------------------------------------------------------- 


    Example 20 Defining and Granting a Permission Set on a ZFS Dataset

    The following example shows how to define and grant a permission set on the tank/users file system. The permissions on tank/users are also displayed.


    # zfs allow -s @pset create,destroy,snapshot,mount tank/users
    # zfs allow staff @pset tank/users
    # zfs allow tank/users
    -------------------------------------------------------------
    Permission sets on (tank/users)
            @pset create,destroy,mount,snapshot
    Create time permissions on (tank/users)
            create,destroy
    Local+Descendent permissions on (tank/users)
            group staff @pset,create,mount
    -------------------------------------------------------------


    Example 21 Delegating Property Permissions on a ZFS Dataset

    The following example shows how to grant the ability to set quotas and reservations on the users/home file system. The permissions on users/home are also displayed.


    # zfs allow cindys quota,reservation users/home
    # zfs allow users/home
    -------------------------------------------------------------
    Local+Descendent permissions on (users/home)
            user cindys quota,reservation
    -------------------------------------------------------------
    cindys% zfs set quota=10G users/home/marks
    cindys% zfs get quota users/home/marks
    NAME              PROPERTY  VALUE             SOURCE
    users/home/marks  quota     10G               local 


    Example 22 Removing ZFS Delegated Permissions on a ZFS Dataset

    The following example shows how to remove the snapshot permission from the staff group on the tank/users file system. The permissions on tank/users are also displayed.


    # zfs unallow staff snapshot tank/users
    # zfs allow tank/users
    -------------------------------------------------------------
    Permission sets on (tank/users)
            @pset create,destroy,mount,snapshot
    Create time permissions on (tank/users)
            create,destroy
    Local+Descendent permissions on (tank/users)
            group staff @pset,create,mount
    ------------------------------------------------------------- 

Exit Status

    The following exit values are returned:

    0

    Successful completion.

    1

    An error occurred.

    2

    Invalid command line options were specified.

Attributes

    See attributes(5) for descriptions of the following attributes:

    ATTRIBUTE TYPE

    ATTRIBUTE VALUE

    Availability

    SUNWzfsu

    Interface Stability

    Committed

See Also


2010-09-05 00:18:13

Name

    coreadm– core file administration

Synopsis

    coreadm [-g pattern] [-G content] [-i pattern] [-I content] 
         [-d option]... [-e option]...
    coreadm [-p pattern] [-P content] [pid]...
    coreadm -u
    

Description

    coreadm specifies the name and location of core files produced by abnormally-terminating processes. See core(4).

    Only users who have the sys_admin privilege can execute the first form of the SYNOPSIS. This form configures system-wide core file options, including a global core file name pattern and a core file name pattern for the init(1M) process. All settings are saved in coreadm's configuration file, /etc/coreadm.conf, so that they can be applied again at boot. See init(1M).

    Nonprivileged users can execute the second form of the SYNOPSIS. This form specifies the file name pattern and core file content that the operating system uses to generate a per-process core file.

    Only users who have the sys_admin privilege can execute the third form of the SYNOPSIS. This form updates all system-wide core file options, based on the contents of /etc/coreadm.conf. Normally, this option is used on reboot when starting svc:/system/coreadm:default.

    A core file name pattern is a normal file system path name with embedded variables, specified with a leading % character. The variables are expanded from values that are effective when a core file is generated by the operating system. The possible embedded variables are as follows:

    %d

    Executable file directory name, up to a maximum of MAXPATHLEN characters

    %f

    Executable file name, up to a maximum of MAXCOMLEN characters

    %g

    Effective group-ID

    %m

    Machine name (uname -m)

    %n

    System node name (uname -n)

    %p

    Process-ID

    %t

    Decimal value of time(2)

    %u

    Effective user-ID

    %z

    Name of the zone in which the process executed (zonename)

    %%

    Literal %

    For example, the core file name pattern /var/core/core.%f.%p would result, for command foo with process-ID 1234, in the core file name /var/core/core.foo.1234.

    A core file content description is specified using a series of tokens to identify parts of a process's binary image:

    anon

    Anonymous private mappings, including thread stacks that are not main thread stacks

    ctf

    CTF type information sections for loaded object files

    data

    Writable private file mappings

    dism

    DISM mappings

    heap

    Process heap

    ism

    ISM mappings

    rodata

    Read-only private file mappings

    shanon

    Anonymous shared mappings

    shfile

    Shared mappings that are backed by files

    shm

    System V shared memory

    stack

    Process stack

    symtab

    Symbol table sections for loaded object files

    text

    Readable and executable private file mappings

    In addition, you can use the token all to indicate that core files should include all of these parts of the process's binary image. You can use the token none to indicate that no mappings are to be included. The default token indicates inclusion of the system default content (stack+heap+shm+ism+dism+text+data+rodata+anon+shanon+ctf). The /proc file system data structures are always present in core files regardless of the mapping content.

    You can use + and - to concatenate tokens. For example, the core file content default-ism would produce a core file with the default set of mappings without any intimate shared memory mappings.
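
    For example, the following sketch sets the global core file content to the default set plus symbol table sections (the core file pattern shown is illustrative):

    example# coreadm -G default+symtab -g /var/core/core.%f.%p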

    The coreadm command with no arguments reports the current system configuration, for example:


    $ coreadm
        global core file pattern: /var/core/core.%f.%p
        global core file content: all
          init core file pattern: core
          init core file content: default
               global core dumps: enabled
          per-process core dumps: enabled
         global setid core dumps: enabled
    per-process setid core dumps: disabled
        global core dump logging: disabled

    The coreadm command with only a list of process-IDs reports each process's per-process core file name pattern, for example:


    $ coreadm 278 5678
      278:   core.%f.%p default
      5678:  /home/george/cores/%f.%p.%t all-ism

    Only the owner of a process or a user with the proc_owner privilege can interrogate a process in this manner.

    When a process is dumping core, up to three core files can be produced: one in the per-process location, one in the system-wide global location, and, if the process was running in a local (non-global) zone, one in the global location for the zone in which that process was running. Each core file is generated according to the effective options for the corresponding location.

    When generated, a global core file is created in mode 600 and owned by the superuser. Nonprivileged users cannot examine such files.

    Ordinary per-process core files are created in mode 600 under the credentials of the process. The owner of the process can examine such files.

    A process that is or ever has been setuid or setgid since its last exec(2) presents security issues that relate to dumping core. Similarly, a process that initially had superuser privileges and lost those privileges through setuid(2) also presents security issues that are related to dumping core. A process of either type can contain sensitive information in its address space to which the current nonprivileged owner of the process should not have access. If setid core files are enabled, they are created mode 600 and owned by the superuser.

Options

    The following options are supported:

    -d option...

    Disable the specified core file option. See the -e option for descriptions of possible options.

    Multiple -e and -d options can be specified on the command line. Only users with the sys_admin privilege can use this option.

    -e option...

    Enable the specified core file option. Specify option as one of the following:

    global

    Allow core dumps that use global core pattern.

    global-setid

    Allow set-id core dumps that use global core pattern.

    log

    Generate a syslog(3C) message when generation of a global core file is attempted.

    process

    Allow core dumps that use per-process core pattern.

    proc-setid

    Allow set-id core dumps that use per-process core pattern.

    Multiple -e and -d options can be specified on the command line. Only users with the sys_admin privilege can use this option.

    -g pattern

    Set the global core file name pattern to pattern. The pattern must start with a / and can contain any of the special % variables that are described in the DESCRIPTION.

    Only users with the sys_admin privilege can use this option.

    -G content

    Set the global core file content to content. You must specify content by using the tokens that are described in the DESCRIPTION.

    Only users with the sys_admin privilege can use this option.

    -i pattern

    Set the default per-process core file name to pattern. This changes the per-process pattern for any process whose per-process pattern is still set to the default. Processes that have had their per-process pattern set or are descended from a process that had its per-process pattern set (using the -p option) are unaffected. This default persists across reboot.

    Only users with the sys_admin or proc_owner privilege can use this option.

    -I content

    Set the default per-process core file content to content. This changes the per-process content for any process whose per-process content is still set to the default. Processes that have had their per-process content set or are descended from a process that had its per-process content set (using the -P option) are unaffected. This default persists across reboot.

    Only users with the sys_admin or proc_owner privileges can use this option.

    -p pattern

    Set the per-process core file name pattern to pattern for each of the specified process-IDs. The pattern can contain any of the special % variables described in the DESCRIPTION and need not begin with /. If the pattern does not begin with /, it is evaluated relative to the directory that is current when the process generates a core file.

    A nonprivileged user can apply the -p option only to processes that are owned by that user. A user with the proc_owner privilege can apply the option to any process. The per-process core file name pattern is inherited by future child processes of the affected processes. See fork(2).

    If no process-IDs are specified, the -p option sets the per-process core file name pattern to pattern on the parent process (usually the shell that ran coreadm).

    -P content

    Set the per-process core file content to content for each of the specified process-IDs. The content must be specified by using the tokens that are described in the DESCRIPTION.

    A nonprivileged user can apply the -P option only to processes that are owned by that user. A user with the proc_owner privilege can apply the option to any process. The per-process core file content is inherited by future child processes of the affected processes. See fork(2).

    If no process-IDs are specified, the -P option sets the per-process file content to content on the parent process (usually the shell that ran coreadm).

    -u

    Update system-wide core file options from the contents of the configuration file /etc/coreadm.conf. If the configuration file is missing or contains invalid values, default values are substituted. Following the update, the configuration file is resynchronized with the system core file configuration.

    Only users with the sys_admin privilege can use this option.

Operands

    The following operands are supported:

    pid

    process-ID

Examples


    Example 1 Setting the Core File Name Pattern

    When executed from a user's $HOME/.profile or $HOME/.login, the following command sets the core file name pattern for all processes that are run during the login session:


    example$  coreadm -p core.%f.%p

    Note that since the process-ID is omitted, the per-process core file name pattern will be set in the shell that is currently running and is inherited by all child processes.



    Example 2 Dumping a User's Files Into a Subdirectory

    The following command dumps all of a user's core dumps into the corefiles subdirectory of the home directory, discriminated by the system node name. This command is useful for users who use many different machines but have a shared home directory.


    example$  coreadm -p $HOME/corefiles/%n.%f.%p 1234


    Example 3 Culling the Global Core File Repository

    The following commands set up the system to produce core files in the global repository only if the executables were run from /usr/bin or /usr/sbin.


    example# mkdir -p /var/cores/usr/bin
    example# mkdir -p /var/cores/usr/sbin
    example# coreadm -G all -g /var/cores/%d/%f.%p.%n

Files

    /etc/coreadm.conf

Exit Status

    The following exit values are returned:

    0

    Successful completion.

    1

    A fatal error occurred while either obtaining or modifying the system core file configuration.

    2

    Invalid command-line options were specified.

Attributes

    See attributes(5) for descriptions of the following attributes:

    ATTRIBUTE TYPE

    ATTRIBUTE VALUE

    Availability

    SUNWcsu

See Also

Notes

    In a local (non-global) zone, the global settings apply to processes running in that zone. In addition, the global zone's settings apply to processes run in any zone.

    The term global settings refers to settings which are applied to the system or zone as a whole, and does not necessarily imply that the settings are to take effect in the global zone.

    The coreadm service is managed by the service management facility, smf(5), under the service identifier:


    svc:/system/coreadm:default

    Administrative actions on this service, such as enabling, disabling, or requesting restart, can be performed using svcadm(1M). The service's status can be queried using the svcs(1) command.


2010-09-05 00:17:08

Name

    share– make local resource available for mounting by remote systems

Synopsis

    share [-F FSType] [-o specific_options] [-d description] 
         [pathname]

Description

    The share command exports, or makes a resource available for mounting, through a remote file system of type FSType. If the option -F FSType is omitted, the first file system type listed in /etc/dfs/fstypes is used as default. For a description of NFS specific options, see share_nfs(1M). pathname is the pathname of the directory to be shared. When invoked with no arguments, share displays all shared file systems.

Options

    -F FSType

    Specify the filesystem type.

    -o specific_options

    The specific_options are used to control access of the shared resource. (See share_nfs(1M) for the NFS specific options.) They may be any of the following:

    rw

    pathname is shared read/write to all clients. This is also the default behavior.

    rw=client[:client]...

    pathname is shared read/write only to the listed clients. No other systems can access pathname.

    ro

    pathname is shared read-only to all clients.

    ro=client[:client]...

    pathname is shared read-only only to the listed clients. No other systems can access pathname.

    Separate multiple options with commas. Separate multiple operands for an option with colons. See EXAMPLES.

    -d description

    The -d flag may be used to provide a description of the resource being shared.

Examples


    Example 1 Sharing a Read-Only Filesystem

    This line will share the /disk file system read-only at boot time.


    share -F nfs -o ro /disk
    


    Example 2 Invoking Multiple Options

    The following command shares the filesystem /export/manuals, with members of the netgroup having read-only access and users on the specified host having read-write access.


    share -F nfs -o ro=netgroup_name,rw=host1:host2:host3 /export/manuals

Files

    /etc/dfs/dfstab

    list of share commands to be executed at boot time

    /etc/dfs/fstypes

    list of file system types, NFS by default

    /etc/dfs/sharetab

    system record of shared file systems

Attributes

    See attributes(5) for descriptions of the following attributes:

    ATTRIBUTE TYPE

    ATTRIBUTE VALUE

    Availability

    SUNWcsu

See Also

Notes

    Export (old terminology): file system sharing used to be called exporting on SunOS 4.x, so the share command used to be invoked as exportfs(1B) or /usr/sbin/exportfs.

    If share commands are invoked multiple times on the same filesystem, the last share invocation supersedes the previous one; the options set by the last share command replace the old options. For example, if read-write permission was given to usera on /somefs, then to give read-write permission also to userb on /somefs:

    example% share -F nfs -o rw=usera:userb /somefs

    This behavior is not limited to sharing the root filesystem, but applies to all filesystems.


2010-09-05 00:07:35
Source: 짜세나게 달려보자 !!! | 짜세맨
Original: http://blog.naver.com/831jsh/70047794179

Original article: http://www.softpanorama.org/Net/Application_layer/NFS/troubleshooting_of_nfs_problems.shtml

 

Troubleshooting Solaris NFS Problems

 

Some common NFS errors are:

  1. The rpcbind failure error
  2. The server not responding error
  3. The NFS client fails a reboot error
  4. The service not responding error
  5. The program not registered error
  6. The stale file handle error
  7. The unknown host error
  8. The mount point error
  9. The no such file error
  10. No such file or directory

Troubleshooting recommendations:

  1. The rpcbind failure Error. The following example shows the message that appears on the client system during the boot process or in response to an explicit mount request:

     nfs mount: server1:: RPC: Rpcbind failure
     RPC: Timed Out
     nfs mount: retrying: /mntpoint

    The error in accessing the server is due to:

    • The combination of an incorrect Internet address and a correct host or node name in the hosts database file supporting the client node.
    • The hosts database file that supports the client has the correct server node, but the server node is temporarily unavailable due to an overload.
    To solve the rpcbind failure error condition when the server node is operational, determine if the server is out of critical resources (for example, memory, swap, or disk space).
     
  2. The server not responding Error. The following message appears during the boot process or in response to an explicit mount request, and this message indicates a known server that is inaccessible.

    NFS server server2 not responding, still trying

     Possible causes for the server not responding error are:

    • The network between the local system and the server is down. To verify that the network is down, enter the ping command (ping server2).
    • The server (server2) is down.
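    For example, the following commands help distinguish the two causes (host name taken from the message above):

    client# ping server2           (tests basic network reachability)
    client# rpcinfo -u server2 nfs (sends a null call to the NFS service over UDP)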
       
  3. The NFS client fails a reboot Error. If you attempt to boot an NFS client and the client node stops, waits, and echoes the following message:

    Setting default interface for multicast: add net 224.0.0.0: gateway:
    client_node_name.

    these symptoms might indicate that a client is requesting an NFS mount using an entry in the /etc/vfstab file, specifying a foreground mount from a non-operational NFS server.

    To solve this error, complete the following steps:

    1. To interrupt the failed client node, press Stop-A, and boot the client into single-user mode.

    2. Edit the /etc/vfstab file to comment out the NFS mounts.

    3. To continue booting to the default run level (normally run level 3), press Control-D.

    4. Determine if all the NFS servers are operational and functioning properly.

    5. After you resolve problems with the NFS servers, remove the comments from the /etc/vfstab file.

    Note – If the NFS server is not available, an alternative to commenting out
    the entry in the /etc/vfstab file is to use the bg mount option so that the
    boot sequence can proceed in parallel with the attempt to perform the NFS mount.
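    A sketch of such a vfstab entry, with illustrative server and mount point names:

    server2:/export/tools  -  /tools  nfs  -  yes  bg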
     

  4. The service not responding Error. The following message appears during the boot process or in response to an explicit mount request, and indicates that an accessible server is not running the NFS server daemons.


    nfs mount: dbserver: NFS: Service not responding
    nfs mount: retrying: /mntpoint

    To solve the service not responding error condition, complete the following steps:

    1.  Enter the who -r command on the server to see if it is at run level 3. If the server is not, change to run level 3 by entering the init 3 command.
    2. Enter the ps -e command on the server to check whether the NFS server daemons are running. If they are not, start them by using the /etc/init.d/nfs.server start script.
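    For example, on the server (daemon and script names as used above):

    server# who -r                        (confirm run level 3)
    server# ps -e | grep nfsd             (check for the NFS daemons)
    server# /etc/init.d/nfs.server start  (start them if they are not running)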
       
  5. The program not registered Error. The following message appears during the boot process or in response to an explicit mount request and indicates that an accessible server is not running the mountd daemon.

    nfs mount: dbserver: RPC: Program not registered
    nfs mount: retrying: /mntpoint

    To solve the program not registered error condition, complete the following steps:

    1.  Enter the who -r command on the server to check that it is at run level 3. If the server is not, change to run level 3 by entering the init 3 command.
    2.  Enter the pgrep -xl mountd command. If the mountd daemon is not running, start it using the /etc/init.d/nfs.server script, first with the stop flag and then with the start flag.
    3.  Check the /etc/dfs/dfstab file entries.
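    For example (restarting the daemons and then re-sharing is a reasonable recovery sequence; the shareall step is an extra suggestion beyond the list above):

    server# pgrep -xl mountd              (reports the PID if mountd is running)
    server# /etc/init.d/nfs.server stop
    server# /etc/init.d/nfs.server start
    server# cat /etc/dfs/dfstab           (verify the share entries)
    server# shareall                      (re-share everything listed in dfstab)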
       
  6. The stale NFS file handle Error. A stale NFS file handle message appears when a process attempts to access a remote file resource with an out-of-date file handle. A possible cause is that the file resource on the server moved. To solve the stale NFS file handle error condition, unmount and mount the resource again on the client.
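    For example, assuming the resource is mounted on /mntpoint and listed in /etc/vfstab:

    client# umount /mntpoint       (umount -f forces it if the stale handle makes it hang)
    client# mount /mntpoint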
     
  7. The unknown host Error. The following message indicates that the host name of the server is missing from the hosts table on the client.

    nfs mount: sserver1:: RPC: Unknown host

    To solve the unknown host error condition, verify the host name in the hosts database that supports the client node. Note – The preceding example misspelled the node name server1 as sserver1.
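    For example, to check how the client resolves the name:

    client# getent hosts server1   (uses the name services configured in /etc/nsswitch.conf)
    client# grep server1 /etc/hosts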
     

  8. The mount point Error. The following message appears during the boot process or in response to an explicit mount request and indicates a non-existent mount point.

    mount: mount-point /DS9 does not exist.

    To solve the mount point error condition, check that the mount point exists on the client. Check the spelling of the mount point on the command line or in the /etc/vfstab file on the client, or comment out
    the entry and reboot the system.
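    For example:

    client# ls -ld /DS9            (verify that the mount point exists)
    client# mkdir /DS9             (create it if it is missing)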
     

  9. The no such file Error. The following message appears during the boot process or in response to
    an explicit mount request and indicates that the file resource name is unknown on the server:

    No such file or directory

    To solve the no such file error condition, check that the directory exists on the server. Check the spelling of the directory on the command line or in the /etc/vfstab file.
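    For example, on the server (the directory name is illustrative):

    server# share                        (list the resources currently shared)
    server# ls -ld /export/manuals       (verify that the directory itself exists)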

Use of NFS Considered Harmful  

First of all, the phrase "considered harmful" usually signals a primitive, fundamentalist stance on the part of the critic. Also, this critique applies only to older versions of the protocol; NFS v4 contains some improvements.

Following are a few known problems with NFS and suggested workarounds.

a. Time Synchronization

NFS does not synchronize time between client and server, and offers no mechanism for the client to determine what time the server thinks it is. What this means is that a client can update a file, and have the timestamp on the file be either some time long in the past, or even in the future, from its point of view.

While this is generally not an issue if clocks are a few seconds or even a few minutes off, it can be confusing and misleading to humans. Of even greater importance is the effect on programs. Programs often do not expect time differences like this and may end abnormally or behave strangely, as various tasks time out instantly or take an extraordinarily long while to time out.

Poor time synchronization also makes debugging problems difficult, because there is no easy way to establish a chronology of events. This is especially problematic when investigating security issues, such as break-in attempts.

Workaround: Use the Network Time Protocol (NTP) religiously. Use of NTP can result in machines that have extremely small time differences.

Note: The NFS protocol version 3 does have support for the client specifying the time when updating a file, but this is not widely implemented. Additionally, it does not help in the case where two clients are accessing the same file from machines with drifting clocks.
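A minimal NTP client setup on Solaris might look like this (the server name is a placeholder; on Solaris 8/9 use /etc/init.d/xntpd start instead of svcadm):

    # cp /etc/inet/ntp.client /etc/inet/ntp.conf
    # echo "server ntp.example.com" >> /etc/inet/ntp.conf
    # svcadm enable svc:/network/ntp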

b. File Locking Semantics

Programs use file locking to ensure that concurrent access to files does not occur except when guaranteed to be safe. This prevents data corruption and allows handshaking between cooperating processes.

In Unix, the kernel handles file locking. This is required so that if a program is terminated, any locks that it has are released. It also allows the operations to be atomic, meaning that a lock cannot be obtained by multiple processes.

Because NFS is stateless, there is no way for the server to keep track of file locks - it simply does not know what clients there are or what files they are using. In an effort to solve this, a separate server, the lock daemon, was added. Typically, each NFS server will run a lock daemon.

The combination of lock daemon and NFS server yields a solution that is almost like Unix file locking. Unfortunately, file locking is extremely slow, compared to NFS traffic without file locking (or file locking on a local Unix disk). Of greater concern is the behaviour of NFS locking on failure.

In the event of server failure (e.g. server reboot or lock daemon restart), all client locks are lost. However, the clients are not informed of this, and because the other operations (read, write, and so on) are not visibly interrupted, they have no reliable way to prevent other clients from obtaining a lock on a file they think they have locked.

In the event of client failure, the locks are not immediately freed. Nor is there a timeout. If the client process terminates, the client OS kernel will notify the server, and the lock will be freed. However, if the client system shuts down abnormally (e.g. power failure or kernel panic), then the server will not be notified. When the client reboots and remounts the NFS exports, the server is notified and any client locks are freed.

If the client does not reboot, for example if a frustrated user hits the power switch and goes home for the weekend, or if a computer has had a hardware failure and must wait for replacement parts, then the locks are never freed! In this unfortunate scenario, the server lock daemon must be restarted, with the same effects as a server failure.

Workaround: If possible (given program source and skill with code modification), remove locking and ensure consistency via other mechanisms, possibly using atomic file creation (see below) or some other mechanism for synchronization. Otherwise, build platforms that never fail, and have a staff trained on the implications of NFS file locking failure. If NFS is used only for files that are never accessed by more than a single client, locking is not an issue.

Note: A status monitor mechanism exists to monitor client status and free client locks if a client is unavailable. However, clients may choose not to use this mechanism, and in many implementations do not.

c. File Locking API

In Unix, there are two flavours of file locking, flock() from BSD and lockf() from System V. It varies from system to system which of these mechanisms work with NFS. In Solaris, Sun's Unix variant, lockf() works with NFS, and flock() is implemented via lockf(). On other systems, the results are less consistent. For example, on some systems, lockf() is not implemented at all, and flock() does not support NFS; while on other systems, lockf() supports NFS but flock() does not.

Regardless of the system specifics, programs often assume that if they are unable to obtain a lock, it is because another program has the lock. This can cause problems as programs wait for the lock to be freed. When the real reason the lock fails is that locking is unsupported, the attempt to obtain a lock will never succeed, and applications either wait forever or abort their operation.

These results will also vary with the support of the server. While typically the NFS server runs an accompanying lock daemon, this is not guaranteed.

Workaround: Upgrade to the latest versions of all operating systems, as they usually have improved and more consistent locking support. Also, use the lock daemon. Additionally, try to use only programs written to handle NFS locking properly, verified either by code review or a vendor compliance statement.

d. Exclusive File Creation

In Unix, when a program creates a file, it may ask for the operation to fail if the file already exists (as opposed to the default behaviour of using the existing file). This allows programs to know that, for example, they have a unique file name for a temporary file. It is also used by various daemons for locking various operations, e.g. modifying mail folders or print queues.

Unfortunately, NFS does not properly implement this behaviour. A file creation will sometimes return success even if the file already exists. Programs written to work on a local file system will experience strange results when they attempt to update a file after using file creation to lock it, only to discover that another program, which "locked" the file via the same mechanism, is modifying it (I have personally seen mailboxes with hundreds of mail messages corrupted because of this).

Workaround: If possible (given program source and skill with code modification), use the following method, as documented in the Linux open() manual page:

The solution for performing atomic file locking using a lockfile is to create a unique file on the same fs (e.g., incorporating hostname and pid), use link(2) to make a link to the lockfile and use stat(2) on the unique file to check if its link count has increased to 2. Do not use the return value of the link() call.
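A minimal Bourne shell sketch of this technique, with illustrative path names (the same idea applies to the link(2)/stat(2) calls in C):

    # create a unique file on the same file system, incorporating host name and pid
    tmp=/shared/lock.`hostname`.$$
    touch $tmp
    # try to link it to the agreed-upon lock file; ignore ln's exit status
    ln $tmp /shared/app.lock 2>/dev/null
    # instead, check whether the unique file's link count rose to 2
    links=`ls -l $tmp | awk '{print $2}'`
    rm $tmp
    if [ "$links" -eq 2 ]; then
        echo "lock acquired"       # remove /shared/app.lock to release
    else
        echo "lock busy"
    fi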

This still leaves the issue of client failure unanswered. The suggested solution for this is to pick a timeout value and assume if a lock is older than a certain application-specific age that it has been abandoned.

e. Delayed Write Caching

In an effort to improve efficiency, many NFS clients cache writes. This means that they delay sending small writes to the server, with the idea that if the client makes another small write in a short amount of time, the client need only send a single message to the server.

Unix servers typically cache disk writes to local disks the same way. The difference is that Unix servers also keep track of the state of the file in the cache memory versus the state on disk, so programs are all presented with a single view of the file.

In NFS caching, all applications on a single client will typically see the same file contents. However, applications accessing the file from different clients will not see the same file for several seconds.

Workaround: It is often possible to disable client write caching. Unfortunately, this frequently causes unacceptably slow performance, depending on the application. (Applications that perform I/O of large chunks of data should be unaffected, but applications that perform lots of small I/O operations will be severely punished.) If locking is employed, applications can explicitly cooperate and flush files from the local cache to the server, but see the previous sections on locking when employing this solution.
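On Solaris, for example, the forcedirectio mount option bypasses the client cache entirely (server and path names are illustrative):

    client# mount -F nfs -o forcedirectio server:/export/data /data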

f. Read Caching and File Access Time

Unix file systems typically have three times associated with a file: the time of last modification (file creation or write), the time of last "change" (write or change of inode information), and the time of last access (file execution or read). NFS file systems also report this information.

NFS clients perform attribute caching for efficiency reasons. Reading small amounts of data does not update the access time on the server. This means a server may report a file has been unaccessed for a much longer time than is accurate.

This can cause problems as administrators and automatic cleanup software may delete files that have remained unused for a long time, expecting them to be stale lock files, abandoned temporary files and so on.

Workaround: Attribute caching may be disabled on the client, but this is usually not a good idea for performance reasons. Administrators should be trained to understand the behaviour of NFS regarding file access time. Any programs that rely on access time information should be modified to use another mechanism.
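On Solaris, for example, attribute caching can be switched off with the noac mount option, or its timeouts reduced with actimeo (names are illustrative):

    client# mount -F nfs -o noac server:/export/data /data
    client# mount -F nfs -o actimeo=0 server:/export/data /data   (sets all four attribute cache timeouts to zero)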

g. Indestructible Files

In Unix, when a file is opened, the data of that file is accessible to the process that opened it, even if the file is deleted. The disk blocks the file uses are freed only when the last process which has it open has closed it.

An NFS server, being stateless, has no way to know what clients have a file open. Indeed, in NFS, clients never really "open" or "close" files. So when a file is deleted, the server merely frees the space. Woe be unto any client that was expecting the file contents to be accessible as before, as in the Unix world!

In an effort to minimize this as much as possible, when a client deletes a file, the operating system checks if any process on the same client box has it open. If one does, the client renames the file to a "hidden" file. Any read or write requests from processes on the client that were to the now-deleted file go to the new file.

This file is named in the form .nfsXXXX, where the XXXX value is determined by the inode of the deleted file - basically a random value. If a process (such as rm) attempts to delete this new file from the client, it is replaced by a new .nfsXXXX file, until the process with the file open closes it.

These files are difficult to get rid of, as the process with the file open needs to be killed, and it is not easy to determine what that process is. These files may have unpleasant side effects such as preventing directories from being removed.

If the server or a client crashes while a .nfsXXXX file is in use, the file will never be deleted. There is no way for the server or a client to know whether a .nfsXXXX file is currently being used by a client or not.

Workaround: One should be able to delete .nfsXXXX files from another client; however, if a process then writes to the file, it will be recreated at that time. It would be best to exit or kill the processes using an NFS file before deleting it. Unfortunately, there is no way to know if an uncooperative process has a file open.
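For example, fuser can identify the local processes holding such a file (the file name is illustrative):

    client# fuser -u /home/shane/.nfs4c21   (lists PIDs and login names of processes using the file)
    client# kill <pid>                      (after which the .nfsXXXX file can be removed)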

h. User and Group Names and Numbers

NFS uses user and group numbers, rather than names. This means that each machine that accesses an NFS export needs to have (or at least should have) the same user and group identifiers as the NFS export has. Note that this problem is not unique to NFS, and also applies, for instance, to removable media and archives. It is most frequently an issue with NFS, however.

Workaround: Either the /etc/passwd and /etc/group files must be synchronized, or something like NIS needs to be used for this purpose.
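For example, to confirm that a user's numeric IDs match on a client and the server (the user name is illustrative):

    client# getent passwd shane
    server# getent passwd shane   (the third and fourth fields, uid and gid, must match on both)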

i. Superuser Account

NFS has special handling of the superuser account (also known as the root account). By default, the root user may not update files on an NFS mount.

Normally on a Unix system, root may do anything to any file. When an NFS drive has been mounted, this is no longer the case. This can confuse scripts and administrators alike.

To clarify: a normal user (for example "shane" or "billg") can update files that the superuser ("root") cannot.

Workaround: Enable root access to specific clients for NFS exports, but do so only in a trusted environment, since NFS is insecure; it cannot guarantee that an unauthorized client will be unable to access the mount as root.
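For example, the share_nfs root= option grants root on the named clients root access to the export (host and path are illustrative):

    server# share -F nfs -o rw=client1,root=client1 /export/data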