2010-09-05 00:04:32

/etc/inet/hosts File Format

The /etc/inet/hosts file uses the basic syntax that follows. Refer to the hosts(4) man page for complete syntax information.

IPv4-address hostname [nicknames] [#comment]

IPv4-address

Contains the IPv4 address for each interface that the local host must recognize.

hostname

Contains the host name that is assigned to the system at setup, plus the host names that are assigned to additional network interfaces that the local host must recognize.

[nicknames]

Is an optional field that contains one or more nicknames (aliases) for the host.

[#comment]

Is an optional field for a comment.
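
For illustration, a minimal /etc/inet/hosts might look like the following (the host names and addresses here are made-up examples, not values from a real configuration):

127.0.0.1      localhost
192.168.10.5   pluto   pluto.example.com   loghost   # primary network interface
192.168.11.5   pluto-net1                            # second interface on pluto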


2010-09-05 00:02:51

Name

    smf – service management facility

Description

    The Solaris service management facility defines a programming model for providing persistently running applications called services. The facility also provides the infrastructure in which to run services. A service can represent a running application, the software state of a device, or a set of other services. Services are represented in the framework by service instance objects, which are children of service objects. Instance objects can inherit or override the configuration of the parent service object, which allows multiple service instances to share configuration information. All service and instance objects are contained in a scope that represents a collection of configuration information. The configuration of the local Solaris instance is called the “localhost” scope, and is the only currently supported scope.

    Each service instance is named with a fault management resource identifier (FMRI) with the scheme “svc:”. For example, the syslogd(1M) daemon started at system startup is the default service instance named:

    svc://localhost/system/system-log:default
    svc:/system/system-log:default
    system/system-log:default

    In the above example, 'default' is the name of the instance and 'system/system-log' is the service name. Service names may comprise multiple components separated by slashes (/). All components, except the last, compose the category of the service. Site-specific services should be named with a category beginning with 'site'.

    A service instance is either enabled or disabled. All services can be enabled or disabled with the svcadm(1M) command.

    The list of managed service instances on a system can be displayed with the svcs(1) command.
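
    For example, using the system-log instance named above (a minimal sketch; the commands are standard, but which instances exist varies from system to system):

    # svcs -l system/system-log:default     (show state, dependencies, and restarter)
    # svcadm disable system/system-log      (disable the instance)
    # svcadm enable system/system-log       (enable it again)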

    Dependencies

      Service instances may have dependencies on services or files. Those dependencies govern when the service is started and automatically stopped. When the dependencies of an enabled service are not satisfied, the service is kept in the offline state. When its dependencies are satisfied, the service is started. If the start is successful, the service is transitioned to the online state. Whether a dependency is satisfied is determined by its type:

      require_all

      Satisfied when all cited services are running (online or degraded), or when all indicated files are present.

      require_any

      Satisfied when one of the cited services is running (online or degraded), or when at least one of the indicated files is present.

      optional_all

      Satisfied if the cited services are running (online or degraded) or will not run without administrative action (disabled, maintenance, not present, or offline waiting for dependencies which will not start without administrative action).

      exclude_all

      Satisfied when all of the cited services are disabled, in the maintenance state, or when cited services or files are not present.

      Once running (online or degraded), if a service cited by a require_all, require_any, or optional_all dependency is stopped or refreshed, the SMF considers why the service was stopped and the restart_on attribute of the dependency to decide whether to stop the service.

                         |  restart_on value
      event              |  none  error restart refresh
      -------------------+------------------------------
      stop due to error  |  no    yes   yes     yes
      non-error stop     |  no    no    yes     yes
      refresh            |  no    no    no      yes

      A service is considered to have stopped due to an error if the service has encountered a hardware error or a software error such as a core dump. For exclude_all dependencies, the service is stopped if the cited service is started and the restart_on attribute is not none.

      The dependencies on a service can be listed with svcs(1) or svccfg(1M), and modified with svccfg(1M).
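
      As a short illustration (again using the system-log instance as an assumed example):

      # svcs -d system/system-log:default      (services this instance depends on)
      # svcs -D system/system-log:default      (services that depend on this instance)
      # svccfg -s system/system-log listprop   (list property groups, including dependency groups)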

    Restarters

      Each service is managed by a restarter. The master restarter, svc.startd(1M) manages states for the entire set of service instances and their dependencies. The master restarter acts on behalf of its services and on delegated restarters that can provide specific execution environments for certain application classes. For instance, inetd(1M) is a delegated restarter that provides its service instances with an initial environment composed of a network connection as input and output file descriptors. Each instance delegated to inetd(1M) is in the online state. While the daemon of a particular instance might not be running, the instance is available to run.

      As dependencies are satisfied when instances move to the online state, svc.startd(1M) invokes start methods of other instances or directs the delegated restarter to do so. These operations might overlap.

      The current set of services and associated restarters can be examined using svcs(1). A description of the common configuration used by all restarters is given in smf_restarter(5).

    Methods

      Each service or service instance must define a set of methods that start, stop, and, optionally, refresh the service. See smf_method(5) for a more complete description of the method conventions for svc.startd(1M) and similar fork(2)-exec(2) restarters.

      Administrative methods, such as for the capture of legacy configuration information into the repository, are discussed on the svccfg(1M) manual page.

      The methods for a service can be listed and modified using the svccfg(1M) command.
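
      For instance, the start method of a service can be inspected and its timeout adjusted roughly as follows (a sketch; start/timeout_seconds is the conventional timeout property of the start method):

      # svccfg -s system/system-log listprop start      (show the start method property group)
      # svccfg -s system/system-log setprop start/timeout_seconds = count: 120
      # svcadm refresh system/system-log                (push the change into the running snapshot)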

    States

      Each service instance is always in a well-defined state based on its dependencies, the results of the execution of its methods, and its potential receipt of events from the contracts filesystem. The following states are defined:

      UNINITIALIZED

      This is the initial state for all service instances. Instances are moved to maintenance, offline, or a disabled state upon evaluation by svc.startd(1M) or the appropriate restarter.

      OFFLINE

      The instance is enabled, but not yet running or available to run. If restarter execution of the service start method or the equivalent method is successful, the instance moves to the online state. Failures might lead to a degraded or maintenance state. Administrative action can lead to the uninitialized state.

      ONLINE

      The instance is enabled and running or is available to run. The specific nature of the online state is application-model specific and is defined by the restarter responsible for the service instance. Online is the expected operating state for a properly configured service with all dependencies satisfied. Failures of the instance can lead to a degraded or maintenance state. Failures of services on which the instance depends can lead to offline or degraded states.

      DEGRADED

      The instance is enabled and running or available to run. The instance, however, is functioning at a limited capacity in comparison to normal operation. Failures of the instance can lead to the maintenance state. Failures of services on which the instance depends can lead to offline or degraded states. Restoration of capacity should result in a transition to the online state.

      MAINTENANCE

      The instance is enabled, but not able to run. Administrative action is required to restore the instance to offline and subsequent states. The maintenance state might be a temporarily reached state if an administrative operation is underway.

      DISABLED

      The instance is disabled. Enabling the service results in a transition to the offline state and eventually to the online state with all dependencies satisfied.

      LEGACY-RUN

      This state represents a legacy instance that is not managed by the service management facility. Instances in this state have been started at some point, but might or might not be running. Instances can only be observed using the facility and are not transferred into other states.

      States can also have transitions that result in a return to the originating state.
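
      For example, an instance stuck in the maintenance state is typically examined and then cleared as follows (a sketch; clearing is the administrative action referred to above):

      # svcs -xv                         (explain why services are in maintenance or offline)
      # svcadm clear system/system-log   (after the underlying problem is fixed, return the instance to service)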

    Properties and Property Groups

      The dependencies, methods, delegated restarter, and instance state mentioned above are represented as properties or property groups of the service or service instance. A service or service instance has an arbitrary number of property groups in which to store application data. Using property groups in this way allows the configuration of the application to derive the attributes that the repository provides for all data in the facility. The application can also use the appropriate subset of the service_bundle(4) DTD to represent its configuration data within the framework.

      Property lookups are composed. If a property group-property combination is not found on the service instance, most commands and the high-level interfaces of libscf(3LIB) search for the same property group-property combination on the service that contains that instance. This feature allows common configuration among service instances to be shared. Composition can be viewed as an inheritance relationship between the service instance and its parent service.
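
      The composed view can be examined with svcprop(1); for example, a start method property can be looked up on an instance even when it is defined only on the parent service (illustrative command, again using the system-log instance):

      # svcprop -p start/exec system/system-log:default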

      Properties are protected from modification by unauthorized processes. See smf_security(5).

    Snapshots

      Historical data about each instance in the repository is maintained by the service management facility. This data is made available as read-only snapshots for administrative inspection and rollback. The following set of snapshot types might be available:

      initial

      Initial configuration of the instance created by the administrator or produced during package installation.

      last_import

      Configuration as prescribed by the manifest of the service that is taken during svccfg(1M) import operation. This snapshot provides a baseline for determining property customization.

      previous

      Current configuration captured when an administrative undo operation is performed.

      running

      The running configuration of the instance.

      start

      Configuration captured during a successful transition to the online state.

      The svccfg(1M) command can be used to interact with snapshots.
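
      For example, snapshots can be listed and an instance rolled back to its previous configuration roughly as follows (a sketch using the system-log instance):

      # svccfg -s system/system-log:default listsnap          (list available snapshots)
      # svccfg -s system/system-log:default revert previous   (revert to the 'previous' snapshot)
      # svcadm refresh system/system-log:default              (make the reverted configuration the running one)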

    Special Property Groups

      Some property groups are marked as “non-persistent”. These groups are not backed up in snapshots and their content is cleared during system boot. Such groups generally hold an active program state which does not need to survive system restart.

    Configuration Repository

      The current state of each service instance, as well as the properties associated with services and service instances, is stored in a system repository managed by svc.configd(1M). This repository is transactional and able to provide previous versions of properties and property groups associated with each service or service instance.

      The repository for service management facility data is managed by svc.configd(1M).

    Service Bundles, Manifests, and Profiles

      The information associated with a service or service instance that is stored in the configuration repository can be exported as XML-based files. Such XML files, known as service bundles, are portable and suitable for backup purposes. Service bundles are classified as one of the following types:

      manifests

      Files that contain the complete set of properties associated with a specific set of services or service instances.

      profiles

      Files that contain a set of service instances and values for the enabled property on each instance.

      Service bundles can be imported or exported from a repository using the svccfg(1M) command. See service_bundle(4) for a description of the service bundle file format with guidelines for authoring service bundles.

      A service archive is an XML file that contains the description and persistent properties of every service in the repository, excluding transient properties such as service state. This service archive is basically a 'svccfg export' for every service which is not limited to named services.
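
      A rough sketch of exporting and importing bundles (the manifest path under /var/svc/manifest/site is only an example location for a site-specific service):

      # svccfg export system/system-log > /var/tmp/system-log.xml   (manifest for one service)
      # svccfg archive > /var/tmp/repository.xml                    (archive of the entire repository)
      # svccfg import /var/svc/manifest/site/my-service.xml         (import a manifest into the repository)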

    Legacy Startup Scripts

      Startup programs in the /etc/rc?.d directories are executed as part of the corresponding run-level milestone:

      /etc/rcS.d

      milestone/single-user:default

      /etc/rc2.d

      milestone/multi-user:default

      /etc/rc3.d

      milestone/multi-user-server:default

      Execution of each program is represented as a reduced-functionality service instance named by the program's path. These instances are held in a special legacy-run state.

      These instances do not have an enabled property and, generally, cannot be manipulated with the svcadm(1M) command. No error diagnosis or restart is done for these programs.

See Also

2010-09-05 00:00:43

Name

    dfstab – file containing commands for sharing resources across a network

Description

    dfstab resides in directory /etc/dfs and contains commands for sharing resources across a network. dfstab gives a system administrator a uniform method of controlling the automatic sharing of local resources.

    Each line of the dfstab file consists of a share(1M) command. The dfstab file can be read by the shell to share all resources. System administrators can also prepare their own shell scripts to execute particular lines from dfstab.
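
    For example, a dfstab entry is simply a share command line such as the following (the path, options, and description are illustrative):

    share -F nfs -o rw=engineering -d "home dirs" /export/home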

    The contents of dfstab are put into effect when the command shown below is run. See svcadm(1M).

    /usr/sbin/svcadm enable network/nfs/server

See Also

2010-09-04 23:59:19

Name

    sharetab – shared file system table

Description

    sharetab resides in directory /etc/dfs and contains a table of local resources shared by the share command.

    Each line of the file consists of the following fields:

    pathname resource fstype specific_options description

    where

    pathname

    Indicate the path name of the shared resource.

    resource

    Indicate the symbolic name by which remote systems can access the resource.

    fstype

    Indicate the file system type of the shared resource.

    specific_options

    Indicate file-system-type-specific options that were given to the share command when the resource was shared.

    description

    Describe the shared resource provided by the system administrator when the resource was shared.
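
    An entry in sharetab might therefore look like the following illustrative line (a '-' in the resource field means no symbolic resource name was given):

    /export/home    -    nfs    rw    home directories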

See Also

2010-09-04 23:58:03
Values for Creating Vendor Category Options for Solaris Clients

Name       Code   Data Type    Description
SrootOpt   1      ASCII text   NFS mount options for the client's root file system
SrootIP4   2      IP address   IP address of root server
SrootNM    3      ASCII text   Host name of root server
SrootPTH   4      ASCII text   Path to the client's root directory on the root server
SswapIP4   5      IP address   IP address of swap server
SswapPTH   6      ASCII text   Path to the client's swap file on the swap server
SbootFIL   7      ASCII text   Path to the client's boot file
Stz        8      ASCII text   Time zone for client
SbootRS    9      NUMBER       NFS read size used by standalone boot program when it loads the kernel
SinstIP4   10     IP address   IP address of JumpStart install server
SinstNM    11     ASCII text   Host name of install server
SinstPTH   12     ASCII text   Path to installation image on install server
SsysidCF   13     ASCII text   Path to sysidcfg file, in the format server:/path
SjumpsCF   14     ASCII text   Path to JumpStart configuration file, in the format server:/path
Sterm      15     ASCII text   Terminal type

Vendor client classes * (identical for all of the options above): SUNW.Ultra-1, SUNW.Ultra-30, SUNW.i86pc

* The vendor client classes determine what classes of client can use the option. Vendor client classes listed here are suggestions only. You should specify client classes that indicate the actual clients in your network that need to install from the network. See Table 4–9 for information about how to determine a client's vendor client class.
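
As a rough illustration of how one of these options is actually created on a Solaris DHCP server, a symbol such as SinstIP4 might be defined with dhtadm(1M). The class list, granularity, and maximum values below are assumptions made for this sketch and should be checked against the DHCP administration documentation:

# dhtadm -A -s SinstIP4 -d 'Vendor=SUNW.i86pc SUNW.Ultra-1,10,IP,1,1'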


2010-09-04 23:56:08

Name

    mnttab – mounted file system table

Description

    The file /etc/mnttab is really a file system that provides read-only access to the table of mounted file systems for the current host. /etc/mnttab is read by programs using the routines described in getmntent(3C). Mounting a file system adds an entry to this table. Unmounting removes an entry from this table. Remounting a file system causes the information in the mounted file system table to be updated to reflect any changes caused by the remount. The list is maintained by the kernel in order of mount time. That is, the first mounted file system is first in the list and the most recently mounted file system is last. When mounted on a mount point the file system appears as a regular file containing the current mnttab information.

    Each entry is a line of fields separated by TABs in the form:

    special   mount_point   fstype   options   time
    

    where:

    special

    The name of the resource that has been mounted.

    mount_point

    The pathname of the directory on which the filesystem is mounted.

    fstype

    The file system type of the mounted file system.

    options

    The mount options. See respective mount file system man page in the See Also section below.

    time

    The time at which the file system was mounted.

    Examples of entries for the special field include the pathname of a block-special device, the name of a remote file system in the form of host:pathname, or the name of a swap file, for example, a file made with mkfile(1M).
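
    For example, entries in /etc/mnttab might look like the following illustrative lines (device names, host name, option strings, and timestamps are made up):

    /dev/dsk/c0t0d0s0   /           ufs     rw,intr,largefiles,logging,xattr,dev=2200000   1283612345
    mailsvr:/var/mail   /var/mail   nfs     rw,xattr,dev=4f40001                           1283612390
    swap                /tmp        tmpfs   xattr,dev=4e80001                              1283612360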

ioctls

    The following ioctl(2) calls are supported:

    MNTIOC_NMNTS

    Returns the count of mounted resources in the current snapshot in the uint32_t pointed to by arg.

    MNTIOC_GETDEVLIST

    Returns an array of uint32_t's that is twice as long as the length returned by MNTIOC_NMNTS. Each pair of numbers is the major and minor device number for the file system at the corresponding line in the current /etc/mnttab snapshot. arg points to the memory buffer to receive the device number information.

    MNTIOC_SETTAG

    Sets a tag word into the options list for a mounted file system. A tag is a notation that will appear in the options string of a mounted file system but it is not recognized or interpreted by the file system code. arg points to a filled in mnttagdesc structure, as shown in the following example:

    uint_t  mtd_major;  /* major number for mounted fs */
    uint_t  mtd_minor;  /* minor number for mounted fs */
    char    *mtd_mntpt; /* mount point of file system */
    char    *mtd_tag;   /* tag to set/clear */

    If the tag already exists then it is marked as set but not re-added. Tags can be at most MAX_MNTOPT_TAG long.

    Use of this ioctl is restricted to processes with the {PRIV_SYS_MOUNT} privilege.

    MNTIOC_CLRTAG

    Marks a tag in the options list for a mounted file system as not set. arg points to the same structure as MNTIOC_SETTAG, which identifies the file system and tag to be cleared.

    Use of this ioctl is restricted to processes with the {PRIV_SYS_MOUNT} privilege.
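
    As a minimal sketch of how the snapshot and device-list ioctls described above might be used from C (assumed usage based only on the descriptions in this section; error handling is abbreviated):

    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <stropts.h>
    #include <inttypes.h>
    #include <sys/mntio.h>

    int
    main(void)
    {
            uint32_t nmnts;
            uint32_t *devlist;
            uint32_t i;
            int fd;

            /* Opening /etc/mnttab gives access to the current mnttab snapshot. */
            fd = open("/etc/mnttab", O_RDONLY);
            if (fd < 0 || ioctl(fd, MNTIOC_NMNTS, &nmnts) < 0) {
                    perror("mnttab");
                    return (1);
            }

            /*
             * MNTIOC_GETDEVLIST fills an array twice as long as the mount
             * count: one (major, minor) pair per mounted file system.
             */
            devlist = malloc(2 * nmnts * sizeof (uint32_t));
            if (devlist == NULL || ioctl(fd, MNTIOC_GETDEVLIST, devlist) < 0) {
                    perror("MNTIOC_GETDEVLIST");
                    return (1);
            }
            for (i = 0; i < nmnts; i++)
                    (void) printf("mount %u: major=%u minor=%u\n",
                        i, devlist[2 * i], devlist[2 * i + 1]);
            free(devlist);
            (void) close(fd);
            return (0);
    }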

Errors

    EFAULT

    The arg pointer in an MNTIOC_ ioctl call pointed to an inaccessible memory location or a character pointer in a mnttagdesc structure pointed to an inaccessible memory location.

    EINVAL

    The tag specified in a MNTIOC_SETTAG call already exists as a file system option, or the tag specified in a MNTIOC_CLRTAG call does not exist.

    ENAMETOOLONG

    The tag specified in a MNTIOC_SETTAG call is too long or the tag would make the total length of the option string for the mounted file system too long.

    EPERM

    The calling process does not have {PRIV_SYS_MOUNT} privilege and either a MNTIOC_SETTAG or MNTIOC_CLRTAG call was made.

Files

    /etc/mnttab

    Usual mount point for mnttab file system

    /usr/include/sys/mntio.h

    Header file that contains IOCTL definitions

See Also

Warnings

    The mnttab file system provides the previously undocumented dev=xxx option in the option string for each mounted file system. This is provided for legacy applications that might have been using the dev= information.

    Using the dev= option in applications is strongly discouraged. The device number string represents a 32-bit quantity and might not contain correct information in 64-bit environments.

    Applications requiring device number information for mounted file systems should use the getextmntent(3C) interface, which functions properly in either 32- or 64-bit environments.

Notes

    The snapshot of the mnttab information is taken any time a read(2) is performed at offset 0 (the beginning) of the mnttab file. The file modification time returned by stat(2) for the mnttab file is the time of the last change to mounted file system information. A poll(2) system call requesting a POLLRDBAND event can be used to block and wait for the system's mounted file system information to be different from the most recent snapshot since the mnttab file was opened.


2010-09-04 23:54:32

Name

    vfstab – table of file system defaults

Description

    The file /etc/vfstab describes defaults for each file system. The information is stored in a table with the following column headings:


    device       device       mount      FS      fsck    mount      mount
    to mount     to fsck      point      type    pass    at boot    options

    The fields in the table are space-separated and show the resource name (device to mount), the raw device to fsck (device to fsck), the default mount directory (mount point), the name of the file system type (FS type), the number used by fsck to decide whether to check the file system automatically (fsck pass), whether the file system should be mounted automatically by mountall (mount at boot), and the file system mount options (mount options). (See respective mount file system man page below in SEE ALSO for mount options.) A '-' is used to indicate no entry in a field. This may be used when a field does not apply to the resource being mounted.

    The getvfsent(3C) family of routines is used to read and write to /etc/vfstab.

    /etc/vfstab can be used to specify swap areas. An entry so specified (which can be a file or a device) will automatically be added as a swap area by the /sbin/swapadd script when the system boots. To specify a swap area, the device-to-mount field contains the name of the swap file or device, the FS-type field is "swap", the mount-at-boot field is "no", and all other fields have no entry.
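
    For example, an illustrative swap entry (the slice name is made up) looks like this:

    /dev/dsk/c0t0d0s1   -   -   swap   -   no   -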

Examples

    The following are vfstab entries for various file system types supported in the Solaris operating environment.


    Example 1 NFS and UFS Mounts

    The following entry invokes NFS to automatically mount the directory /usr/local of the server example1 on the client's /usr/local directory with read-only permission:


    example1:/usr/local - /usr/local nfs - yes ro

    The following example assumes a small departmental mail setup, in which clients mount /var/mail from a server mailsvr. The following entry would be listed in each client's vfstab:


    mailsvr:/var/mail - /var/mail nfs - yes intr,bg

    The following is an example for a UFS file system in which logging is enabled:


    /dev/dsk/c2t10d0s0 /dev/rdsk/c2t10d0s0 /export/local ufs 3 yes logging

    See mount_nfs(1M) for a description of NFS mount options and mount_ufs(1M) for a description of UFS options.



    Example 2 pcfs Mounts

    The following example mounts a pcfs file system on a fixed hard disk on an x86 machine:


    /dev/dsk/c1t2d0p0:c - /win98 pcfs - yes -

    The example below mounts a Jaz drive on a SPARC machine. Normally, the volume management software handles mounting of removable media, obviating a vfstab entry. Specifying a device that supports removable media in vfstab, with the mount-at-boot field set to no (as shown below), disables the automatic handling of that device. Such an entry presumes you are not running volume management software.


    /dev/dsk/c1t2d0s2:c - /jaz pcfs - no -

    For removable media on a SPARC machine, the convention for the slice portion of the disk identifier is to specify s2, which stands for the entire medium.

    For pcfs file systems on x86 machines, note that the disk identifier uses a p (p0) and a logical drive (c, in the /win98 example above) for a pcfs logical drive. See mount_pcfs(1M) for syntax for pcfs logical drives and for pcfs-specific mount options.



    Example 3 CacheFS Mount

    Below is an example for a CacheFS file system. Because of the length of this entry and the fact that vfstab entries cannot be continued to a second line, the vfstab fields are presented here in a vertical format. In re-creating such an entry in your own vfstab, you would enter values as you would for any vfstab entry, on a single line.


    device to mount:  svr1:/export/abc 
    device to fsck:  /usr/abc 
    mount point:  /opt/cache 
    FS type:  cachefs 
    fsck pass:  7 
    mount at boot:  yes 
    mount options: 
    local-access,bg,nosuid,demandconst,backfstype=nfs,cachedir=/opt/cache

    See mount_cachefs(1M) for CacheFS-specific mount options.



    Example 4 Loopback File System Mount

    The following is an example of mounting a loopback (lofs) file system:


    /export/test - /opt/test lofs - yes -

    See lofs(7FS) for an overview of the loopback file system.


See Also

2010-09-03 02:28:36

Name

    prtvtoc – report information about a disk geometry and partitioning

Synopsis

    prtvtoc [-fhs] [-t vfstab] [-m mnttab] device
    

Description

    The prtvtoc command allows the contents of the label to be viewed. The command can be used only by the super-user.

    The device name can be the file name of a raw device in the form of /dev/rdsk/c?t?d?s2 or can be the file name of a block device in the form of /dev/dsk/c?t?d?s2.

Options

    The following options are supported:

    -f

    Report on the disk free space, including the starting block address of the free space, number of blocks, and unused partitions.

    -h

    Omit the headers from the normal output.

    -m mnttab

    Use mnttab as the list of mounted filesystems, in place of /etc/mnttab.

    -s

    Omit all headers but the column header from the normal output.

    -t vfstab

    Use vfstab as the list of filesystem defaults, in place of /etc/vfstab.

Examples


    Example 1 Using the prtvtoc Command

    The following example uses the prtvtoc command on a 424-megabyte hard disk:


    example# prtvtoc /dev/rdsk/c0t3d0s2
    * /dev/rdsk/c0t3d0s2 partition map
    *
    * Dimensions:
    *     512 bytes/sector
    *      80 sectors/track
    *       9 tracks/cylinder
    *     720 sectors/cylinder
    *    2500 cylinders
    *    1151 accessible cylinders
    *
    * Flags:
    *   1: unmountable
    *  10: read-only
    *                           First    Sector   Last
    * Partition   Tag   Flags   Sector   Count    Sector   Mount Directory
         0         2     00          0    76320    76319   /
         1         3     01      76320   132480   208799
         2         5     00          0   828720   828719
         5         6     00     208800   131760   340559   /opt
         6         4     00     340560   447120   787679   /usr
         7         8     00     787680    41040   828719   /export/home
    example#

    The data in the Tag column above indicates the type of partition, as follows:

    Name

    Number

    UNASSIGNED 

    0x00 

    BOOT 

    0x01 

    ROOT 

    0x02 

    SWAP 

    0x03 

    USR 

    0x04 

    BACKUP 

    0x05 

    STAND 

    0x06 

    VAR 

    0x07 

    HOME 

    0x08 

    ALTSCTR 

    0x09 

    CACHE 

    0x0a 

    RESERVED 

    0x0b 

    The data in the Flags column above indicates how the partition is to be mounted, as follows:

    Name

    Number

    MOUNTABLE, READ AND WRITE 

    0x00 

    NOT MOUNTABLE 

    0x01 

    MOUNTABLE, READ ONLY 

    0x10 



    Example 2 Using the prtvtoc Command with the -f Option

    The following example uses the prtvtoc command with the -f option on a 424-megabyte hard disk:


    example# prtvtoc -f /dev/rdsk/c0t3d0s2
    FREE_START=0 FREE_SIZE=0 FREE_COUNT=0 FREE_PART=34


    Example 3 Using the prtvtoc Command on a Disk Over One Terabyte

    The following example uses the prtvtoc command on a disk over one terabyte:


    example# prtvtoc /dev/rdsk/c1t1d0s2
    * /dev/rdsk/c1t1d0s2 partition map
    *
    * Dimensions:
    *     512 bytes/sector
    * 3187630080 sectors
    * 3187630013 accessible sectors
    *
    * Flags:
    *   1: unmountable
    *  10: read-only
    *
    *                          First     Sector    Last
    * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
    0      2    00         34    262144    262177
    1      3    01     262178    262144    524321
    6      4    00     524322 3187089340 3187613661
    8     11    00  3187613662     16384 3187630045

Attributes

    See attributes(5) for descriptions of the following attributes:

    ATTRIBUTE TYPE 

    ATTRIBUTE VALUE 

    Availability 

    SUNWcsu 

See Also

Warnings

    The mount command does not check the "not mountable" bit.


2010-09-03 02:20:17

Name

    metadb – create and delete replicas of the metadevice state database

Synopsis

    /sbin/metadb  -h
    
    /sbin/metadb  [-s setname]
    /sbin/metadb  [-s setname] -a [-f] [-k system-file] mddbnn
    
    /sbin/metadb  [-s setname] -a [-f] [-k system-file]
     [-c number] [-l length] slice...
    
    /sbin/metadb  [-s setname] -d [-f] [-k system-file] mddbnn
    
    /sbin/metadb  [-s setname] -d [-f] [-k system-file] slice...
    
    /sbin/metadb  [-s setname] -i
    
    /sbin/metadb  [-s setname] -p [-k system-file] [mddb.cf-file]

Description

    The metadb command creates and deletes replicas of the metadevice state database. State database replicas can be created on dedicated slices, or on slices that will later become part of a simple metadevice (concatenation or stripe) or RAID5 metadevice. Do not place state database replicas on fabric-attached storage, SANs, or other storage that is not directly attached to the system and available at the same point in the boot process as traditional SCSI or IDE drives. See NOTES.

    The metadevice state database contains the configuration of all metadevices and hot spare pools in the system. Additionally, the metadevice state database keeps track of the current state of metadevices and hot spare pools, and their components. Solaris Volume Manager automatically updates the metadevice state database when a configuration or state change occurs. A submirror failure is an example of a state change. Creating a new metadevice is an example of a configuration change.

    The metadevice state database is actually a collection of multiple, replicated database copies. Each copy, referred to as a replica, is subject to strict consistency checking to ensure correctness.

    Replicated databases have an inherent problem in determining which database has valid and correct data. To solve this problem, Volume Manager uses a majority consensus algorithm. This algorithm requires that a majority of the database replicas be available before any of them are declared valid. This algorithm strongly encourages the presence of at least three initial replicas, which you create. A consensus can then be reached as long as at least two of the three replicas are available. If there is only one replica and the system crashes, it is possible that all metadevice configuration data can be lost.

    The majority consensus algorithm is conservative in the sense that it will fail if a majority consensus cannot be reached, even if one replica actually does contain the most up-to-date data. This approach guarantees that stale data will not be accidentally used, regardless of the failure scenario. The majority consensus algorithm accounts for the following: the system will stay running with exactly half or more replicas; the system will panic when less than half the replicas are available; the system will not reboot without one more than half the total replicas.

    When used with no options, the metadb command gives a short form of the status of the metadevice state database. Use metadb -i for an explanation of the flags field in the output.

    The initial state database is created using the metadb command with both the -a and -f options, followed by the slice where the replica is to reside. The -a option specifies that a replica (in this case, the initial) state database should be created. The -f option forces the creation to occur, even though a state database does not exist. (The -a and -f options should be used together only when no state databases exist.)

    Additional replicas beyond those initially created can be added to the system. They contain the same information as the existing replicas, and help to prevent the loss of the configuration information. Loss of the configuration makes operation of the metadevices impossible. To create additional replicas, use the metadb -a command, followed by the name of the new slice(s) where the replicas will reside. All replicas that are located on the same slice must be created at the same time.

    To delete all replicas that are located on the same slice, the metadb -d command is used, followed by the slice name.

    When used with the -i option, metadb displays the status of the metadevice state databases. The status can change if a hardware failure occurs or when state databases have been added or deleted.

    To fix a replica in an error state, delete the replica and add it back again.
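
    For example, a replica in an error state on an assumed slice c0t2d0s3 could be replaced as follows:

    # metadb -d c0t2d0s3     (delete the errored replica)
    # metadb -a c0t2d0s3     (add a new replica on the same slice)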

    The metadevice state database (mddb) also contains a list of the replica locations for this set (local or shared diskset).

    The local set mddb can also contain host and drive information for each of the shared disksets of which this node is a member. Other than the diskset host and drive information stored in the local set mddb, the local and shared diskset mddbs are functionally identical.

    The mddbs are written to during the resync of a mirror or during a component failure or configuration change. A configuration change or failure can also occur on a single replica (removal of a mddb or a failed disk) and this causes the other replicas to be updated with this failure information.

Options

    Root privileges are required for all of the following options except -h and -i.

    The following options can be used with the metadb command. Not all the options are compatible on the same command line. Refer to the SYNOPSIS to see the supported use of the options.

    -a

    Attach a new database device. The /kernel/drv/md.conf file is automatically updated with the new information and the /etc/lvm/mddb.cf file is updated as well. An alternate way to create replicas is by defining them in the /etc/lvm/md.tab file and specifying the assigned name at the command line in the form, mddbnn, where nn is a two-digit number given to the replica definitions. Refer to the md.tab(4) man page for instructions on setting up replicas in that file.

    -c number

    Specifies the number of replicas to be placed on each device. The default number of replicas is 1.

    -d

    Deletes all replicas that are located on the specified slice. The /kernel/drv/md.conf file is automatically updated with the new information and the /etc/lvm/mddb.cf file is updated as well.

    -f

    The -f option is used to create the initial state database. It is also used to force the deletion of replicas below the minimum of one. (The -a and -f options should be used together only when no state databases exist.)

    -h

    Displays a usage message.

    -i

    Inquire about the status of the replicas. The output of the -i option includes characters in front of the device name that represent the status of the state database. Explanations of the characters are displayed following the replica status and are as follows:

    d

    replica does not have an associated device ID.

    o

    replica active prior to last mddb configuration change

    u

    replica is up to date

    l

    locator for this replica was read successfully

    c

    replica's location was in /etc/lvm/mddb.cf

    p

    replica's location was patched in kernel

    m

    replica is master, this is replica selected as input

    r

    replica does not have device relocation information

    t

    tagged data is associated with the replica

    W

    replica has device write errors

    a

    replica is active, commits are occurring to this replica

    M

    replica had problem with master blocks

    D

    replica had problem with data blocks

    F

    replica had format problems

    S

    replica is too small to hold current database

    R

    replica had device read errors

    B

    tagged data associated with the replica is not valid

    -k system-file

    Specifies the name of the kernel file where the replica information should be written. The default system-file is /kernel/drv/md.conf. This option is for use with the local diskset only.

    -l length

    Specifies the size of each replica. The default length is 8192 blocks, which should be appropriate for most configurations. Replica sizes of less than 128 blocks are not recommended.

    -p

    Specifies updating the system file (/kernel/drv/md.conf) with entries from the /etc/lvm/mddb.cf file. This option is normally used to update a newly built system before it is booted for the first time. If the system has been built on a system other than the one where it will run, the location of the mddb.cf on the local machine can be passed as an argument. The system file to be updated can be changed using the -k option. This option is for use with the local diskset only.

    -s setname

    Specifies the name of the diskset on which the metadb command will work. Using the -s option will cause the command to perform its administrative function within the specified diskset. Without this option, the command will perform its function on local database replicas.

    slice

    Specifies the logical name of the physical slice (partition), such as /dev/dsk/c0t0d0s3.

Examples


    Example 1 Creating Initial State Database Replicas

    The following example creates the initial state database replicas on a new system.


    # metadb -a -f c0t0d0s7 c0t1d0s3 c1t0d0s7 c1t1d0s3

    The -a and -f options force the creation of the initial database and replicas. You could then create metadevices with these same slices, making efficient use of the system.



    Example 2 Adding Two Replicas on Two New Disks

    This example shows how to add two replicas on two new disks that have been connected to a system currently running Volume Manager.


    # metadb -a c0t2d0s3 c1t1d0s3


    Example 3 Deleting Two Replicas

    This example shows how to delete two replicas from the system. Assume that replicas have been set up on /dev/dsk/c0t2d0s3 and /dev/dsk/c1t1d0s3.


    # metadb -d c0t2d0s3 c1t1d0s3

    Although you can delete all replicas, you should never do so while metadevices still exist. Removing all replicas causes existing metadevices to become inoperable.


Files

    /etc/lvm/mddb.cf

    Contains the location of each copy of the metadevice state database.

    /etc/lvm/md.tab

    Workspace file for metadevice database configuration.

    /kernel/drv/md.conf

    Contains database replica information for all metadevices on a system. Also contains Solaris Volume Manager configuration information.

Exit Status

    The following exit values are returned:

    0

    successful completion

    >0

    an error occurred

Attributes

    See attributes(5) for descriptions of the following attributes:

    ATTRIBUTE TYPE 

    ATTRIBUTE VALUE 

    Availability 

    SUNWmdr 

See Also

Notes

    Replicas cannot be stored on fabric-attached storage, SANs, or other storage that is not directly attached to the system. Replicas must be on storage that is available at the same point in the boot process as traditional SCSI or IDE drives. A replica can be stored on a:

    • Dedicated local disk partition

    • Local partition that will be part of a volume

    • Local partition that will be part of a UFS logging device

