2010-09-05 00:11:23

Solaris System Configuration Files Reference

For details about the files and commands summarized here, consult the appropriate man pages.

  • /etc/bootparams Contains information regarding network boot clients.
  • /etc/cron.d/cron.allow
    /etc/cron.d/cron.deny
    Control access to crontab. Only users listed in cron.allow may use crontab; if cron.allow does not exist, access is permitted for any user not listed in the /etc/cron.d/cron.deny file.
  • /etc/defaultdomain NIS domain set by /etc/init.d/inetinit
  • /etc/default/cron Sets cron logging with the CRONLOG variable.
  • /etc/default/login Controls root logins via specification of the CONSOLE variable, as well as variables for login logging thresholds and password requirements.
  • /etc/default/su Determines logging activity for su attempts via the SULOG and SYSLOG variables, sets some initial environment variables for su sessions.
  • /etc/dfs/dfstab Determines which directories will be NFS-shared at boot time. Each line is a share command.
  • /etc/dfs/sharetab Contains a table of resources that have been shared via share.
  • /etc/group Provides groupname translation information.
  • /etc/hostname.interface Assigns a hostname to interface; assigns an IP address by cross-referencing /etc/inet/hosts (see the example after this list).
  • /etc/hosts.allow
    /etc/hosts.deny
    Determine which hosts will be allowed access to TCP wrapper mediated services.
  • /etc/hosts.equiv Determines which set of hosts will not need to provide passwords when using the "r" remote access commands (e.g., rlogin, rsh, rexec).
  • /etc/inet/hosts
    /etc/hosts
    Associates hostnames and IP addresses.
  • /etc/inet/inetd.conf
    /etc/inetd.conf
    Identifies the services that are started by inetd as well as the manner in which they are started. inetd.conf may even specify that TCP wrappers be used to protect a service.
  • /etc/inittab Used by init to determine which scripts to run for each run level, as well as the default run level.
  • /etc/logindevperm Contains information to change permissions for devices upon console logins.
  • /etc/magic Database of magic numbers that identify file types for file.
  • /etc/mail/aliases
    /etc/aliases
    Contains mail aliases recognized by sendmail.
  • /etc/mail/sendmail.cf
    /etc/sendmail.cf
    Mail configuration file for sendmail.
  • /etc/minor_perm Specifies permissions for device files; used by drvconfig
  • /etc/mnttab Contains information about currently mounted resources.
  • /etc/name_to_major List of currently configured major device numbers; used by drvconfig.
  • /etc/netconfig Network configuration database read during network initialization.
  • /etc/netgroup Defines groups of hosts and/or users.
  • /etc/netmasks Determines default netmask settings.
  • /etc/nsswitch.conf Determines order in which different information sources are accessed when performing lookups.
  • /etc/path_to_inst Contents of physical device tree using physical device names and instance numbers.
  • /etc/protocols Known protocols.
  • /etc/remote Attributes for tip sessions.
  • /etc/rmtab Currently mounted filesystems.
  • /etc/rpc Available RPC programs.
  • /etc/services Well-known networking services and associated port numbers.
  • /etc/syslog.conf Configures syslogd logging.
  • /etc/system Can be used to force kernel module loading or set kernel tuneable parameters.
  • /etc/vfstab Information for mounting local and remote filesystems.
  • /var/adm/messages Main log file used by syslogd.
  • /var/adm/sulog Default log for recording use of su command.
  • /var/adm/utmpx User and accounting information.
  • /var/adm/wtmpx User login and accounting information.
  • /var/local/etc/ftpaccess
    /var/local/etc/ftpconversions
    /var/local/etc/ftpusers
    wu-ftpd configuration files to set ftp access rights, conversion/compression types, and a list of userids to exclude from ftp operations.
  • /var/lp/log Print services activity log.
  • /var/sadm/install/contents Database of installed software packages.
  • /var/saf/_log Logs activity of SAF (Service Access Facility).
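As an illustration of the /etc/hostname.interface and /etc/inet/hosts pairing noted above, a host whose primary interface is hme0 might be configured as follows (the interface name, hostname, and address here are hypothetical):

# cat /etc/hostname.hme0
sunhost
# grep sunhost /etc/inet/hosts
192.168.1.10    sunhost    # primary interface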

2010-09-05 00:09:20

Network File System (NFS)

NFS is a network filesystem originally developed by Sun (version 2, see RFC 1094) and later enhanced by Network Appliance and other companies (versions 3 and 4 of NFS). It works well for sharing file systems between multiple clients, but it is slower than some other network filesystems (such as Samba). It is also more fault tolerant than most other network file systems.

 

With NFS, when a file or directory is shared from a remote machine, it appears to be part of your filesystem. Every time you access the NFS-linked area, you're going over the network to the other machine, but that's all transparent to you (except for some delays). Because of its popularity, implementations of NFS have been created on other operating systems, for example Windows and NetWare. A competing file-sharing protocol, SMB, originated on Windows; its Unix implementation, Samba, has also become popular.

NFS defines an abstract model of a file system. Each OS applies the NFS model to its file system semantics and implements reading and writing operations as though they were accessing a local file. NFS is also stateless: you can reboot a server and the client won't crash. It won't be able to access files on the server's export while the server is down, but once the server returns, you'll pick up right where things left off. Other network file sharing systems are not so resilient.

NFS is based on a client-server model. One computer works as a server and offers filesystems to other systems. This is called exporting or sharing, and the filesystems offered are called "exports." Clients can mount server exports using an extension of the mount command used for local filesystems.
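For example, a client might mount an export by hand (the server name and paths are hypothetical):

# mount -F nfs server1:/export/home /mnt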

File systems shared through NFS can also be mounted automatically. Autofs, a client-side service, is a change-directory intercept mechanism: it catches the case where a user changes into an NFS directory and transparently mounts it. The list of mount points is provided to Autofs in a configuration file. Essentially, any I/O operation on such a path notifies the automount daemon, automountd, which mounts the resource and, after a long period of inactivity, unmounts it. The automountd daemon transparently performs mounting and unmounting of remote directories listed in its Autofs configuration files on an as-needed basis. NFS is in turn based on the Remote Procedure Call (RPC) protocol. For this reason, the RPC server daemon must be running for NFS to work. You can check whether RPC is active by issuing this command at the shell prompt:

rpcinfo -p
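On a host where RPC and the NFS services are running, the output includes lines similar to the following (program versions and port numbers vary from system to system):

   program vers proto   port  service
    100000    4   tcp    111  rpcbind
    100003    3   udp   2049  nfs
    100005    1   udp  32771  mountd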

The NFS service makes the physical location of the file system irrelevant to the user. You can use the NFS implementation to enable users to see all the relevant files regardless of location. Instead of placing copies of commonly used files on every system, the NFS service enables you to place one copy on one computer's disk and have all other systems access it across the network. Under NFS operation, remote file systems are almost indistinguishable from local ones.

Writable NFS-shared file systems should generally be kept on a separate disk or partition on the server. By placing the file system on its own partition, we can ensure that malicious users cannot fill up the entire disk by writing large files onto it, which could otherwise crash other services that use the same disk. Also prevent normal users on an NFS client from mounting an NFS file system.

NFS controls who can mount an exported file system based on the host making the mount request, not the user that will actually use the file system. Hosts must be given explicit rights to mount the exported file system. Access control is not possible for users, other than file and directory permissions. In other words, once a file system is exported via NFS, any user on any remote host connected to the NFS server can access the shared data. To limit the potential risks, administrators can allow only read-only access or squash users to a common user and group ID. But these solutions may prevent the NFS share from being used in the way it was originally intended.

Additionally, if an attacker gains control of the DNS server used by the system exporting the NFS file system, the system associated with a particular hostname or fully qualified domain name can be pointed to an unauthorized machine. At this point, the unauthorized machine is the system permitted to mount the NFS share, since no username or password information is exchanged to provide additional security for the NFS mount. The same risks hold true for compromised NIS servers, if NIS netgroups are used to allow certain hosts to mount an NFS share. By using IP addresses in /etc/exports, this kind of attack is more difficult.

Wildcards should be used sparingly when exporting NFS shares, as the scope of a wildcard may encompass more systems than intended.

Once the NFS file system is mounted read-write by a remote host, the only protection each shared file has is its permissions. If two users that share the same userid value mount the same NFS file system, they will be able to modify each other's files. Additionally, anyone logged in as root on the client system can use the su - command to become a user who could access particular files via the NFS share.

The default behavior when exporting a file system via NFS is to use root squashing. This maps the userid of anyone accessing the NFS share as the root user on their local machine to the server's nobody account. Never turn off root squashing.

If exporting an NFS share read-only, consider using the all_squash option, which makes every user accessing the exported file system take the userid of the nobody user.
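The root_squash and all_squash options mentioned here belong to the Linux exports(5) syntax; a read-only, fully squashed export might look like the following hypothetical /etc/exports line (on Solaris, the comparable controls are the anon= and root= options of share_nfs):

/export/data   192.168.1.0/24(ro,all_squash)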

Before file systems or directories can be accessed (that is, mounted) by a client through NFS, they must be shared, or exported. Once shared, authorized NFS clients can mount the resources. The term "export" is most often reflected in directory names for NFS resources, such as /export/home or /export/swap.

You need several daemons to support NFS activities. These daemons can support both NFS client and NFS server activity, NFS server activity alone, or logging of the NFS server activity. To start the NFS server daemons or to specify the number of concurrent NFS requests that can be handled by the nfsd daemon, use the /etc/rc3.d/S15nfs.server script. The following daemons support NFS (a quick check follows the list):

  1. mountd Handles file system mount requests from remote systems, and provides access control (server)
  2. nfsd Handles client file system requests (both client and server)
  3. statd Works with the lockd daemon to provide crash recovery functions for the lock manager (server)
  4. lockd Supports record locking operations on NFS files
  5. nfslogd Provides NFS server logging. Runs only if one or more file systems is shared with the log option (server)
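A quick way to check that the server-side daemons are running, using the same commands that appear in the troubleshooting steps later in this article:

# pgrep -xl nfsd
# pgrep -xl mountd

If they are not running, start them with the /etc/init.d/nfs.server script.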

You can detect most NFS problems from console messages or from certain symptoms that appear on a client system. Some common errors are:

  1. The rpcbind failure error: incorrect host Internet address or server overload.
  2. The server not responding error: the network connection or the server is down.
  3. The NFS client fails a reboot error: the client is requesting an NFS mount using an entry in the /etc/vfstab file that specifies a foreground mount from a non-operational NFS server.
  4. The service not responding error: an accessible server is not running the NFS server daemons.
  5. The program not registered error: an accessible server is not running the mountd daemon.
  6. The stale file handle error: the file resource moved on the server. To solve the stale NFS file handle error condition, unmount and mount the resource again on the client.
  7. The unknown host error: the host name of the server is missing from the client's hosts table.
  8. The mount point error: a non-existent mount point; check that the mount point exists on the client.
  9. The no such file error: unknown file name on the server.
  10. No such file or directory: the directory does not exist on the server.

NFS Server Commands

  • share Makes a local directory on an NFS server available for mounting. Without parameters, it displays the contents of the /etc/dfs/sharetab file.
  • unshare Makes a previously available directory unavailable for client side mount operations.
  • shareall Reads and executes share statements in the /etc/dfs/dfstab file.
  • unshareall Makes previously shared resources unavailable.
  • dfshares Lists available shared resources from a remote or local NFS server.
  • dfmounts Displays a list of NFS server directories that are currently mounted.

NFS resources can be shared using the share command and unshared using the unshare command. In addition, any resources identified in the /etc/dfs/dfstab file are automatically shared at system boot or when the shareall command is used. Shared resources are automatically recorded in the /etc/dfs/sharetab file. When the unshareall command is used, all resources listed in the /etc/dfs/sharetab file are automatically unshared.
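For example, a typical /etc/dfs/dfstab entry and the command that applies it without a reboot (the path and description are hypothetical):

# cat /etc/dfs/dfstab
share -F nfs -o ro -d "project files" /export/projects
# shareall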

The share command is used to share NFS resources so that NFS clients can mount and access them. At a minimum, the full pathname of the directory (or mount point of the file system) to be shared is specified as a command-line argument. In addition, three other command-line arguments are supported:

  • The -d command-line argument is followed by a description of the data being shared.

  • The -F nfs command-line argument is used to specify the type of file system. If not specified, the default file system type listed in the /etc/dfs/fstypes file (NFS) is assumed.

  • The -o command-line argument is followed by one or more NFS-specific options (separated by commas).

For example:

# share -F nfs -o public,ro /export/home

If the share command is used without any command-line arguments, the currently shared resources will be listed.
 


The NFS server is started at run level 3. The resources are unshared and the NFS server is stopped when the system run level changes to any level other than 3. The NFS client is started at run level 2.

The unshare command is used to stop the sharing of NFS resources so that NFS clients can no longer mount and access them. At a minimum, the full pathname of a directory (or mount point of the file system) that is currently shared is specified as a command-line argument.

Only one other command-line argument is supported: the -F nfs command-line argument, which is used to specify the type of file system. If not specified, the default file system type listed in the /etc/dfs/fstypes file (NFS) is assumed.

The following listing shows using the unshare command to stop the sharing of the /export/home file system:

# unshare -F nfs /export/home

Solaris uses six configuration files to support NFS; three are used on both server and client, and three are specific to the server:

  1. /etc/dfs/dfstab (server and client) Lists share commands to be executed at boot time; similar to /etc/vfstab for local filesystems. shareall is essentially sh /etc/dfs/dfstab.
  2. /etc/dfs/sharetab (server and client; maintained automatically) Dynamically lists the directories currently being shared by the NFS server.
  3. /etc/dfs/fstypes (server and client) Lists the default file system types for remote file systems.
  4. /etc/rmtab (server; maintained automatically) Lists remotely mounted file systems.
  5. /etc/nfs/nfslog.conf (server) Defines the location of configuration logs used for NFS server logging.
  6. /etc/default/nfslogd (server) Configuration of the nfslogd daemon.

NFS logging is accomplished by the nfslogd daemon, with its configuration stored in /etc/nfs/nfslog.conf and /etc/default/nfslogd. The functions of the nfslogd daemon:

  • Converts the raw data from the logging operation into ASCII records and stores them in ASCII log files.
  • Resolves IP addresses to host names and UIDs to login names.
  • Maps the file handles to path names, and records the mappings in a file-handle-to-path mapping table. Each tag in the /etc/nfs/nfslog.conf file corresponds to one mapping table.

The NFS Logging Daemon monitors and analyzes RPC operations processed by the NFS server. If enabled, each RPC operation is stored in the NFS log file as a record that contains:

  • Time stamp

  • IP address or hostname of client

  • File or directory affected by operation

  • Type of operation: input, output, make directory, remove directory, or remove file

The NFS server logging consists of two phases. The first phase is performed by the kernel; it records RPC requests in a work buffer. The second phase is performed by the daemon; it reads the work buffer and constructs and writes the log records. The amount of time the daemon waits before reading the work buffer, along with other configurable parameters, is specified in the /etc/default/nfslogd file. The /etc/default/nfslogd file can contain a number of parameters (the initial nfslogd file provided with the Solaris 9 system contains only comments); an illustrative example follows the list:

  • CYCLE_FREQUENCY: Amount of time (in hours) of the log cycle (close the current log and open a new one). This is to prevent the logs from getting too large.

  • IDLE_TIME: Amount of time (in seconds) that the logging daemon will sleep while waiting for data to be placed in the work buffer.

  • MAPPING_UPDATE_INTERVAL: The amount of time (in seconds) between updates of the file handle to pathname mapping database.

  • MAX_LOGS_PRESERVE: The maximum number of log files to save.

  • MIN_PROCESSING_SIZE: Minimum size (in bytes) of the work buffer before the logging daemon will process its contents.

  • PRUNE_TIMEOUT: The amount of time (in hours) the access time of a file associated with a record in the pathname mapping database can remain unchanged before it is removed.

  • UMASK: umask used for the work buffer and file handle to pathname mapping database.
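A minimal, illustrative /etc/default/nfslogd might set just a few of these parameters; the values below are examples only, not recommendations:

CYCLE_FREQUENCY=24
IDLE_TIME=300
MAX_LOGS_PRESERVE=10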

The /etc/nfs/nfslog.conf file is used to specify the location of log files, the file handle to pathname mapping database, and the work buffer, along with a few other parameters. Sets of parameters can be grouped together and associated with a tag; this way multiple configurations can be specified in the configuration file. The default configuration has the tag global. The following NFS logging parameters can be set:

  • buffer: Specifies the location of the work buffer.

  • defaultdir: Specifies the default directory of files. If specified, this path is added to the beginning of other parameters that are used to specify the location of files.

  • fhtable: Specifies the location of the file handle to pathname mapping database.

  • log: Specifies the location of log files.

  • logformat: Specifies either basic (default) or extended logging.

For example:

#ident  "@(#)nfslog.conf        1.5     99/02/21 SMI"
#
# Copyright (c) 1999 by Sun Microsystems, Inc.
# All rights reserved.
#
# NFS server log configuration file.
#
# <tag> [ defaultdir=<dir_path> ] \
# [ log=<logfile_path> ] [ fhtable=<table_path> ] \
# [ buffer=<bufferfile_path> ] [ logformat=basic|extended ]
#

global  defaultdir=/var/nfs log=nfslog fhtable=fhtable buffer=nfslog_workbuffer

Logging is enabled on a per-share (file system/directory) basis by adding the -o log option to the share command.
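For example, to share a directory with logging under the default global tag (the path is hypothetical):

# share -F nfs -o ro,log /export/pub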

Note:

  • The configuration file that controls the number of NFS logs created and the permissions on the log files is named atypically:  /etc/default/nfslogd

2010-09-05 00:07:35
Source: 짜세나게 달려보자 !!! | 짜세맨
Source post: http://blog.naver.com/831jsh/70047794179

Original: http://www.softpanorama.org/Net/Application_layer/NFS/troubleshooting_of_nfs_problems.shtml

 

Troubleshooting Solaris NFS Problems

 


 

Some common NFS errors are:

  1. The rpcbind failure error
  2. The server not responding error
  3. The NFS client fails a reboot error
  4. The service not responding error
  5. The program not registered error
  6. The stale file handle error
  7. The unknown host error
  8. The mount point error
  9. The no such file error
  10. No such file or directory

Troubleshooting recommendations:

  1. The rpcbind failure Error. The following example shows the message that appears on the client
    system during the boot process or in response to an explicit mount request:
    • nfs mount: server1:: RPC: Rpcbind failure
      RPC: Timed Out
      nfs mount: retrying: /mntpoint

    The error in accessing the server is due to:

    • The combination of an incorrect Internet address and a correct host or node name in the hosts database file supporting the client node.
    • The hosts database file that supports the client has the correct server node, but the server node is temporarily unavailable due to an overload.
    To solve the rpcbind failure error condition when the server node is operational, determine if the server is out of critical resources (for example, memory, swap, or disk space).
     
  2. The server not responding Error. The following message appears during the boot process or in response to an explicit mount request, and this message indicates a known server that is inaccessible.

    NFS server server2 not responding, still trying

     Possible causes for the server not responding error are:

    • The network between the local system and the server is down. To verify that the network is down, enter the ping command (ping server2).
    •  The server ( server2) is down.
       
  3. The NFS client fails a reboot Error. If you attempt to boot an NFS client and the client node stops, waits, and echoes the following message:

    Setting default interface for multicast: add net 224.0.0.0: gateway:
    client_node_name.

    these symptoms might indicate that a client is requesting an NFS mount using an entry in the /etc/vfstab file, specifying a foreground mount from a non-operational NFS server.

    To solve this error, complete the following steps:

    1. To interrupt the failed client node press Stop-A, and boot the client into single-user mode.

    2. Edit the /etc/vfstab file to comment out the NFS mounts.

    3. To continue booting to the default run level (normally run level 3), press Control-D.

    4. Determine if all the NFS servers are operational and functioning properly.

    5. After you resolve problems with the NFS servers, remove the comments from the /etc/vfstab file.

    Note – If the NFS server is not available, an alternative to commenting out
    the entry in the /etc/vfstab file is to use the bg mount option so that the
    boot sequence can proceed in parallel with the attempt to perform the NFS mount (see the /etc/vfstab example after this list).
     

  4. The service not responding Error. The following message appears during the boot process or in response to an explicit mount request, and indicates that an accessible server is not running the NFS server daemons.


    nfs mount: dbserver: NFS: Service not responding
    nfs mount: retrying: /mntpoint

    To solve the service not responding error condition, complete the following steps:

    1.  Enter the who -r command on the server to see if it is at run level 3. If the server is not, change to run level 3 by entering the init 3 command.
    2. Enter the ps -e command on the server to check whether the NFS server daemons are running. If they are not, start them by using the /etc/init.d/nfs.server start script.
       
  5. The program not registered Error. The following message appears during the boot process or in response to an explicit mount request and indicates that an accessible server is not running the mountd daemon.

    nfs mount: dbserver: RPC: Program not registered
    nfs mount: retrying: /mntpoint

    To solve the program not registered error condition, complete the following steps:

    1.  Enter the who -r command on the server to check that it is at run level 3. If the server is not, change to run level 3 by performing the init 3 command.
    2.  Enter the pgrep -xl mountd command. If the mountd daemon is not running, start it using the /etc/init.d/nfs.server script, first with the stop flag and then with the start flag.
    3.  Check the /etc/dfs/dfstab file entries.
       
  6. The stale NFS file handle Error. This message appears when a process attempts to access a remote file resource with an out-of-date file handle. A possible cause for the stale NFS file handle error is that the file resource on the server moved. To solve the stale NFS file handle error condition, unmount and mount the resource again on the client.
     
  7. The unknown host Error. The following message indicates that the host name of the server on the client is missing from the hosts table.

    nfs mount: sserver1:: RPC: Unknown host

    To solve the unknown host error condition, verify the host name in the hosts database that supports the client node. Note – The preceding example misspelled the node name server1 as sserver1.
     

  8. The mount point Error. The following message appears during the boot process or in response to
    an explicit mount request and indicates a non-existent mount point.

    mount: mount-point /DS9 does not exist.

    To solve the mount point error condition, check that the mount point exists on the client. Check the spelling of the mount point on the command line or in the /etc/vfstab file on the client, or comment out
    the entry and reboot the system.
     

  9. The no such file Error. The following message appears during the boot process or in response to
    an explicit mount request, which indicates that there is an unknown file
    resource name on the server.
     
  10. No such file or directory. To solve the no such file error condition, check that the directory exists
    on the server. Check the spelling of the directory on the command line or in the /etc/vfstab file.
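As noted in the reboot-failure item above, the bg option can be placed in the mount-options field of /etc/vfstab so that booting does not block on an unavailable NFS server; a hypothetical entry:

#device to mount      device to fsck  mount point    FS type  fsck pass  mount at boot  mount options
server2:/export/home  -               /export/home   nfs      -          yes            bg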

Use of NFS Considered Harmful  

First of all, the phrase "considered harmful" usually signals an overly dogmatic stance on the part of the critic. Also, this critique applies mainly to older versions of the protocol; NFS v4 contains some improvements.

Following are a few known problems with NFS and suggested workarounds.

a. Time Synchronization

NFS does not synchronize time between client and server, and offers no mechanism for the client to determine what time the server thinks it is. What this means is that a client can update a file, and have the timestamp on the file be either some time long in the past, or even in the future, from its point of view.

While this is generally not an issue if clocks are a few seconds or even a few minutes off, it can be confusing and misleading to humans. Of even greater importance is the effect on programs. Programs often do not expect time differences like this, and may end abnormally or behave strangely, as various tasks time out instantly or take an extraordinarily long while to time out.

Poor time synchronization also makes debugging problems difficult, because there is no easy way to establish a chronology of events. This is especially problematic when investigating security issues, such as break in attempts.

Workaround: Use the Network Time Protocol (NTP) religiously. Use of NTP can result in machines that have extremely small time differences.

Note: The NFS protocol version 3 does have support for the client specifying the time when updating a file, but this is not widely implemented. Additionally, it does not help in the case where two clients are accessing the same file from machines with drifting clocks.

b. File Locking Semantics

Programs use file locking to insure that concurrent access to files does not occur except when guaranteed to be safe. This prevents data corruption, and allows handshaking between cooperative processes.

In Unix, the kernel handles file locking. This is required so that if a program is terminated, any locks that it has are released. It also allows the operations to be atomic, meaning that a lock cannot be obtained by multiple processes.

Because NFS is stateless, there is no way for the server to keep track of file locks - it simply does not know what clients there are or what files they are using. In an effort to solve this, a separate server, the lock daemon, was added. Typically, each NFS server will run a lock daemon.

The combination of lock daemon and NFS server yields a solution that is almost like Unix file locking. Unfortunately, file locking is extremely slow, compared to NFS traffic without file locking (or file locking on a local Unix disk). Of greater concern is the behaviour of NFS locking on failure.

In the event of server failure (e.g. server reboot or lock daemon restart), all client locks are lost. However, the clients are not informed of this, and because the other operations (read, write, and so on) are not visibly interrupted, they have no reliable way to prevent other clients from obtaining a lock on a file they think they have locked.

In the event of client failure, the locks are not immediately freed. Nor is there a timeout. If the client process terminates, the client OS kernel will notify the server, and the lock will be freed. However, if the client system shuts down abnormally (e.g. power failure or kernel panic), then the server will not be notified. When the client reboots and remounts the NFS exports, the server is notified and any client locks are freed.

If the client does not reboot, for example if a frustrated user hits the power switch and goes home for the weekend, or if a computer has had a hardware failure and must wait for replacement parts, then the locks are never freed! In this unfortunate scenario, the server lock daemon must be restarted, with the same effects as a server failure.

Workaround: If possible (given program source and skill with code modification), remove locking and ensure no inconsistency occurs via other mechanisms, possibly using atomic file creation (see below) or some other mechanism for synchronization. Otherwise, build platforms that never fail, and have a staff trained on the implications of NFS file locking failure. If NFS is used only for files that are never accessed by more than a single client, locking is not an issue.

Note: A status monitor mechanism exists to monitor client status, and free client locks if a client is unavailable. However, clients may choose not to use this mechanism, and in many implementations do not.

c. File Locking API

In Unix, there are two flavours of file locking, flock() from BSD and lockf() from System V. It varies from system to system which of these mechanisms work with NFS. In Solaris, Sun's Unix variant, lockf() works with NFS, and flock() is implemented via lockf(). On other systems, the results are less consistent. For example, on some systems, lockf() is not implemented at all, and flock() does not support NFS; while on other systems, lockf() supports NFS but flock() does not.

Regardless of the system specifics, programs often assume that if they are unable to obtain a lock, it is because another program has the lock. This can cause problems as programs wait for the lock to be freed. Since the reason the lock fails is because locking is unsupported, the attempt to obtain a lock will never work. This results in either the applications waiting forever, or aborting their operation.

These results will also vary with the support of the server. While typically the NFS server runs an accompanying lock daemon, this is not guaranteed.

Workaround: Upgrade to the latest versions of all operating systems, as they usually have improved and more consistent locking support. Also, use the lock daemon. Additionally, try to use only programs written to handle NFS locking properly, verified either by code review or a vendor compliance statement.

d. Exclusive File Creation

In Unix, when a program creates a file, it may ask for the operation to fail if the file already exists (as opposed to the default behaviour of using the existing file). This allows programs to know that, for example, they have a unique file name for a temporary file. It is also used by various daemons for locking various operations, e.g. modifying mail folders or print queues.

Unfortunately, NFS does not properly implement this behaviour. A file creation will sometimes return success even if the file already exists. Programs written to work on a local file system will experience strange results when they attempt to update a file after using file creation to lock it, only to discover that another program is modifying it because it also "locked" the file via the same mechanism (I have personally seen mailboxes with hundreds of mail messages corrupted because of this).

Workaround: If possible (given program source and skill with code modification), use the following method, as documented in the Linux open() manual page:

The solution for performing atomic file locking using a lockfile is to create a unique file on the same fs (e.g., incorporating hostname and pid), use link(2) to make a link to the lockfile and use stat(2) on the unique file to check if its link count has increased to 2. Do not use the return value of the link() call.

This still leaves the issue of client failure unanswered. The suggested solution for this is to pick a timeout value and assume if a lock is older than a certain application-specific age that it has been abandoned.

e. Delayed Write Caching

In an effort to improve efficiency, many NFS clients cache writes. This means that they delay sending small writes to the server, with the idea that if the client makes another small write in a short amount of time, the client need only send a single message to the server.

Unix servers typically cache disk writes to local disks the same way. The difference is that Unix servers also keep track of the state of the file in the cache memory versus the state on disk, so programs are all presented with a single view of the file.

In NFS caching, all applications on a single client will typically see the same file contents. However, applications accessing the file from different clients will not see the same file for several seconds.

Workaround: It is often possible to disable client write caching. Unfortunately, this frequently causes unacceptably slow performance, depending on the application. (Applications that perform I/O of large chunks of data should be unaffected, but applications that perform lots of small I/O operations will be severely punished.) If locking is employed, applications can explicitly cooperate and flush files from the local cache to the server, but see the previous sections on locking when employing this solution.

f. Read Caching and File Access Time

Unix file systems typically have three times associated with a file: the time of last modification (file creation or write), the time of last "change" (write or change of inode information), and the time of last access (file execution or read). NFS file systems also report this information.

NFS clients perform attribute caching for efficiency reasons. Reading small amounts of data does not update the access time on the server. This means a server may report a file has been unaccessed for a much longer time than is accurate.

This can cause problems as administrators and automatic cleanup software may delete files that have remained unused for a long time, expecting them to be stale lock files, abandoned temporary files and so on.

Workaround: Attribute caching may be disabled on the client, but this is usually not a good idea for performance reasons. Administrators should be trained to understand the behaviour of NFS regarding file access time. Any programs that rely on access time information should be modified to use another mechanism.

g. Indestructible Files

In Unix, when a file is opened, the data of that file is accessible to the process that opened it, even if the file is deleted. The disk blocks the file uses are freed only when the last process which has it open has closed it.

An NFS server, being stateless, has no way to know what clients have a file open. Indeed, in NFS, clients never really "open" or "close" files. So when a file is deleted, the server merely frees the space. Woe be unto any client that was expecting the file contents to be accessible as before, as in the Unix world!

In an effort to minimize this as much as possible, when a client deletes a file, the operating system checks if any process on the same client box has it open. If it does, the client renames the file to a "hidden" file. Any read or write requests from processes on the client that were to the now-deleted file go to the new file.

This file is named in the form .nfsXXXX, where the XXXX value is determined by the inode of the deleted file - basically a random value. If a process (such as rm) attempts to delete this new file from the client, it is replaced by a new .nfsXXXX file, until the process with the file open closes it.

These files are difficult to get rid of, as the process with the file open needs to be killed, and it is not easy to determine what that process is. These files may have unpleasant side effects such as preventing directories from being removed.

If the server or client crashes while a .nfsXXXX file is in use, they will never be deleted. There is no way for the server or a client to know whether a .nfsXXXX file is currently being used by a client or not.

Workaround: One should be able to delete .nfsXXXX files from another client, however if a process writes to the file, it will be created at that time. It would be best to exit or kill processes using an NFS file before deleting it. Unfortunately, there is no way to know if an uncooperative process has a file open.

h. User and Group Names and Numbers

NFS uses user and group numbers, rather than names. This means that each machine that accesses an NFS export needs (or at least should) have the same user and group identifiers as the NFS export has. Note that this problem is not unique to NFS, and also applies, for instance, to removable media and archives. It is most frequently an issue with NFS, however.

Workaround: Either the /etc/passwd and /etc/group files must be synchronized, or something like NIS needs to be used for this purpose.

i. Superuser Account

NFS has special handling of the superuser account (also known as the root account). By default, the root user may not update files on an NFS mount.

Normally on a Unix system, root may do anything to any file. When an NFS drive has been mounted, this is no longer the case. This can confuse scripts and administrators alike.

To clarify: a normal user (for example "shane" or "billg") can update files that the superuser ("root") cannot.

Workaround: Enable root access to specific clients for NFS exports, but only in a trusted environment, since NFS is insecure; this does not guarantee that an unauthorized client will be unable to access the mount as root.
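For instance, on Solaris root access can be limited to a specific client with the root= option of share_nfs, leaving root squashing in effect for everyone else (the hostname is hypothetical):

# share -F nfs -o rw,root=adminhost /export/home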


2010-09-05 00:04:32

/etc/inet/hosts File Format

The /etc/inet/hosts file uses the basic syntax that follows. Refer to the hosts(4) man page for complete syntax information.

IPv4-address hostname [nicknames] [#comment]

IPv4-address

Contains the IPv4 address for each interface that the local host must recognize.

hostname

Contains the host name that is assigned to the system at setup, plus the host names that are assigned to additional network interfaces that the local host must recognize.

[nickname]

Is an optional field that contains a nickname for the host.

[#comment]

Is an optional field for a comment.
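A minimal entry following this syntax might look like the following (the address and names are hypothetical):

192.168.1.10   pluto   pluto-1   # main file server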


2010-09-05 00:02:51

Name

    smf– service management facility

Description

    The Solaris service management facility defines a programming model for providing persistently running applications called services. The facility also provides the infrastructure in which to run services. A service can represent a running application, the software state of a device, or a set of other services. Services are represented in the framework by service instance objects, which are children of service objects. Instance objects can inherit or override the configuration of the parent service object, which allows multiple service instances to share configuration information. All service and instance objects are contained in a scope that represents a collection of configuration information. The configuration of the local Solaris instance is called the “localhost” scope, and is the only currently supported scope.

    Each service instance is named with a fault management resource identifier (FMRI) with the scheme “svc:”. For example, the syslogd(1M) daemon started at system startup is the default service instance named:

    svc://localhost/system/system-log:default
    svc:/system/system-log:default
    system/system-log:default

    In the above example, 'default' is the name of the instance and 'system/system-log' is the service name. Service names may comprise multiple components separated by slashes (/). All components, except the last, compose the category of the service. Site-specific services should be named with a category beginning with 'site'.

    A service instance is either enabled or disabled. All services can be enabled or disabled with the svcadm(1M) command.

    The list of managed service instances on a system can be displayed with the svcs(1) command.
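    For example, the system log instance named above can be inspected and managed as follows; the output shown is illustrative and abbreviated:

    # svcs system/system-log
    STATE          STIME    FMRI
    online         10:12:31 svc:/system/system-log:default
    # svcadm disable system/system-log
    # svcadm enable system/system-log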

    Dependencies

      Service instances may have dependencies on services or files. Those dependencies govern when the service is started and automatically stopped. When the dependencies of an enabled service are not satisfied, the service is kept in the offline state. When its dependencies are satisfied, the service is started. If the start is successful, the service is transitioned to the online state. Whether a dependency is satisfied is determined by its type:

      require_all

      Satisfied when all cited services are running (online or degraded), or when all indicated files are present.

      require_any

      Satisfied when one of the cited services is running (online or degraded), or when at least one of the indicated files is present.

      optional_all

      Satisfied if the cited services are running (online or degraded) or will not run without administrative action (disabled, maintenance, not present, or offline waiting for dependencies which will not start without administrative action).

      exclude_all

      Satisfied when all of the cited services are disabled, in the maintenance state, or when cited services or files are not present.

      Once running (online or degraded), if a service cited by a require_all, require_any, or optional_all dependency is stopped or refreshed, the SMF considers why the service was stopped and the restart_on attribute of the dependency to decide whether to stop the service.

                         |  restart_on value
      event              |  none  error restart refresh
      -------------------+------------------------------
      stop due to error  |  no    yes   yes     yes
      non-error stop     |  no    no    yes     yes
      refresh            |  no    no    no      yes

      A service is considered to have stopped due to an error if the service has encountered a hardware error or a software error such as a core dump. For exclude_all dependencies, the service is stopped if the cited service is started and the restart_on attribute is not none.

      The dependencies on a service can be listed with svcs(1) or svccfg(1M), and modified with svccfg(1M).

    Restarters

      Each service is managed by a restarter. The master restarter, svc.startd(1M) manages states for the entire set of service instances and their dependencies. The master restarter acts on behalf of its services and on delegated restarters that can provide specific execution environments for certain application classes. For instance, inetd(1M) is a delegated restarter that provides its service instances with an initial environment composed of a network connection as input and output file descriptors. Each instance delegated to inetd(1M) is in the online state. While the daemon of a particular instance might not be running, the instance is available to run.

      As dependencies are satisfied when instances move to the online state, svc.startd(1M) invokes start methods of other instances or directs the delegated restarter to do so. These operations might overlap.

      The current set of services and associated restarters can be examined using svcs(1). A description of the common configuration used by all restarters is given in smf_restarter(5).

    Methods

      Each service or service instance must define a set of methods that start, stop, and, optionally, refresh the service. See smf_method(5) for a more complete description of the method conventions for svc.startd(1M) and similar fork(2)-exec(2) restarters.

      Administrative methods, such as for the capture of legacy configuration information into the repository, are discussed on the svccfg(1M) manual page.

      The methods for a service can be listed and modified using the svccfg(1M) command.

    States

      Each service instance is always in a well-defined state based on its dependencies, the results of the execution of its methods, and its potential receipt of events from the contracts filesystem. The following states are defined:

      UNINITIALIZED

      This is the initial state for all service instances. Instances are moved to maintenance, offline, or a disabled state upon evaluation by svc.startd(1M) or the appropriate restarter.

      OFFLINE

      The instance is enabled, but not yet running or available to run. If restarter execution of the service start method or the equivalent method is successful, the instance moves to the online state. Failures might lead to a degraded or maintenance state. Administrative action can lead to the uninitialized state.

      ONLINE

      The instance is enabled and running or is available to run. The specific nature of the online state is application-model specific and is defined by the restarter responsible for the service instance. Online is the expected operating state for a properly configured service with all dependencies satisfied. Failures of the instance can lead to a degraded or maintenance state. Failures of services on which the instance depends can lead to offline or degraded states.

      DEGRADED

      The instance is enabled and running or available to run. The instance, however, is functioning at a limited capacity in comparison to normal operation. Failures of the instance can lead to the maintenance state. Failures of services on which the instance depends can lead to offline or degraded states. Restoration of capacity should result in a transition to the online state.

      MAINTENANCE

      The instance is enabled, but not able to run. Administrative action is required to restore the instance to offline and subsequent states. The maintenance state might be a temporarily reached state if an administrative operation is underway.

      DISABLED

      The instance is disabled. Enabling the service results in a transition to the offline state and eventually to the online state with all dependencies satisfied.

      LEGACY-RUN

      This state represents a legacy instance that is not managed by the service management facility. Instances in this state have been started at some point, but might or might not be running. Instances can only be observed using the facility and are not transferred into other states.

      States can also have transitions that result in a return to the originating state.

    Properties and Property Groups

      The dependencies, methods, delegated restarter, and instance state mentioned above are represented as properties or property groups of the service or service instance. A service or service instance has an arbitrary number of property groups in which to store application data. Using property groups in this way allows the configuration of the application to derive the attributes that the repository provides for all data in the facility. The application can also use the appropriate subset of the service_bundle(4) DTD to represent its configuration data within the framework.

      Property lookups are composed. If a property group-property combination is not found on the service instance, most commands and the high-level interfaces of libscf(3LIB) search for the same property group-property combination on the service that contains that instance. This feature allows common configuration among service instances to be shared. Composition can be viewed as an inheritance relationship between the service instance and its parent service.

      Properties are protected from modification by unauthorized processes. See smf_security(5).

    Snapshots

      Historical data about each instance in the repository is maintained by the service management facility. This data is made available as read-only snapshots for administrative inspection and rollback. The following set of snapshot types might be available:

      initial

      Initial configuration of the instance created by the administrator or produced during package installation.

      last_import

      Configuration as prescribed by the manifest of the service that is taken during svccfg(1M) import operation. This snapshot provides a baseline for determining property customization.

      previous

      Current configuration captured when an administrative undo operation is performed.

      running

      The running configuration of the instance.

      start

      Configuration captured during a successful transition to the online state.

      The svccfg(1M) command can be used to interact with snapshots.

    Special Property Groups

      Some property groups are marked as “non-persistent”. These groups are not backed up in snapshots and their content is cleared during system boot. Such groups generally hold an active program state which does not need to survive system restart.

    Configuration Repository

      The current state of each service instance, as well as the properties associated with services and service instances, is stored in a system repository managed by svc.configd(1M). This repository is transactional and able to provide previous versions of properties and property groups associated with each service or service instance.

      The repository for service management facility data is managed by svc.configd(1M).

    Service Bundles, Manifests, and Profiles

      The information associated with a service or service instance that is stored in the configuration repository can be exported as XML-based files. Such XML files, known as service bundles, are portable and suitable for backup purposes. Service bundles are classified as one of the following types:

      manifests

      Files that contain the complete set of properties associated with a specific set of services or service instances.

      profiles

      Files that contain a set of service instances and values for the enabled property on each instance.

      Service bundles can be imported or exported from a repository using the svccfg(1M) command. See service_bundle(4) for a description of the service bundle file format with guidelines for authoring service bundles.

      A service archive is an XML file that contains the description and persistent properties of every service in the repository, excluding transient properties such as service state. This service archive is basically a 'svccfg export' for every service which is not limited to named services.

    Legacy Startup Scripts

      Startup programs in the /etc/rc?.d directories are executed as part of the corresponding run-level milestone:

      /etc/rcS.d

      milestone/single-user:default

      /etc/rc2.d

      milestone/multi-user:default

      /etc/rc3.d

      milestone/multi-user-server:default

      Execution of each program is represented as a reduced-functionality service instance named by the program's path. These instances are held in a special legacy-run state.

      These instances do not have an enabled property and, generally, cannot be manipulated with the svcadm(1M) command. No error diagnosis or restart is done for these programs.

See Also

2010-09-05 00:00:43

Name

    dfstab– file containing commands for sharing resources across a network

Description

    dfstab resides in directory /etc/dfs and contains commands for sharing resources across a network. dfstab gives a system administrator a uniform method of controlling the automatic sharing of local resources.

    Each line of the dfstab file consists of a share(1M) command. The dfstab file can be read by the shell to share all resources. System administrators can also prepare their own shell scripts to execute particular lines from dfstab.

    The contents of dfstab are put into effect when the command shown below is run. See svcadm(1M).

    /usr/sbin/svcadm enable network/nfs/server

See Also

2010-09-04 23:59:19

Name

    sharetab– shared file system table

Description

    sharetab resides in directory /etc/dfs and contains a table of local resources shared by the share command.

    Each line of the file consists of the following fields:

    pathname resource fstype specific_options description

    where

    pathname

    Indicate the path name of the shared resource.

    resource

    Indicate the symbolic name by which remote systems can access the resource.

    fstype

    Indicate the file system type of the shared resource.

    specific_options

    Indicate file-system-type-specific options that were given to the share command when the resource was shared.

    description

    Describe the shared resource provided by the system administrator when the resource was shared.
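    A line in /etc/dfs/sharetab for an NFS share might therefore look like the following (the path and description are hypothetical):

    /export/home    -       nfs     rw      home directories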

See Also

2010-09-04 23:58:03
Values for Creating Vendor Category Options for Solaris Clients

All of the options below apply to the vendor client classes SUNW.Ultra-1, SUNW.Ultra-30, and SUNW.i86pc.*

Name      Code   Data Type    Description
SrootOpt  1      ASCII text   NFS mount options for the client's root file system
SrootIP4  2      IP address   IP address of root server
SrootNM   3      ASCII text   Host name of root server
SrootPTH  4      ASCII text   Path to the client's root directory on the root server
SswapIP4  5      IP address   IP address of swap server
SswapPTH  6      ASCII text   Path to the client's swap file on the swap server
SbootFIL  7      ASCII text   Path to the client's boot file
Stz       8      ASCII text   Time zone for client
SbootRS   9      NUMBER       NFS read size used by standalone boot program when it loads the kernel
SinstIP4  10     IP address   IP address of JumpStart install server
SinstNM   11     ASCII text   Host name of install server
SinstPTH  12     ASCII text   Path to installation image on install server
SsysidCF  13     ASCII text   Path to sysidcfg file, in the format server:/path
SjumpsCF  14     ASCII text   Path to JumpStart configuration file, in the format server:/path
Sterm     15     ASCII text   Terminal type

* The vendor client classes determine what classes of client can use the option. Vendor client classes listed here are suggestions only. You should specify client classes that indicate the actual clients in your network that need to install from the network. See Table 4–9 for information about how to determine a client's vendor client class.


2010-09-04 23:56:08

Name

    mnttab– mounted file system table

Description

    The file /etc/mnttab is really a file system that provides read-only access to the table of mounted file systems for the current host. /etc/mnttab is read by programs using the routines described in getmntent(3C). Mounting a file system adds an entry to this table. Unmounting removes an entry from this table. Remounting a file system causes the information in the mounted file system table to be updated to reflect any changes caused by the remount. The list is maintained by the kernel in order of mount time. That is, the first mounted file system is first in the list and the most recently mounted file system is last. When mounted on a mount point the file system appears as a regular file containing the current mnttab information.

    Each entry is a line of fields separated by TABs in the form:

    special   mount_point   fstype   options   time
    

    where:

    special

    The name of the resource that has been mounted.

    mount_point

    The pathname of the directory on which the filesystem is mounted.

    fstype

    The file system type of the mounted file system.

    options

    The mount options. See respective mount file system man page in the See Also section below.

    time

    The time at which the file system was mounted.

    Examples of entries for the special field include the pathname of a block-special device, the name of a remote file system in the form of host:pathname, or the name of a swap file, for example, a file made with mkfile(1M).
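
    As a hedged illustration (the device names, options, and times below are invented), mnttab entries resemble the following:

    /dev/dsk/c0t0d0s0    /            ufs     rw,intr,largefiles,logging,xattr,onerror=panic,dev=2200000   1283601000
    swap                 /tmp         tmpfs   xattr,size=512m,dev=4940002                                  1283601005
    mailsvr:/var/mail    /var/mail    nfs     rw,intr,bg,xattr,dev=4a00001                                 1283601010

    The fields are actually separated by TAB characters; they are aligned here only for readability.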

ioctls

    The following ioctl(2) calls are supported:

    MNTIOC_NMNTS

    Returns the count of mounted resources in the current snapshot in the uint32_t pointed to by arg.

    MNTIOC_GETDEVLIST

    Returns an array of uint32_t's that is twice as long as the length returned by MNTIOC_NMNTS. Each pair of numbers is the major and minor device number for the file system at the corresponding line in the current /etc/mnttab snapshot. arg points to the memory buffer to receive the device number information.

    MNTIOC_SETTAG

    Sets a tag word into the options list for a mounted file system. A tag is a notation that will appear in the options string of a mounted file system but it is not recognized or interpreted by the file system code. arg points to a filled in mnttagdesc structure, as shown in the following example:

    uint_t  mtd_major;  /* major number for mounted fs */
    uint_t  mtd_minor;  /* minor number for mounted fs */
    char    *mtd_mntpt; /* mount point of file system */
    char    *mtd_tag;   /* tag to set/clear */

    If the tag already exists then it is marked as set but not re-added. Tags can be at most MAX_MNTOPT_TAG long.

    Use of this ioctl is restricted to processes with the {PRIV_SYS_MOUNT} privilege.

    MNTIOC_CLRTAG

    Marks a tag in the options list for a mounted file system as not set. arg points to the same structure as MNTIOC_SETTAG, which identifies the file system and tag to be cleared.

    Use of this ioctl is restricted to processes with the {PRIV_SYS_MOUNT} privilege.

Errors

    EFAULT

    The arg pointer in an MNTIOC_ ioctl call pointed to an inaccessible memory location or a character pointer in a mnttagdesc structure pointed to an inaccessible memory location.

    EINVAL

    The tag specified in a MNTIOC_SETTAG call already exists as a file system option, or the tag specified in a MNTIOC_CLRTAG call does not exist.

    ENAMETOOLONG

    The tag specified in a MNTIOC_SETTAG call is too long or the tag would make the total length of the option string for the mounted file system too long.

    EPERM

    The calling process does not have {PRIV_SYS_MOUNT} privilege and either a MNTIOC_SETTAG or MNTIOC_CLRTAG call was made.

Files

    /etc/mnttab

    Usual mount point for mnttab file system

    /usr/include/sys/mntio.h

    Header file that contains IOCTL definitions

See Also

Warnings

    The mnttab file system provides the previously undocumented dev=xxx option in the option string for each mounted file system. This is provided for legacy applications that might have been using the dev= information.

    Using the dev= option in applications is strongly discouraged. The device number string represents a 32-bit quantity and might not contain correct information in 64-bit environments.

    Applications requiring device number information for mounted file systems should use the getextmntent(3C) interface, which functions properly in either 32- or 64-bit environments.

Notes

    The snapshot of the mnttab information is taken any time a read(2) is performed at offset 0 (the beginning) of the mnttab file. The file modification time returned by stat(2) for the mnttab file is the time of the last change to mounted file system information. A poll(2) system call requesting a POLLRDBAND event can be used to block and wait for the system's mounted file system information to be different from the most recent snapshot since the mnttab file was opened.


2010-09-04 23:54:32

Name

    vfstab - table of file system defaults

Description

    The file /etc/vfstab describes defaults for each file system. The information is stored in a table with the following column headings:


    device       device       mount      FS      fsck    mount      mount
    to mount     to fsck      point      type    pass    at boot    options

    The fields in the table are space-separated and show:

    device to mount    the resource name
    device to fsck     the raw device to fsck
    mount point        the default mount directory
    FS type            the name of the file system type
    fsck pass          the number used by fsck to decide whether to check the file system automatically
    mount at boot      whether the file system should be mounted automatically by mountall
    mount options      the file system mount options

    (See the respective mount file system man page below in See Also for mount options.) A '-' indicates no entry in a field. This may be used when a field does not apply to the resource being mounted.

    The getvfsent(3C) family of routines is used to read and write to /etc/vfstab.

    /etc/vfstab can be used to specify swap areas. Such an entry, which can be a file or a device, is automatically added as a swap area by the /sbin/swapadd script when the system boots. To specify a swap area, the device-to-mount field contains the name of the swap file or device, the FS type is "swap", mount at boot is "no", and all other fields have no entry.
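
    For example, a dedicated swap slice (hypothetical device name) would be specified as:

    /dev/dsk/c0t0d0s1   -   -   swap   -   no   -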

Examples

    The following are vfstab entries for various file system types supported in the Solaris operating environment.


    Example 1 NFS and UFS Mounts

    The following entry invokes NFS to automatically mount the directory /usr/local of the server example1 on the client's /usr/local directory with read-only permission:


    example1:/usr/local - /usr/local nfs - yes ro

    The following example assumes a small departmental mail setup, in which clients mount /var/mail from a server mailsvr. The following entry would be listed in each client's vfstab:


    mailsvr:/var/mail - /var/mail nfs - yes intr,bg

    The following is an example for a UFS file system in which logging is enabled:


    /dev/dsk/c2t10d0s0 /dev/rdsk/c2t10d0s0 /export/local ufs 3 yes logging

    See mount_nfs(1M) for a description of NFS mount options and mount_ufs(1M) for a description of UFS options.



    Example 2 pcfs Mounts

    The following example mounts a pcfs file system on a fixed hard disk on an x86 machine:


    /dev/dsk/c1t2d0p0:c - /win98 pcfs - yes -

    The example below mounts a Jaz drive on a SPARC machine. Normally, the volume management software handles mounting of removable media, obviating a vfstab entry. Specifying a device that supports removable media in vfstab, with the mount-at-boot field set to no (as shown below), disables the automatic handling of that device. Such an entry presumes you are not running volume management software.


    /dev/dsk/c1t2d0s2:c - /jaz pcfs - no -

    For removable media on a SPARC machine, the convention for the slice portion of the disk identifier is to specify s2, which stands for the entire medium.

    For pcfs file systems on x86 machines, note that the disk identifier uses a p (p0) and a logical drive (c, in the /win98 example above) for a pcfs logical drive. See mount_pcfs(1M) for syntax for pcfs logical drives and for pcfs-specific mount options.



    Example 3 CacheFS Mount

    Below is an example for a CacheFS file system. Because of the length of this entry and the fact that vfstab entries cannot be continued to a second line, the vfstab fields are presented here in a vertical format. In re-creating such an entry in your own vfstab, you would enter values as you would for any vfstab entry, on a single line.


    device to mount:  svr1:/export/abc 
    device to fsck:  /usr/abc 
    mount point:  /opt/cache 
    FS type:  cachefs 
    fsck pass:  7 
    mount at boot:  yes 
    mount options: 
    local-access,bg,nosuid,demandconst,backfstype=nfs,cachedir=/opt/cache
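
    Written on a single line, as it must actually appear in vfstab, the same entry reads:

    svr1:/export/abc /usr/abc /opt/cache cachefs 7 yes local-access,bg,nosuid,demandconst,backfstype=nfs,cachedir=/opt/cache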

    See mount_cachefs(1M) for CacheFS-specific mount options.



    Example 4 Loopback File System Mount

    The following is an example of mounting a loopback (lofs) file system:


    /export/test - /opt/test lofs - yes -

    See lofs(7FS) for an overview of the loopback file system.


See Also