Check VSAM File Attributes
When the file is closed, the activity count is reset to its positive complement. When the region opens the VSAM file, all components of that file must have equal activity counts. This means the index and data components of the primary file and all alternate index files must have the same activity counts. If the file is spanned, each segment must have that same activity count.
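As a rough illustration of this consistency rule (the structures and counts here are hypothetical, not Sun MTP internals), the open-time check amounts to requiring one common activity count across every component of the file:

```python
# Hypothetical sketch of the open-time activity-count check.
# Component names and counts are illustrative only.

def can_open(components):
    """All components (data, index, alternate indexes, spanned
    segments) must carry the same activity count."""
    counts = [count for _name, count in components]
    return len(set(counts)) == 1

# Consistent: every component was restored from the same backup.
ok = can_open([("data", 7), ("index", 7), ("aix1", 7)])

# Inconsistent: the index came from an older backup, so the
# open is bypassed until the mismatch is resolved.
bad = can_open([("data", 7), ("index", 5), ("aix1", 7)])
```

When the check fails, the kixverify utility described below is the tool for displaying and adjusting the counts.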

If the activity count of one component of a file is not consistent with the other components, the file cannot be opened. The opening of the file is bypassed to allow time to determine why the activity counts are inconsistent and to either override or correct the error. Activity counts may be inconsistent if all components of a file were not restored from the same backup.

Use the kixverify utility to display and modify activity counts. If you suspect that a dataset is corrupted, use kixvalfle with the -ik options on a closed dataset to check its integrity. Capture the output in a file in case technical support staff need it for problem analysis. If the file is a KSDS or alternate index file, the links between the blocks of the index file are also checked. If kixvalfle reports no errors, your dataset is not corrupted and no further action is necessary.

However, if a significant proportion of the file contains free blocks, you might want to reorganize it. See Reorganizing a Dataset. If kixvalfle reports errors, you must perform corrective actions on the dataset.

The error message types dictate the appropriate actions. If the index file or the alternate index files are not synchronized with the main data file, use the unikixbld utility to rebuild the index files without having to rebuild the data file; unikixbld provides options for rebuilding index files.

Logical records are in ascending order within a VSAM block, and the blocks are chained together, but not necessarily in physical sequence. For example, Block 1 is chained to Block 7, which is chained to Block 3, and so on.

Block 1 has the group of logical records with the lowest key values, and within Block 1 these records are in ascending sequence. Block 7 has the group of logical records that are next in ascending sequence, then Block 3, and so on. As logical records are deleted, blocks may become empty (no logical records are present). These empty blocks are marked as free and are not returned to the operating system.
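The chaining described above can be sketched as follows. The block numbers and keys match the example, but the data structure itself is purely illustrative, not a real VSAM layout:

```python
# Toy model of VSAM block chaining: records are in ascending key
# order within each block, and blocks are chained in key sequence
# even though their physical order (1, 3, 7) differs.
blocks = {
    1: {"next": 7, "keys": [10, 20, 30]},
    7: {"next": 3, "keys": [40, 50]},
    3: {"next": None, "keys": [60, 70]},
}

def scan_in_key_order(blocks, first=1):
    """Follow the chain from the first block, yielding keys."""
    blk = first
    while blk is not None:
        yield from blocks[blk]["keys"]
        blk = blocks[blk]["next"]

keys = list(scan_in_key_order(blocks))
assert keys == sorted(keys)  # the chain yields ascending key order
```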

These free blocks are then reused when new records are inserted into the dataset. If no free blocks are available, a new block is requested from the operating system. Reorganizing datasets reclaims free disk space and may improve disk access, but this depends on the fragmentation level of the dataset and your file system at file creation time, file update time, and file reorganization time.

It can also depend on your operating system and the type of disk controllers you use. Physical disk access behavior is difficult to predict, and it changes over the life of the dataset. To reorganize a dataset:

1. Execute the kixfile command to make the dataset read-only.
2. Use the unikixbld utility to write the contents of the dataset to a sequential file.
3. Use unikixbld to restore the contents from the sequential file.

The command used in Step 3 results in the default fill percentage of 0, which means all blocks contain as many logical records as can fit. This is a good choice if you know that new records inserted in the future will have keys with values higher than your present set. However, if you expect new records to be inserted into existing VSAM blocks, it makes sense to leave space for these insertions.
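A small arithmetic sketch of the fill percentage. The block and record sizes below are assumed for illustration only; they are not Sun MTP defaults:

```python
# Hypothetical arithmetic: how many records a block initially holds
# for a given fill percentage (sizes chosen for illustration only).
block_size = 4096   # bytes per VSAM block (assumed)
record_len = 100    # bytes per logical record (assumed)

def records_loaded(fill_percent):
    """Fill percentage 0 (the default) means blocks are loaded
    completely full; a nonzero value leaves that percentage of each
    block free for future record insertions."""
    capacity = block_size // record_len
    if fill_percent == 0:
        return capacity
    return max(1, capacity * (100 - fill_percent) // 100)

full = records_loaded(0)    # default: pack blocks completely
loose = records_loaded(25)  # leave about 25% of each block free
```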

Use the fill percentage option to specify the amount of space. Use the kixsalvage utility to salvage a dataset. Carefully examine the output from kixsalvage and determine if it is better to go back to your last backed-up copy. Type the kixsalvage command to generate a new dataset. If you omit the -c option, kixsalvage generates a recordv format file that you can sort with an external sort utility.

Use unikixbld from a batch script to initialize the corrupted dataset and reload the contents from the sorted file that you just created. You must also specify recovery at the individual file level in the FCT. In addition to recovery for databases, the software supports recovery for temporary storage queues, transient data queues (TDQs), and asynchronous transaction start (ATI) requests.

Transaction abort recovery occurs when a transaction aborts. Any database updates that were performed by the transaction are backed out so that the failed transaction does not affect the database. This is called dynamic transaction back-out. Other transactions cannot access updated records until the updating transaction terminates successfully.

This prevents the contamination of the database by data from failed transactions being passed to successful transactions prior to the abort. See Recovering From a Transaction Abort. System crash recovery occurs after a crash caused by a hardware problem, or by a software problem in any of the Sun MTP components or the operating system. See Recovering From a System Crash. When recovery is not enabled, the actions described above do not take place, and the application environment can become chaotic.

A transaction abort, system crash, or region crash may make your database invalid. Even if none of these occur, applications can read in-process updates of other transactions. The application designer must understand these inconsistencies and plan for them. Other recovery issues to consider when designing your application are conversational transactions, which can cause problems in a multiuser environment (see Conversational Transactions and Recovery), and maintaining database integrity (see Maintaining Database Integrity).

The size of a logical record may be greater than the VSAM block size. Dynamic transaction back-out is used to roll back database updates when a transaction fails.

When a record is written to the database, a copy of the original record is written to the recovery file. This copy, called the before image, identifies the transaction that created it.

Marker records indicating the start and end of each transaction that updates the database, as well as any syncpoints, are also written to the recovery file. The recovery file is a circular file: when it reaches a predefined maximum size, records are reused starting from the beginning. Sun MTP also keeps in memory, for each record written to the database, the offset of its before image in the recovery file.

When a transaction aborts, the software uses the stored offsets to read from the recovery file each record associated with the transaction.

Each before image associated with the failed transaction is restored to the database. At this point, all the records that were updated by the failed transaction are backed out and the state of the database is the same as it was before the transaction was executed. After the recovery file back-outs are complete, the software rolls back all updates that the failed transaction made to Temporary Storage for queues that are defined as recoverable in the TST.

It also rolls back updates to intrapartition transient data and all recoverable asynchronous START requests that the failed transaction may have issued. However, such requests are not scheduled until the transaction issues a syncpoint or completes successfully. This ensures that all updates made by a failed transaction to recoverable resources (VSAM files, temporary storage, intrapartition transient data, and asynchronous STARTs) are rolled back during dynamic transaction back-out.
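Conceptually, dynamic transaction back-out works like the following sketch. The structures are hypothetical: the real recovery file is a circular binary file on disk, not a Python list:

```python
# Conceptual sketch of dynamic transaction back-out: a before image
# is captured on each write and restored, newest first, when the
# transaction aborts.

database = {"R1": "old1", "R2": "old2"}
recovery_log = []  # entries of (transaction_id, key, before_image)

def update(txn, key, value):
    recovery_log.append((txn, key, database[key]))  # save before image
    database[key] = value

def back_out(txn):
    """Restore every before image of the failed transaction, newest
    first, leaving the database as it was before the transaction ran."""
    for t, key, before in reversed(recovery_log):
        if t == txn:
            database[key] = before

update("T1", "R1", "new1")
update("T1", "R2", "new2")
back_out("T1")  # database returns to its original contents
```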

Although the system may pause for a moment while the transaction back-out takes place, other transactions in the system are not affected. After a system crash, start the region in the normal way. If the VCT was configured with recovery in effect, the recovery procedure described in this section occurs.

The region's Recovery Server backs out the database updates of any transactions that were incomplete at the time of a system crash. If header file information in the recovery file indicates that the system did not end normally, the Recovery Server restores the before images to the database just as it does with a single-transaction abort. However, it backs out all the transactions that were in progress at the time the last record of the recovery file was written.

The effect is the same as doing an individual back-out for each transaction that did not complete successfully. When it encounters temporary storage records, it creates Temporary Storage Blocks just as in the dynamic transaction back-out. As it encounters each start record, it creates an Asynchronous Start Queue entry and sends a message to the start processor to schedule the asynchronous START. If a system crash occurs while recovery is in effect, recovery must still be in effect when the system is restarted to perform recovery.

If you do not want the recovery performed, it is not enough to simply turn off the recovery flag in the VCT.

How Do Deadlocks Affect Recovery?

A deadlock occurs when two or more transactions are each waiting for a resource that is currently owned by another of the transactions. Because each transaction is waiting for one of the other transactions to release a needed resource, the transactions remain hung unless intervention occurs.

The simplest deadlock involves two transactions and two resources. Transaction 1 cannot continue until Transaction 2 releases Resource B, while Transaction 2 cannot continue until Transaction 1 releases Resource A. Sun MTP contains special logic to detect deadlock conditions. When it detects a deadlock, it forces one of the transactions to abend.
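The two-transaction case can be illustrated with a wait-for graph. The detection sketch below is a generic cycle check, not the actual Sun MTP logic, which is internal to the product:

```python
# Sketch of deadlock detection with a wait-for graph.
# An edge T1 -> T2 means "T1 waits for a resource held by T2".

def causes_deadlock(wait_for, waiter, holder):
    """Would adding the edge waiter -> holder close a cycle?
    True when holder can already reach waiter through the graph."""
    seen, stack = set(), [holder]
    while stack:
        t = stack.pop()
        if t == waiter:
            return True
        if t not in seen:
            seen.add(t)
            stack.extend(wait_for.get(t, []))
    return False

# T1 holds Resource A and waits for Resource B, held by T2. If T2
# now requests Resource A, the cycle is complete, so T2 (the last
# transaction to enter the group) is the one forced to abend.
wait_for = {"T1": ["T2"]}
assert causes_deadlock(wait_for, "T2", "T1")
```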

The transaction that abends is the last transaction to enter the group; that is, the one whose request, when added to the others already outstanding, results in the deadlock. When this type of abend occurs, it does not mean that the particular transaction that abended has a design error. It means that the transactions involved in the deadlock, when taken as a group, have a design error.

A conversational transaction interacts with the user while it runs. This differs from a pseudo-conversational transaction, in which there is no interaction with the user during the course of the transaction.

While the user types a response, the transaction is not active. Using the Sun MTP recovery capability has important design implications for conversational transactions. The maximum time span of a transaction becomes unbounded if the transaction requires a response from a user before it completes. An update may appear to the user to be complete, but it may not be; until the transaction completes, that update is not committed to the database.

If the transaction fails many minutes later, the user may not realize that the update was rolled back. Sun MTP uses a recovery file to store before images of database records. The recovery file provides a rollback capability in the event of an abort. The before image data is not retained beyond a database commitment or rollback. Each third-party RDBMS has its own log files that provide both database rollback and roll forward capabilities.

The application designer manages database integrity within a particular application implicitly or explicitly.

Implicit database actions occur at various points in the execution of an application: an implicit database commitment at the successful end of each transaction, and an implicit database rollback at the abnormal termination of a transaction. Sun MTP automatically handles the implicit commitment or rollback of a VSAM database, and the RDBMS software manages the implicit commitment or rollback of an RDBMS transaction through a user module.

The database administrator must prepare the user module, then bind it with the Transaction Server unikixtran and the Batch Processor unikixvsam.

The user module must be developed in consultation with the application designer to guarantee the consistency of the application. For information about developing user modules, see the chapter on user modules. An application program can also request a database commit explicitly. In this case, all RDBMS software that was incorporated into the transaction processor is called to commit its changes to the appropriate database and to mark in its log file that the transaction has successfully completed. In either case, the operation is executed only on behalf of the data managed by that particular RDBMS.

Use this method of explicitly requesting a commitment only where there is just one RDBMS in use by the application; otherwise, inconsistent application databases can result. There are several constraints on an application that accesses more than one database. There is no two-phase commit between database integrity managers.

In a situation where one database manager has committed its changes and a second database manager cannot commit its changes, there is no opportunity to back out the committed changes. If this occurs, there is no guarantee that the databases maintained by the two database managers are synchronized.

Inherent in self-contained database integrity management is self-contained locking. Each RDBMS has checks that prevent lock requests by two or more users that can result in a deadly embrace. Because there is no global lock manager in Sun MTP, an application that accesses two or more database managers can create an undetectable deadly embrace, resulting in an application hang or a time-out within one of the database managers.

To avoid these problems, use a phased approach so that only one database manager is updated at any one time. For example, in phase one the application updates the VSAM database and commits the changes; in phase two, the application accesses the Oracle database in update mode. If phase two does not commit successfully, it is not possible to automatically back out the changes made to the VSAM database in phase one.
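A minimal sketch of this phased approach, assuming hypothetical commit and compensation helpers rather than any real Sun MTP or RDBMS API:

```python
# Illustrative phased update (hypothetical helper functions).
# Only one database manager is updated in each phase; because there
# is no two-phase commit, the application itself must compensate
# if a later phase fails.

log = []

def phase1_commit():
    log.append("vsam committed")        # phase 1: update and commit VSAM

def phase2_commit():
    raise RuntimeError("rdbms failed")  # phase 2 fails to commit

def compensate_phase1():
    log.append("vsam compensated")      # application-level back-out

def run_phases():
    phase1_commit()
    try:
        phase2_commit()
    except RuntimeError:
        compensate_phase1()             # undo phase 1 in application logic
        return False
    return True

ok = run_phases()
```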

However, the application logic could handle backing out the changes.

When the need for application throughput outweighs the need for recovery from a system crash or transaction abort, use the software's VSAM file and journal caching facilities to defer physical file writes.

The file system cache schedules these physical writes based on its cache flush operation rules. It is the responsibility of the system administrator to implement a policy to ensure that adequate backups of VSAM files on the system are performed daily or more frequently. This may mean that additional kixfile or UNIX sync commands are required to ensure that updates contained in cache for VSAM files and journals are physically written as needed. Because these files are re-initialized each time the region is started, their data is not needed after a region or system crash.

Similarly, certain customer application files, such as user journals, may still be usable after a system failure if only the last few written blocks are lost. To cache writes for a file, identify the file and type a C in the No Rcv column; repeat for each file to cache. To cache writes for a journal, identify the journal file and type a C in the Opt column; repeat for each journal file to cache. Shut down and restart the region so your changes take effect.

Use the -rC option of the kixfile utility, in combination with another appropriate option. You must run kixfile from a shell script as a batch job while a region is running.

The file definition fields are as follows:

- Block size: the allowed values are 4K, 8K, 16K, and 32K.
- P: primary cluster definition.
- Primary environment.
- Primary file name.
- Record format: a single character, F (fixed) or V (variable).
- Record length.
- Key length.
- Key offset.
- Batch read-locking.
- Reuse allowed: a single character, Y (reuse allowed for this file).
- Primary size: the default value is 1.
- Increment size.
- Percent fill: the default value is 0.
- A: alternate index definition.
- Alternate environment.
- Alternate file name.
- Alternate key length.
- Alternate key offset.
- S: spanned segment definition.
- Spanned environment.

To Open the Data File Editor

The Data File Editor fields include:

- Name by which a program references the file.
- Logical file identifier.
- Access Method.
- File Type: file organization of a VSAM file.
- Rmt Fle: Y (file is remote) or N (file is not remote).
- Rec Fmt: V (variable length format) or F (fixed length format).
- Rec Lth.

The editor also provides functions to update the contents of the file display area and to display the previous or next page of datasets.

To Build a Sequential File

In order to search for a particular record, we give a unique key value. The key value is searched for in the index component. Once the key is found, the corresponding memory address, which refers to the data component, is retrieved. From that address we can fetch the actual data stored in the data component. The IDCAMS DELETE command takes parameters that specify how to delete a KSDS cluster.

A relative record dataset (RRDS) has records that are identified by the Relative Record Number (RRN), which is the sequence number relative to the first record.
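The index-to-data lookup described above can be modeled roughly as follows. This is a toy in-memory model, not a real VSAM structure:

```python
# Simplified model of a KSDS read: the index component maps a key
# to the location of the record in the data component.
index_component = {"K001": 0, "K002": 1, "K003": 2}  # key -> address
data_component = ["record for K001", "record for K002", "record for K003"]

def read_by_key(key):
    """Search the index for the key, then fetch the record from the
    data component at the address the index returned."""
    address = index_component[key]
    return data_component[address]

rec = read_by_key("K002")  # fetches the record keyed K002
```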

RRDS allows access to records by number: record 1, record 2, and so on. This provides random access and assumes the application program has a way to derive the desired record numbers. The records in an RRDS dataset can be accessed sequentially, in relative record number order, or directly, by supplying the relative record number of the desired record. The records in an RRDS dataset are stored in fixed-length slots. Each record is referenced by the number of its slot, a number that can vary from 1 to the maximum number of records in the dataset.

Applications that use fixed-length records, or a record number with contextual meaning, can use RRDS datasets. In the RRDS file structure, space is divided into fixed-length slots. A slot can be either completely vacant or completely full. Thus, new records can be added to empty slots, and existing records can be deleted from slots that are filled. We can access any record directly by giving its Relative Record Number. The IDCAMS DELETE command takes parameters that specify how to delete an RRDS cluster.
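A toy model of RRDS slot addressing. This is illustrative only; real slots are fixed-length byte areas on disk:

```python
# Simplified model of an RRDS: fixed-length slots addressed by
# relative record number (RRN), each slot vacant or full.
slots = [None] * 5           # five empty fixed-length slots

def write(rrn, record):
    slots[rrn - 1] = record  # RRNs are 1-based

def read(rrn):
    return slots[rrn - 1]    # direct access by slot number

def delete(rrn):
    slots[rrn - 1] = None    # the slot becomes vacant for reuse

write(2, "customer 2")
write(4, "customer 4")
delete(2)                    # slot 2 is vacant again
```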

A linear dataset (LDS) is the only form of byte-stream dataset; it is the VSAM counterpart of the byte-stream files used in traditional operating systems. Linear datasets are rarely used. The IDCAMS DEFINE CLUSTER command takes parameters that specify how to create an LDS cluster, and the DELETE command takes parameters that specify how to delete it. The ALTER command takes parameters that can be changed in an existing VSAM cluster.

The REPRO command copies data from one VSAM dataset to another. We can also use it to copy data from a sequential file to a VSAM file. In its syntax, in-ddname is the DD name for the input dataset, which holds the records, and out-ddname is the DD name for the output dataset, to which the input dataset's records are copied. The LISTCAT command displays catalog information: vsam-file-name is the VSAM dataset name for which we need the information, and the ALL keyword is specified to get all catalog details.

The EXAMINE command is used to check the structural integrity of a key-sequenced dataset cluster. It checks the index and data components, and if any problem is found, the error messages are sent to the spool.

You can check any of the IDCxxxxx messages. Verify command is used to check and fix VSAM files which have not been closed properly after an error. The command adds correct End-Of-Data records to the file. In the above syntax, vsam-file-name is the VSAM dataset name for which we need to check the errors.

An alternate index provides access to records by using more than one key. The key of an alternate index can be a non-unique key; it can have duplicates. The DEFINE AIX command is used to define an alternate index and to specify parameter attributes for its components. DEFINE PATH is used to relate the alternate index to the base cluster. While defining a path, we specify the name of the path and the alternate index to which the path is related.

The DEFINE PATH syntax therefore has two parameters: the path name and the related alternate index. A catalog maintains the unit and volume where a dataset resides, and it is used for retrieval of datasets.

The master catalog is itself a file that monitors and manages the operations of VSAM.
