Berkeley DB Reference Guide:
Debugging Applications

Reviewing Berkeley DB log files

If you are running with transactions and logging, the db_printlog utility can be a useful debugging aid: it displays the contents of your log files in a human-readable (and machine-processable) format.
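
For example, a command such as the following (the environment path here is only illustrative) writes the readable version of an environment's log files to a file that can then be fed to the scripts described later in this section; the -h option names the database environment's home directory:

db_printlog -h /path/to/env > log_output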

The db_printlog utility will attempt to display any and all log files present in a designated db_home directory. For each log record, db_printlog will display a line of the form:

[22][28]db_big: rec: 43 txnid 80000963 prevlsn [21][10483281]

The opening numbers in square brackets are the log sequence number (LSN) of the log record being displayed. The first number indicates the log file in which the record appears, and the second number indicates the offset in that file of the record.

The character string that follows the LSN identifies the particular log operation being reported; the log record types are described in the table below. The rest of the line consists of name/value pairs.

The rec field indicates the record type (this is used to dispatch records in the log to appropriate recovery functions).

The txnid field identifies the transaction for which this record was written. A txnid of 0 means that the record was written outside the context of any transaction. You will see these most frequently for checkpoints.

Finally, prevlsn contains the LSN of the previous log record written by this transaction; in the example above, [21][10483281] refers to transaction 80000963's previous record, at offset 10483281 of log file 21. By following prevlsn fields, you can accumulate all the updates for a particular transaction. During normal abort processing, this field is used to quickly access all the records for a particular transaction.

After the initial line identifying the record type, each field of the log record is displayed, one item per line. There are several fields that appear in many different records and a few fields that appear only in some records.

The following table presents each currently written log record type with a brief description of the operation it describes.

Log Record Type    Description
bam_adj            Used when we insert/remove an index into/from the page header of a Btree page.
bam_cadjust        Keeps track of record counts in a Btree or Recno database.
bam_cdel           Used to mark a record on a page as deleted.
bam_curadj         Used to adjust a cursor location when a nearby record changes in a Btree database.
bam_rcuradj        Used to adjust a cursor location when a nearby record changes in a Recno database.
bam_repl           Describes a replace operation on a record.
bam_root           Describes an assignment of a root page.
bam_rsplit         Describes a reverse page split.
bam_split          Describes a page split.
crdel_metasub      Describes the creation of a metadata page for a subdatabase.
db_addrem          Add or remove an item from a page of duplicates.
db_big             Add an item to an overflow page (overflow pages contain items too large to place on the main page).
db_cksum           Unable to checksum a page.
db_debug           Log debugging message.
db_noop            This marks an operation that did nothing but update the LSN on a page.
db_ovref           Increment or decrement the reference count for a big item.
db_pg_alloc        Indicates that we allocated a page to a Btree.
db_pg_free         Indicates that we freed a page in the Btree (freed pages are added to a freelist and reused).
db_relink          Fix prev/next chains on duplicate pages because a page was added or removed.
dbreg_register     Records an open of a file (mapping the filename to a log-id that is used in subsequent log operations).
ham_chgpg          Used to adjust a cursor location when a Hash page is removed, and its elements are moved to a different Hash page.
ham_copypage       Used when we empty a bucket page, but there are overflow pages for the bucket; one needs to be copied back into the actual bucket.
ham_curadj         Used to adjust a cursor location when a nearby record changes in a Hash database.
ham_groupalloc     Allocate some number of contiguous pages to the Hash database.
ham_insdel         Insert/delete an item on a Hash page.
ham_metagroup      Update the metadata page to reflect the allocation of a sequence of contiguous pages.
ham_newpage        Adds or removes overflow pages from a Hash bucket.
ham_replace        Handle updates to records that are on the main page.
ham_splitdata      Record the page data for a split.
qam_add            Describes the actual addition of a new record to a Queue.
qam_del            Delete a record in a Queue.
qam_delext         Delete a record in a Queue with extents.
qam_incfirst       Increments the record number that refers to the first record in the database.
qam_mvptr          Indicates that we changed the reference to either or both of the first and current records in the file.
txn_child          Commit a child transaction.
txn_ckp            Transaction checkpoint.
txn_recycle        Transaction IDs wrapped.
txn_regop          Logs a regular (non-child) transaction commit.
txn_xa_regop       Logs a prepare message.

Augmenting the Log for Debugging

When debugging applications, it is sometimes useful to log not only the actual operations that modify pages, but also the underlying Berkeley DB functions being executed. This form of logging can add significant bulk to your log, but can permit debugging application errors that are almost impossible to find any other way. To turn on these log messages, specify the --enable-debug_rop and --enable-debug_wop configuration options when configuring Berkeley DB. See Configuring Berkeley DB for more information.
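
As a sketch, on UNIX systems where the distribution is configured from a build directory (the relative path to the configure script depends on your layout), the command line would look something like:

../dist/configure --enable-debug_rop --enable-debug_wop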

Extracting Committed Transactions and Transaction Status

Sometimes, it is helpful to use the human-readable log output to determine which transactions committed and aborted. The awk script commit.awk (found in the db_printlog directory of the Berkeley DB distribution) allows you to do just that. The following command, where log_output is the output of db_printlog, will display a list of the transaction IDs of all committed transactions found in the log:

awk -f commit.awk log_output
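
As a rough illustration of how such an extraction can work (this is a sketch, not the commit.awk shipped with the distribution), the following awk program assumes that each commit appears as a txn_regop record whose header line carries a txnid name/value pair, as in the sample db_printlog output shown earlier, and prints the transaction ID from each such header:

# Sketch: print the transaction ID from every txn_regop (commit) record header.
/txn_regop/ {
	for (i = 1; i < NF; i++)
		if ($i == "txnid")
			print $(i + 1)
}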

If you need a complete list of both committed and aborted transactions, then the script status.awk will produce it. The syntax is as follows:

awk -f status.awk log_output

Extracting Transaction Histories

Another useful debugging aid is to print out the complete history of a transaction. The awk script txn.awk allows you to do that. The following command line, where log_output is the output of db_printlog and txnlist is a comma-separated list of transaction IDs, will display all log records associated with the designated transaction IDs:

awk -f txn.awk TXN=txnlist log_output
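
As an illustration of how such a filter can be written (a sketch only, not the shipped txn.awk; it assumes that each record's header line begins with its LSN in square brackets and that the field lines following it do not), the program below prints every record whose header mentions one of the requested transaction IDs:

# Sketch: run with TXN=id1,id2,... on the command line, as above.
NR == 1 { ntxn = split(TXN, want, ",") }
/^\[/ {			# a record header starts with its LSN, for example [22][28]
	printing = 0
	for (i = 1; i <= ntxn; i++)
		if ($0 ~ ("txnid " want[i]))
			printing = 1
}
printing { print }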

Extracting File Histories

The awk script fileid.awk allows you to extract all log records that refer to a designated file. The syntax for the fileid.awk script is the following, where log_output is the output of db_printlog and fids is a comma-separated list of fileids:

awk -f fileid.awk PGNO=fids log_output

Extracting Page Histories

The awk script pgno.awk allows you to extract all log records that refer to designated page numbers. However, because this script will extract records with the designated page numbers for all files, it is most useful in conjunction with the fileid script. The syntax for the pgno.awk script is the following, where log_output is the output of db_printlog and pgnolist is a comma-separated list of page numbers:

awk -f pgno.awk PGNO=pgnolist log_output
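
If, as one would expect, fileid.awk emits the matching records unchanged, one way to combine the two scripts (a sketch, using the variable names shown above) is to pipe its output into pgno.awk; the trailing - tells the second awk to read the piped records from its standard input:

awk -f fileid.awk PGNO=fids log_output | awk -f pgno.awk PGNO=pgnolist -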

Other Log Processing Tools

The awk script count.awk prints out the number of log records encountered that belong to some transaction (that is, the number of log records excluding those for checkpoints and non-transaction-protected operations).
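
It is presumably invoked in the same way as commit.awk and status.awk (an assumption; check the comments at the top of the script), for example:

awk -f count.awk log_output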

The script range.awk will extract a subset of a log. This is useful when the output of db_printlog is too large to be reasonably manipulated with an editor or other tool. The syntax for range.awk is the following, where sf and so are the log file and offset of the LSN at which the extracted portion should begin, and ef and eo are the log file and offset of the LSN at which it should end:

awk -f range.awk START_FILE=sf START_OFFSET=so END_FILE=ef END_OFFSET=eo log_output
