
Logical Disk Manager (LDM)

History of partition tables


Back when DOS was the predominant operating system for computers, disks were a lot smaller than they are now; they weren't
split up into partitions, and the filesystem, the method of organising files on a disk, had to use the entire disk.

The problem was that, as time progressed and disks became larger, this DOS-era use of disks became more and
more impractical. The solution was to split the disk up into partitions; this is how disk storage evolved into what it is today. The
only disks not set up like this today are floppy disks, zip disks and other removable media.

At this point, each partition can hold a different operating system and/or a different filesystem, and partitions can even be used as
swap space. So the disk was split up into partitions, each partition was given a drive name, and the user saw these
partitions as separate disks.

When this splitting, or "partitioning", first began, a disk was limited to only four primary partitions. A partition table at the
beginning of the disk gave information about each partition: its location, size and type, for example 0x01 (FAT12) or 0x04 (FAT16).

For the same reasons the disk needed to be split up in the first place, as disks grew even larger it became obvious that even
four partitions weren't enough. The solution was the extended partition type, 0x05. Unlike the other
partition types, it points to the location of another partition table.

In Figure 1.2, the first partition table shows the locations of the first three partitions.

The last entry of the table points to the second partition table, which in turn gives the location of partition four and of
the third partition table.

This process repeats, with the last entry of each partition table pointing to the next one. In this way, given enough disk
space, there can be as many partitions as necessary.
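
To make the chain concrete, here is a minimal sketch (in Python) of reading a DOS partition table and following extended (0x05) entries. The 16-byte entry layout, with the type byte at offset 4 and the little-endian start LBA and sector count at offsets 8 and 12, is the standard MBR/EBR format rather than anything specific to this document, and the device path is only a placeholder.

import struct

SECTOR = 512
EXTENDED = {0x05, 0x0F}   # CHS and LBA extended partition types

def read_table(f, lba):
    """Return the four (type, start, size) entries of the partition table in sector `lba`."""
    f.seek(lba * SECTOR + 0x1BE)
    raw = f.read(64)
    entries = []
    for i in range(4):
        entry = raw[i * 16:(i + 1) * 16]
        ptype = entry[4]
        start, size = struct.unpack_from('<II', entry, 8)   # little-endian LBA fields
        entries.append((ptype, start, size))
    return entries

def walk_partitions(f):
    """Yield (type, absolute start LBA, size) for every partition, following the EBR chain."""
    for ptype, start, size in read_table(f, 0):
        if ptype in EXTENDED:
            ebr = start
            while True:
                logical, link = read_table(f, ebr)[:2]
                if logical[0]:
                    yield logical[0], ebr + logical[1], logical[2]
                if link[0] not in EXTENDED:
                    break
                ebr = start + link[1]     # link entries are relative to the extended partition
        elif ptype:
            yield ptype, start, size

with open('/dev/sdb', 'rb') as disk:      # placeholder path; use the appropriate raw device
    for ptype, start, size in walk_partitions(disk):
        print(f'type 0x{ptype:02X}  start {start}  sectors {size}')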

Next is a history of disk partitioning as it changed through the Windows operating systems:

Windows NT
Windows NT used the aforementioned DOS method of partitioning, adding a few enterprise features on top. Windows NT introduced RAID
volumes, allowing for striped, mirrored and RAID5 volumes.

A volume is the user's view of a place where they can store data files (e.g. "C:", "D:", etc.). With these features, NT could use disks more
efficiently and gained fault tolerance. Despite this, Windows NT still had some problems.

First, because volumes were named with drive letters, there could be only 26 volumes in total, "A:" to "Z:". Second, most changes to the
partitioning required a reboot to take effect. Also, the volume set information was kept in the registry, which made disks hard
to move to other machines. Finally, the partition table limited volumes to 2 terabytes.

Windows 2000
With Windows 2000 came the introduction of the Logical Disk Manager, or LDM, and the ability to mount volumes over directories.
This addressed the filesystem limitations of Windows NT.
Volumes no longer required a drive letter, so there was no longer a limit on their number. Also, RAID information was
stored on the disk itself, so moving disks between computers became much easier, and changes no longer required a reboot.

There was also a new partitioning scheme, named "Dynamic", and the old partitioning scheme was from then on referred to
simply as "Basic". Only Dynamic disks can hold software RAID volumes.

In the new "Dynamic" partitioning scheme, the LDM keeps its database, and a journal of changes to it, in the last 1MB of the physical disk.
This is why free space is generally needed when converting from "Basic" to "Dynamic". The journal is used to roll the disk back to
a consistent state after a power or disk failure: the volumes being changed at the time may be lost, but the database will
not.

As well as journaling the database, the LDM also introduced "Disk Groups". Each member of a group carries a
database listing all the partitions of every disk in the group.

This makes it possible to detect missing items if disks are removed, and fault-tolerant volumes can easily be rebuilt on a
new disk.

For Windows 2000 and Windows XP, there is only one Disk Group, named after the computer it was created on with a
suffix of "Dg0". An example of such a group would be WorkcompDg0.

Also, in Windows 2000 the NTFS driver was changed to allow dynamic resizing. Active volumes, including stripes, mirrors and
RAID5, could now be extended to use free space on the disk, with no need to reboot or close applications.

Volumes
Filesystems and Containers
As an introduction, filesystems and containers are, in theory, independent of one another, with a few limitations; for example,
FAT partitions cannot easily be resized by Windows. A dynamic disk can have any combination of filesystem
and container.

The types of filesystems are:

1. NTFS
2. FAT

And the types of containers are:

1. Simple
2. Spanned
3. Stripe (RAID 0)
4. Mirror (RAID 1)
5. Stripe with parity (RAID5)
6. Mirrored Stripe (RAID 1+0)

Table 2.1 lists each type of container and judges its redundancy and efficiency:

Simple: This is the basic unit. One container fills a single partition. (Redundancy: None, Efficiency: Poor)

Spanned: Otherwise known as RAID Linear, spanning concatenates two or more partitions together, allowing the user to extend a volume using the free space on a disk. (Redundancy: None, Efficiency: Poor)

Stripe (RAID 0): Striping data across two or more disks improves performance, but slightly impacts writing. (Redundancy: None, Efficiency: Medium)

Mirror (RAID 1): The simplest redundant RAID type; it copies the data onto one or more extra disks, allowing the volume to survive single disk failures. (Redundancy: Good, Efficiency: Medium)

Stripe with parity (RAID5): With at least three disks, parity information can be added to a stripe, creating RAID5. This can survive a disk failure, although rebuilding the data on a new disk is slow. (Redundancy: Good, Efficiency: Good)

Mirrored Stripe (RAID 1+0): Requiring at least four disks, it takes a striped pair of disks and completely mirrors them onto two or more other disks. (Redundancy: Medium, Efficiency: Good)
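
As a rough illustration of the redundancy and efficiency columns, the usable capacity of each container type can be estimated from the number of disks and the size of the smallest member. The formulas below are the standard RAID capacity rules, not something defined by the LDM itself.

def usable_capacity(kind, disk_sizes):
    """Approximate usable space for each container type (standard RAID arithmetic)."""
    n, smallest, total = len(disk_sizes), min(disk_sizes), sum(disk_sizes)
    if kind in ('simple', 'spanned'):
        return total                      # every byte is usable, but nothing is redundant
    if kind == 'stripe':                  # RAID 0
        return n * smallest
    if kind == 'mirror':                  # RAID 1
        return smallest                   # only one copy's worth of space is usable
    if kind == 'raid5':                   # stripe with parity
        return (n - 1) * smallest         # one disk's worth is consumed by parity
    if kind == 'raid10':                  # mirrored stripe
        return (n // 2) * smallest
    raise ValueError(kind)

# Example: four 100GB disks.
for kind in ('spanned', 'stripe', 'mirror', 'raid5', 'raid10'):
    print(kind, usable_capacity(kind, [100, 100, 100, 100]), 'GB')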

Extending Containers
Next, we will look at how each container type can be extended. The disk configuration shown in Figure 1.4 will be used in the
explanations that follow.

The following table (Table 2.2) describes how to extend, or add a mirror to, each type of container, using the example in Figure 1.4:

Table 2.2

Simple
Extend: Use the free space after "R:" and the container will remain simple. If you were to use free space on another disk, the container type will become "spanned".
Add Mirror: Adding a copy of "R:" will speed up access and provide fault tolerance.

Spanned
Extend: Because "N:" is already spanned, extending it simply adds another partition.
Add Mirror: Adding a copy of "N:" will speed up access and provide fault tolerance. The mirror could be split across disks 2 and 3, or placed entirely on disk 4.

Stripe
Extend: Adding to the end of stripe "W:" needs at least two disks. The partition on disk 3 could be extended, or the same amount of space could be used from disks 2 or 4.
Add Mirror: Mirroring also requires at least two disks, different from the ones the stripe is already on. Mirroring onto two other disks gives the volume fault tolerance.

Mirror
Extend: To extend, you will need more space on the original disks, or space on other disks.
Add Mirror: Volume "P:" already has a mirror, so mirroring it again would have to be done on a different disk from the existing mirror and would result in a third copy of the volume. This further increases the volume's fault tolerance.

Stripe with parity
Extend: To extend, you will need at least three disks. Each part of "M:" can be extended on the same disk or on other disks.

Mirrored Stripe (not shown)
Extend: You would need space on at least four disks to extend this. As with stripe with parity, each section can be extended on the same disk or on others.
Add Mirror: You will need only two disks, and you would be adding a mirror of the underlying stripe. The two disks cannot be among the four already being used. This would improve the volume's fault tolerance.

Breaking Containers
Next is a description of how to break up containers. Using the same example in Figure 1.4, Table 2.3 describes deleting volumes,
breaking volumes and removing disks:

Table 2.3

Simple
Delete: Remove the partition containing volume "R:".

Spanned
Delete: Remove all of the partitions that belong to volume "N:".

Stripe
Delete: Remove all of the partitions that belong to volume "W:".

Mirror
Delete: Remove one copy of the underlying volume "P:".
Break: Split the mirror into two identical volumes. For our "P:" volume, you would end up with two volumes with no fault tolerance. For volumes with more mirrors, you would end up with a smaller mirror plus a simple, spanned or striped volume.
Remove: Same as breaking, only the selected part is deleted after splitting.

Stripe with parity
Delete: Remove all partitions belonging to volume "M:".

Mirrored Stripe (not shown)
Delete: Remove all partitions belonging to the volume.
Break: Similarly to breaking a mirror, you will end up with two volumes, one a stripe and the other a striped or mirrored type.
Remove: Same as breaking, only the selected part is deleted after splitting.

Disk Failures
If a disk fails, the only elements that will survive are the RAID5 and mirrored volumes. Because simple, spanned and striped volumes
have no redundancy, losing any part of them means losing part of the filesystem.

Mirrored volumes can lose a disk and continue, with only the fault tolerance being lost. Similarly, RAID5 can lose one disk. The
volume can be repaired if a new disk is added: for a mirror, the repair consists of copying the entire volume to a new
partition, while for RAID5 the parity has to be recalculated, which consumes a lot of time.
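
As a sketch of the parity arithmetic involved: in a RAID5 stripe the parity chunk is the XOR of the data chunks, so a missing chunk can be recomputed from the survivors. This also shows why a rebuild has to read every remaining disk.

from functools import reduce

def xor_blocks(blocks):
    """XOR byte strings of equal length together."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b'AAAA', b'BBBB', b'CCCC']        # chunks on three data columns
parity = xor_blocks(data)                 # stored on the parity column

# If one chunk is lost, it can be rebuilt from the remaining chunks plus the parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]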

The same problems can occur if only some of the disks of a group are moved to another machine. In the case of RAID5, at most one of
the machines will contain enough information to keep the volume running, while the other will not.

Splitting a mirror between two computers allows each computer to use its copy of the volume, but neither copy has the fault tolerance it
had on a single machine. The mirrors will also drift out of sync, and once that happens they can no
longer be recombined into one single mirror.
Limits
Limits of Containers
The containers are the transparent layer between the filesystem and the physical disks. They have certain limits, given in the following table
(Table 4.1):

Table 4.1

Volume   Min chunk size   Max chunk size   Default chunk size   Min number of disks

RAID5    0.5KB            64KB             16KB                 3

Stripe   0.5KB            32000KB          64KB                 2

There is also a blanket limit of 32 partitions for each type of volume, as the following table describes:

Limit Number

Partitions in a striped volume 32

Partitions in a RAID5 volume 32

Partitions in a spanned volume 32

Mirrors of a simple or striped volume 32

Limitations of Windows
Windows has some limitations in itself. The following is a list of limitations given by Windows operating systems:

1. Microsoft does not support Dynamic Disks on laptops, removable disks, or disks attached via USB or FireWire interfaces.

2. Dynamic and Basic Volumes cannot be mixed on a disk. Basic Disks can be converted to Dynamic, but in order to convert
Dynamic back to Basic, you would need to remove all of the Dynamic Volumes first.

3. After upgrading to a Dynamic Disk, partitions of any type other than NTFS and FAT will show up as free space after the conversion.

4. LDM does not require a DOS-style partition, but there will still be one to prevent legacy applications from thinking there is free
space on the disk when there isn't. It is also needed to boot Windows, as the boot code needs the operating system to be in a
primary partition.

5. You cannot extend your boot volume in any way because it is reliant on simple BIOS calls.

6. You are not able to install Windows onto a disk with partitions because it requires a primary partition.

7. Only NTFS volumes can be resized dynamically by Windows.

These limitations exist because Microsoft has not updated its entire disk infrastructure to match Veritas' LDM, which is a very
powerful, robust partitioning scheme. Microsoft only started using the LDM as of Windows 2000 and XP, so earlier versions cannot
read dynamic disks.

Database
Layout of a Disk
We previously touched on the layout of a disk, so to continue that discussion, let us take another look at Figure 1.3 and get a
better understanding of what the elements mean:
Right after the partition table is an element called "PRIVHEAD", which stands for "Private Header" and is
the first Logical Disk Manager (LDM) structure we encounter. It holds a set of accounting information, including a version number, a unique
GUID for the disk, and the locations of the other LDM structures. There are two backup copies of this structure, in case it
goes missing or becomes damaged.

The LDM database, as mentioned before, takes up at most 1MB and is found at the end of the physical disk. Upgrading to a Dynamic Disk
therefore requires a little extra space, and downgrading from Dynamic to Basic frees up that last 1MB, since all traces of the LDM
database are removed.

The next diagram, Figure 5.1, demonstrates the layout of the Logical Disk Manager visually, with each of the colours, explained in
Table 5.1, representing a different element of the LDM:

The "config" section contains the actual database of the partitions and volumes. The "log" section contains a journal of changes to the
database. There are four copies of this structure, being identical at all times.

VBLK Naming:

In the VBLK structures, each disk group, disk, partition, component (container) and volume is given a unique name. The following table,
Table 5.2, describes the naming procedure for each type of element:

Table 5.2

Disk Groups: These are named after the machine, with a "Dg0" suffix. For example, HomeDg0 or WorkcompDg0.

Disks: They are named "Disk" followed by a number, so they would look like: Disk1, Disk2, Disk3, etc.

Partitions belonging to disks: They are named after the disk, followed by a suffix such as "-01", "-02", etc. An example would be Disk1-01, Disk1-02, etc.

Striped Volumes: They are named "Stripe" followed by a number, so they would look like: Stripe1, Stripe2, Stripe3, etc.

RAID Volumes: They are named "Raid" followed by a number, so they would look like: Raid1, Raid2, Raid3, etc.

Simple, Spanned and Mirror Volumes: They are named "Volume" followed by a number, so they would look like: Volume1, Volume2, Volume3, etc.

Components belonging to volumes: They are named after the volume, followed by a suffix such as "-01", "-02", etc. An example would be Stripe1-01, Stripe1-02, Raid1-01, Raid1-02, Volume1-01, Volume1-02, etc.

When objects are deleted, new objects take the lowest available number when they are named.
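
A minimal sketch of that numbering rule; the helper below is purely illustrative and simply finds the lowest unused suffix among existing names.

def next_name(prefix, existing):
    """Return prefix plus the lowest number not already in use, e.g. 'Volume2'."""
    used = {int(name[len(prefix):]) for name in existing
            if name.startswith(prefix) and name[len(prefix):].isdigit()}
    n = 1
    while n in used:
        n += 1
    return f'{prefix}{n}'

# If Volume1 and Volume3 exist, a new simple volume is named Volume2.
print(next_name('Volume', ['Volume1', 'Volume3']))
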
Example Volumes
The following section demonstrates the different types of volumes, which VBLKs would be present, and how they relate to one
another.

Simple Volume:

This is the easiest example, taking up only one partition on a single disk. It is visually shown below:

When another partition is added to this Simple volume, the volume will become a Spanned Volume, shown later in this section.

Mirrored Volume:

Mirrored volumes are easily identified by the same volume information being sent to two components, and hence to two partitions and
disks. The data is not altered in any way. A visual is shown below in Figure 6.2.

With two more partitions, these volumes could be extended so that each component has two "children". This creates a
Mirrored Spanned Volume, shown later in this section. Another partition could be added on top of that, creating a third mirror section
and giving the volume a third "child".

Striped, RAID5 and Spanned Volume:

This is the most interesting layout: the component controls how the volume information is transformed before it is sent to
two or more partitions.

In a Spanned Volume, the partitions are joined linearly, with the start of the filesystem being written in the first partition, the next part in
the second partition, and so on.

Striped Volumes have the filesystem divided into 64kB sections, which are written to partition 1, partition 2, and so on, until each
partition has been used, at which point the sequence wraps around to the first partition again.
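
To illustrate the round-robin layout, the small sketch below maps a byte offset within a striped volume to a partition (column) and an offset within it. The 64kB chunk size follows the text above; everything else is generic.

CHUNK = 64 * 1024   # stripe chunk size described above

def stripe_locate(offset, columns, chunk=CHUNK):
    """Map a byte offset in a striped volume to (column index, offset within that column)."""
    chunk_index, within = divmod(offset, chunk)
    column = chunk_index % columns              # which partition the chunk lands on
    column_chunk = chunk_index // columns       # how many chunks precede it on that partition
    return column, column_chunk * chunk + within

# The first chunk goes to partition 0, the second to partition 1, and so on.
print(stripe_locate(0, 3))            # (0, 0)
print(stripe_locate(65536, 3))        # (1, 0)
print(stripe_locate(3 * 65536, 3))    # (0, 65536)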

RAID5 Volumes are divided into 16kB sections and require at least three partitions. For each group of sections there is a
section containing the parity information, giving the volume fault tolerance.

If any of the volumes mentioned in this section is given another partition, the filesystem can be extended onto it, giving the
component more "children". Figure 6.3 shows a general visual for the above-mentioned types of volumes:
Mirrored Stripe and Mirrored Spanned Volumes:

These types of volumes combine the ideas of the other volume types. The volume information is
written identically to two components, which can either stripe or span the filesystem. These volumes are extended either by adding a
mirror, which requires two more partitions, or by extending the volume, which requires four more partitions.

Technical
The rest of this document is dedicated to the on-disk layout of the LDM database. To start off, all of the structures
mentioned are padded with zeroes. This is most obvious with VBLKs, which are frequently recycled and retain no remnants of
previous use. The same is true of the database accounting structures, though it is less obvious there.

Numbers in the database are stored in a variable-length format: to accommodate numbers of different sizes, each is prefixed
by a one-byte length marker. Strings with a length marker may not be NULL terminated.
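
A hedged sketch of reading such a field: one length byte, then that many bytes, interpreted either as an integer or as a string. Treating the integers as big-endian is an assumption based on other public descriptions of the LDM database, not something stated here.

def read_prefixed(buf, pos, as_string=False):
    """Read a one-byte length marker followed by that many bytes.

    Returns (value, new position). Integers are assumed big-endian;
    strings are not necessarily NULL terminated.
    """
    length = buf[pos]
    field = buf[pos + 1:pos + 1 + length]
    value = field.decode('ascii', 'replace') if as_string else int.from_bytes(field, 'big')
    return value, pos + 1 + length

# Example: a 2-byte number 0x0102 followed by the 4-character string "Disk".
blob = bytes([2, 0x01, 0x02, 4]) + b'Disk'
num, pos = read_prefixed(blob, 0)
name, pos = read_prefixed(blob, pos, as_string=True)
print(num, name)    # 258 Disk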

In the rest of this document, sizes and offsets in the database are measured in sectors; some offsets are relative to the
database, while others are relative to the disk.

Because this documentation is derived from reverse-engineering the LDM database, there may still be mistakes or gaps, so minor
ambiguities are denoted with a "?" mark.

Partition
The Logical Disk Manager manages the entire disk, so an MSDOS-style partition is not required, but to prevent legacy applications
from thinking the disk is unused, a dummy partition is created to fill the space. The boot loader is one such legacy
application: it does not understand the LDM database, but must still boot the operating system.

Layout:

If a Basic Disk with no partitions is converted into a Dynamic Disk, Windows creates a dummy partition to fill the disk. The partition ID
chosen by Microsoft is 0x42, which is also used by the encrypted Secure FileSystem (SFS). Details and an example are given in Table
8.1:

If the Basic Disk being converted contains partitions, then its structure will be preserved. Converted disks are
limited to the structure given in Table 8.2. These partitions are kept for cases such as the boot loader, which only understands
MSDOS partitions.
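
A small sketch of the detection this implies: scan the DOS partition table for a type 0x42 entry to decide whether a disk is LDM-managed. The entry layout is standard MBR structure; the device path is only a placeholder.

def is_ldm_disk(mbr):
    """Return True if a 512-byte MBR contains a type 0x42 (LDM) partition entry."""
    if len(mbr) < 512 or mbr[510:512] != b'\x55\xaa':     # boot signature check
        return False
    return any(mbr[0x1BE + i * 16 + 4] == 0x42 for i in range(4))

with open('/dev/sdb', 'rb') as disk:       # placeholder path; use the appropriate raw device
    print(is_ldm_disk(disk.read(512)))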

PRIVHEAD
An LDM disk is most easily identified by its partition type of 0x42. After the partition table comes the Private Header, or PRIVHEAD, which
gives the location and size of the database. If the Dynamic Disk is reverted to a Basic Disk, the PRIVHEAD is removed, although
some of the database will remain.
The PRIVHEAD is 512 bytes long, with three copies on each physical disk, organized as follows: one is
placed immediately after the partition table, the second near the end of the LDM database, and the last takes up the final
512 bytes of the physical disk.
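
For illustration, a hedged sketch that checks the expected PRIVHEAD locations. The ASCII "PRIVHEAD" magic at the start of the structure is an assumption drawn from other LDM references rather than from Table 9.1, and the range scanned for the first copy is only a heuristic.

SECTOR = 512
MAGIC = b'PRIVHEAD'   # assumed ASCII magic at the start of the structure

def is_privhead(sector_bytes):
    """Return True if a 512-byte sector starts with the assumed PRIVHEAD magic."""
    return sector_bytes[:len(MAGIC)] == MAGIC

def find_privheads(f, disk_sectors):
    """Check likely locations: the first track (just after the partition table) and the last sector."""
    candidates = list(range(1, 64)) + [disk_sectors - 1]
    hits = []
    for sector in candidates:
        f.seek(sector * SECTOR)
        if is_privhead(f.read(SECTOR)):
            hits.append(sector)
    return hits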

Layout:

Table 9.1 shows the layout of the PRIVHEAD:

TOCBLOCK
The Table of Contents block, or TOCBLOCK, is 512 bytes long, and there are four copies of it, organized in pairs: two copies
are placed near the beginning of the LDM database and two near the end. Table 10.1 shows the layout of the TOCBLOCK;
the sizes are in sectors and the starts are relative to the start of the database. The two bitmaps are "config" and "log".

VMDB
The VMDB, alongside the KLOG blocks, manages the journaling of the database metadata. The VMDB is the header for the main part
of the database and takes up 512 bytes.
The VMDB is also designed to be protected from data loss during updates. All updates are logged, and the update process
adds new VMDBs before erasing the old ones. This way, in the event of an unexpected shutdown (for example a power failure), the
database can be rolled back to a consistent state.

Table 11.1 shows the layout of the VMDB, while Table 11.2 shows what each VMDB update status means:

Table 11.1

Offset Size Description

0x00 4 VMDB Magic Number

0x04 4 Sequence Number of Last VBLK

0x08 4 Size of VBLK

0x0C 4 Offset to first VBLK

0x10 2 Update Status

0x12 2 Version Major (Always 0x04)

0x14 2 Version Minor (Always 0x0A)

0x16 31 Disk Group Name (string, null padded)

0x35 64 Disk Group ID GUID (string, null padded)

0x75 8 Committed Sequence

0x7D 8 Pending Sequence

0x85 4 Number of Committed Volume VBLKs

0x89 4 Number of Committed Component VBLKs

0x8D 4 Number of Committed Partition VBLKs

0x91 4 Number of Committed Disk VBLKs

0x95 4 (Unused)

0x99 4 (Unused)

0x9D 4 (Unused)

0xA1 4 Number of Pending Volume VBLKs

0xA5 4 Number of Pending Component VBLKs

0xA9 4 Number of Pending Partition VBLKs

0xAD 4 Number of Pending Disk VBLKs

0xB1 4 (Unused)

0xB5 4 (Unused)

0xB9 4 (Unused)

0xBD 8 Last Accessed Time (Timestamp is number of 100ns units since Jan 01 1601)

Table 11.2

Flags Description

0x01 VMDB is in a consistent state

0x02 VMDB is in a creation phase

0x03 VMDB is in a deletion phase
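
To tie Table 11.1 together, here is a hedged parsing sketch for the fixed-size fields of the VMDB header. Both the big-endian byte order and the ASCII "VMDB" magic are assumptions based on other descriptions of the LDM database; the timestamp conversion follows the 100ns-since-1601 rule given in the table.

import struct
from datetime import datetime, timedelta, timezone

def parse_vmdb(sector):
    """Decode the fixed fields of a 512-byte VMDB header (big-endian assumed)."""
    if sector[0:4] != b'VMDB':                       # magic number from Table 11.1
        raise ValueError('not a VMDB header')
    last_seq, vblk_size, first_vblk = struct.unpack_from('>III', sector, 0x04)
    status, ver_major, ver_minor = struct.unpack_from('>HHH', sector, 0x10)
    group_name = sector[0x16:0x16 + 31].rstrip(b'\x00').decode('ascii', 'replace')
    group_id = sector[0x35:0x35 + 64].rstrip(b'\x00').decode('ascii', 'replace')
    filetime, = struct.unpack_from('>Q', sector, 0xBD)
    accessed = datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=filetime / 10)
    return {
        'last_vblk_sequence': last_seq,
        'vblk_size': vblk_size,
        'first_vblk_offset': first_vblk,
        'update_status': status,                     # 0x01 consistent, 0x02 creating, 0x03 deleting
        'version': f'{ver_major}.{ver_minor}',
        'disk_group_name': group_name,
        'disk_group_guid': group_id,
        'last_accessed': accessed,
    }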

VBLK
The VBLK holds a representation of each Disk Group, Disk, Volume, Partition and Component, making it a critical part of
the LDM. A VBLK is 128 bytes long and has a standard 16-byte header. When one VBLK isn't large enough to store all the information,
several are used together, called an extended VBLK.

A size of "P" in the tables means that the field is prefixed by a one-byte length marker and isn't NULL terminated unless listed as such.

Table 12.1 shows the format of a standard VBLK:

Offset Size Description

0x00 4 VBLK Magic Number

0x04 4 Sequence Number

0x08 4 Group Number

0x0C 2 Record Number (x of y)

0x0E 2 Number of Records

The Sequence Numbers start at 4; sequence numbers 0 to 3 are occupied by the VMDB header block.

The Group Number is never zero.
The Record Type can be 0x32, 0x33, 0x34, 0x35 or 0x51, representing Component, Partition, Disk, Disk Group and
Volume respectively.
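
A minimal sketch of reading the standard 16-byte VBLK header from Table 12.1. As with the VMDB, the big-endian byte order and the ASCII "VBLK" magic are assumptions based on other LDM references.

import struct

def parse_vblk_header(record):
    """Decode the standard 16-byte header of a 128-byte VBLK record (big-endian assumed)."""
    if record[0:4] != b'VBLK':
        raise ValueError('not a VBLK record')
    sequence, group = struct.unpack_from('>II', record, 0x04)
    rec_number, rec_count = struct.unpack_from('>HH', record, 0x0C)
    return {
        'sequence': sequence,        # starts at 4; 0 to 3 hold the VMDB header
        'group': group,              # shared by all parts of an extended VBLK
        'record': rec_number,        # zero-based index within the extended VBLK
        'records': rec_count,        # total records in this extended VBLK
    }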

Extended VBLK:

As noted in Table 12.1, the number of VBLKs making up a record is given in the "Number of Records" field of the VBLK header, and "Record
Number" is a zero-based index. The "Group Number" field is used to keep track of Extended VBLKs, by giving each set its
own unique group number. Note that the parts of an Extended VBLK may not be located near one another in the VMDB.

Update Status:

Similarly to the VMDB, changes to a VBLK are logged in order to protect against data loss during an update. In the event
of an unexpected shutdown (for example a power failure), the database can be rolled back to a consistent state. Table 12.2
summarises the update flags and statuses:

Table 12.2

Flags Description

0x00 VBLK is in a consistent state


0x01 VBLK is about to be deleted, but is still active

0x02 VBLK has just been created, but is not yet active

The following tables give the layout of each type of VBLK record.

Table 12.3

Volume (0x51):

Offset Size Description

0x00 16 Standard VBLK

0x10 2 Update Status

0x12 2 Record type and flags (a)

0x14 4 Data length

0x18 P Object ID

..0x18 P Name

..0x18 P Volume Type (b)

..0x18 1 Zero

..0x19 14 Volume State (string, null padded) (c)

..0x27 1 Volume Type? (3 Normal, 4 RAID)

..0x28 1 Don't know (always 1)

..0x29 1 Volume Number (d)

..0x2A 3 Zeroes

..0x2D 1 Flags? (0x11 Normal, 0x13 RAID, 0x15?, 0x17?)

..0x2E P Number of Children

..0x2E 8 Log Commit ID

..0x36 8 ID? Or 0x00

..0x3E P Size

..0x3E 4 Zeroes

..0x42 1 Partition Type (i.e. 7 represents NTFS)

..0x43 16 Volume ID (GUID???)

..0x53 P ID1?

..0x53 P ID2?

..0x53 P Size (if children?)

..0x53 P Drive Hint (string)

Footnotes:
(a) — Revision 5 of VBLK type 1
(b) — Volume Type: "gen" or "raid5"
(c) — Volume State: "ACTIVE"
(d) — Starts at 5; unused numbers are reused
The flags denote the presence of an optional field.
ID1 and ID2 are mutually exclusive.
The optional fields always appear in the order: ID, Size, Drive Hint.

Table 12.4 shows the layout of the flags for the 0x51 (Volume) section of the VBLK:

Flags Description

0x08 ID1

0x20 ID2

0x80 Size

0x02 Drive Hint
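
A short sketch of how these flag bits could be interpreted when walking the tail of a Volume VBLK; the bit values come from Table 12.4 and the fixed order (ID, Size, Drive Hint) from the footnotes above.

def optional_volume_fields(flags):
    """List which optional Volume VBLK fields follow, in the order they appear on disk."""
    fields = []
    if flags & 0x08:
        fields.append('ID1')
    if flags & 0x20:
        fields.append('ID2')     # ID1 and ID2 are mutually exclusive
    if flags & 0x80:
        fields.append('Size')
    if flags & 0x02:
        fields.append('Drive Hint')
    return fields

print(optional_volume_fields(0x08 | 0x02))   # ['ID1', 'Drive Hint']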

Table 12.5

Component (0x32):

Offset Size Description

0x00 16 Standard VBLK Header

0x10 2 Update Status

0x12 2 Record Type and Flags (a)

0x14 4 Data Length

0x18 P Object ID

..0x18 P Name (string)

..0x18 P Volume State (b)

..0x18 1 Component Type (c)

..0x19 4 Zeroes

..0x1D P Number of Children

..0x1D 8 Log Commit ID

..0x25 8 Zeroes

..0x2D P Parent ID (a Volume)



..0x2D 1 Zero

..0x2E P Stripe Size (in sectors)

..0x2E P Number of Columns

(a) — Revision 3 of VBLK type 2
(b) — Volume State: "ACTIVE"
(c) — Component type: 1 Stripe, 2 Basic or Spanned, 3 RAID

Table 12.6

The flags describe the presence of an optional field, as shown in Table 12.6:

Flags Description

0x08 Both optional fields

Table 12.7

Partition (0x33):

Offset Size Description

0x00 16 Standard VBLK Header

0x10 2 Update Status

0x12 2 Record type and flags (a)

0x14 4 Data Length

0x18 P Object ID

..0x18 P Name (string)

..0x18 4 Zeroes

..0x1C 8 Log Commit ID

..0x24 8 Start

..0x2C 8 Volume Offset

..0x34 P Size

..0x34 P Parent's Object ID (Component)

..0x34 P Disk Object's ID

..0x34 P Component Part Index


(a) — Revision 3 of VBLK Type 3

Table 12.8

The flags describe the presence of an optional field, as shown in Table 12.8:

Flags Description

0x08 Component Part Index

Table 12.9

Disk (0x34):

Offset Size Description

0x00 16 Standard VBLK Header

0x10 2 Update Status

0x12 2 Record type and flags (a)

0x14 4 Data Length

0x18 P Object ID

..0x18 P Name (string)

..0x18 P Disk ID (GUID, string)

..0x18 P Alternate Name

..0x18 4 Zeroes

..0x1D 8 Log Commit ID

(a) — Revision 3 of VBLK Type 4


This VBLK doesn't have any flags

Table 12.10

Disk (0x44):

Offset Size Description

0x00 16 Standard VBLK Header

0x10 2 Update Status

0x12 2 Record type and flags (a)

0x14 4 Data Length

0x18 P Object ID
Offset Size Description

..0x18 P Name (string)

..0x18 16 Disk ID (GUID, binary)

..0x28 16 Disk ID (GUID, binary)

..0x38 3 Zeroes

..0x3B 2 ID?

..0x3D 8 Log Commit ID

(a) — Revision 4 of VBLK Type 4


This VBLK doesn't have any flags

Table 12.11

Disk Group (0x35): As a note, this disk group name is limited to 28 characters.

Offset Size Description

0x00 16 Standard VBLK Header

0x10 2 Update Status

0x12 2 Record type and flags (a)

0x14 4 Data Length

0x18 P Object ID

..0x18 P Name (string)

..0x18 P Disk Group ID (GUID, string)

..0x18 4 Zeroes

..0x1C 8 Log Commit ID

..0x24 P 0xFFFFFFFF

..0x24 P 0xFFFFFFFF

(a) — Revision 3 of VBLK Type 5

Table 12.12

The flags describe the presence of an optional field, as shown in Table 12.12:

Flags Description

0x08 Both optional fields (probably)

Table 12.13

Disk Group (0x45):

Offset Size Description

0x00 16 Standard VBLK Header

0x10 2 Update Status

0x12 2 Record type and flags (a)

0x14 4 Data Length

0x18 P Object ID

..0x18 P Name (string)

..0x18 16 Disk Group ID (GUID, binary)

..0x28 16 Disk Set ID (GUID, binary)

..0x38 4 Zeroes

..0x3C 8 Log Commit ID

..0x44 P 0xFFFFFFFF

..0x44 P 0xFFFFFFFF

(a) — Revision 4 of VBLK Type 5

Table 12.14

The flags describe the presence of an optional field, as shown in Table 12.14:

Flags Description

0x08 Both optional fields (probably)

KLOG
The KLOG, along with the VMDB structures, is in charge of recording new changes made to the database. The KLOG also stores the
sequence in which new VBLKs are added and old ones removed. The KLOG structure is given in Table 12.15.

Table 12.15

Offset Size Description

0x00 4 KLOG Magic Number



0x04 4 Zeroes

0x08 4 Something about extra records

0x0C 4 Log Count (N)

0x10 4 Number of KLOGs

0x14 4 KLOG Index

0x18 1 Don't Know (always 0x03)

0x19 8 Committed ID 1

0x21 8 Pending ID 1

0x29 8 Committed ID N-1

0x31 8 Pending ID N-1

0x39 8 Committed ID N-2

0x41 8 Pending ID N-2

0x49 8 …

16*N+0x09 8 Committed ID 2

16*N+0x11 8 Pending ID 2
