Looking at this a bit more:
A disk mounter / reader can determine the LBA size of the writer by:
1. Verify the MBR signature at byte offset 510 of the disk.
2. Verify the MBR partition type is protective (or a valid hybrid if desired).
3. Search for the GPT EFI header signature starting at byte 512. (Search at offsets that are powers of two? Or that are multiples of 512?) The offset of the signature is the LBA size.
4. Verify MyLBA is 1.
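A rough sketch of that check in Python (the field offsets come from the standard MBR/GPT layouts; `detect_lba_size` and the candidate list are my own invention, not any driver's real code):

```python
import struct

GPT_SIG = b"EFI PART"

def detect_lba_size(img: bytes, candidates=(512, 1024, 2048, 4096)):
    """Guess the block size a GPT was written with, per the steps above.

    `img` is the raw start of the disk. Hypothetical helper for illustration.
    """
    # Step 1: MBR boot signature at byte offset 510.
    if img[510:512] != b"\x55\xaa":
        return None
    # Step 2: first MBR partition entry type 0xEE = protective GPT.
    # Entry 1 starts at offset 446; its type byte is at offset 450.
    if img[450] != 0xEE:
        return None
    # Step 3: look for the GPT header signature at each candidate block size.
    for bs in candidates:
        if img[bs:bs + 8] == GPT_SIG:
            # Step 4: MyLBA (bytes 24..31 of the header) must be 1.
            my_lba = struct.unpack_from("<Q", img, bs + 24)[0]
            if my_lba == 1:
                return bs
    return None

# Minimal fake disk: protective MBR plus a GPT header at byte 4096 (4K blocks).
img = bytearray(8192)
img[510:512] = b"\x55\xaa"
img[450] = 0xEE
img[4096:4104] = GPT_SIG
struct.pack_into("<Q", img, 4096 + 24, 1)  # MyLBA = 1
print(detect_lba_size(bytes(img)))  # 4096
```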
The standard specifies that any data in LBA 0 past the 512 byte offset mark is filled with 0s. This should ensure no false matches, assuming people actually do it.
So it should be possible to mount a GPT written by a writer with a different LBA size. Now does anyone do it?
WRT leaving room to adjust an existing GPT for a different block size: space could be left after the last table entry, before the first partition. This can be represented in FirstUsableLBA if allowed.
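To put numbers on that: the spec-minimum entry array is 128 entries x 128 bytes = 16 KiB, so the array's byte range, and where a generously aligned FirstUsableLBA would sit, look like this (the 1 MiB alignment is just an assumption for illustration):

```python
ENTRY_ARRAY_BYTES = 128 * 128  # spec-minimum: 128 entries x 128 bytes = 16 KiB

def layout(bs, first_usable_bytes=1024 * 1024):
    """Byte range of the entry array plus FirstUsableLBA at block size `bs`."""
    array_start = 2 * bs                         # entry array begins at LBA 2
    array_end = array_start + ENTRY_ARRAY_BYTES  # one past the last array byte
    return array_start, array_end, first_usable_bytes // bs  # FirstUsableLBA

for bs in (512, 4096):
    print(bs, layout(bs))
# 512  -> array occupies bytes 1024..17408,  FirstUsableLBA 2048
# 4096 -> array occupies bytes 8192..24576,  FirstUsableLBA 256
```

Either way the first partition starts at the same byte offset, so the gap between the array and FirstUsableLBA is what absorbs the re-block-sizing.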
Bill
-----Original Message----- From: William Mills [mailto:wmills@ti.com] Sent: Sunday, July 1, 2018 10:38 AM To: arm.ebbr-discuss@arm.com; Architecture Mailman List Subject: Questions about GPT
All,
I rely on your greater knowledge to help me understand these questions. Thanks in advance.
1) GPT and block size
GPT uses LBAs for its data structures. The size of a block is historically 512 B but is moving to larger sizes (4 KB). The code needs to handle this on a per-device-mount basis. How does the driver know the block size used in the LBA?
1A) By querying the device 1B) Some MBR magic?
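For 1A on Linux, the kernel will report the logical block size directly via the BLKSSZGET ioctl. Note this only works on an actual block device, not on a plain image file:

```python
import fcntl
import struct

BLKSSZGET = 0x1268  # Linux block-device ioctl: logical sector size

def logical_block_size(path):
    """Ask the kernel for a block device's logical block size (option 1A).

    Works only on block devices (e.g. /dev/sdb) on Linux; raises OSError
    (ENOTTY) for regular files.
    """
    with open(path, "rb") as f:
        buf = fcntl.ioctl(f.fileno(), BLKSSZGET, struct.pack("I", 0))
        return struct.unpack("I", buf)[0]
```

Usage would be something like `logical_block_size("/dev/sdb")`, returning 512 or 4096 depending on the device.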
If 1A, then that means to me that dd if=/dev/sdb of=/dev/sdc won't produce a usable image on sdc if its block size is different from sdb's.
(Of course I also assume that the total space on sdc is == or > that of sdb. Which brings me to ...)
2) Can GPT be grown?
In the above example if sdc is much bigger than sdb,
I presume this is OK, at least as long as the GPT header in LBA 1 passes its CRC. Mounters won't query the drive size and refuse to mount the GPT just because it does not cover the whole disk, right?
Now what happens if LBA 1 becomes corrupted? Does the driver query the drive size and block size and look at drive_size - block_size for the backup GPT header? Again, does it use the block size from the device or does it try something else? (I suppose it could try several block sizes until it found a good CRC. However, it does seem that it must assume the redundant copy is at the end of the physical disk.)
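That several-block-sizes probe could look something like this sketch (`find_backup_header` is hypothetical; the field offsets and the zeroed-CRC rule come from the GPT header layout):

```python
import struct
import zlib

GPT_SIG = b"EFI PART"

def find_backup_header(img: bytes, candidates=(512, 4096)):
    """Probe for the backup GPT header at drive_end - block_size.

    Tries each candidate block size and accepts the first offset whose
    signature, MyLBA, and header CRC32 all check out.  Illustrative only.
    """
    for bs in candidates:
        off = len(img) - bs
        hdr = img[off:off + 92]
        if hdr[0:8] != GPT_SIG:
            continue
        hdr_size = struct.unpack_from("<I", hdr, 12)[0]
        my_lba = struct.unpack_from("<Q", hdr, 24)[0]
        if my_lba != len(img) // bs - 1:   # backup header sits in the last LBA
            continue
        # Header CRC32 is computed with its own CRC field (bytes 16..19) zeroed.
        full = img[off:off + hdr_size]
        zeroed = full[:16] + b"\x00\x00\x00\x00" + full[20:]
        if zlib.crc32(zeroed) == struct.unpack_from("<I", hdr, 16)[0]:
            return bs
    return None

# Fake 64 KiB "disk" with a valid backup header in its last 4K block.
disk = bytearray(64 * 1024)
hdr = bytearray(92)
hdr[0:8] = GPT_SIG
struct.pack_into("<I", hdr, 12, 92)                      # HeaderSize
struct.pack_into("<Q", hdr, 24, 64 * 1024 // 4096 - 1)   # MyLBA = last LBA
struct.pack_into("<I", hdr, 16, zlib.crc32(bytes(hdr)))  # CRC (field was zero)
disk[-4096:-4096 + 92] = hdr
print(find_backup_header(bytes(disk)))  # 4096
```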
So even if the GPT is "mounted" OK, the extra space on the drive is not usable, even for new partitions. Are there utilities that will "grow" the GPT? Such growing would find the new end of disk and move the redundant GPT table & header there.
3) Is it actually required that the partition array start at LBA2?
If not, then it would be possible to create a GPT assuming 512 B blocks but allow it to be "re-block-sized" later by leaving 7 512 B blocks free before the table. Of course the partitions themselves should be aligned and sized to multiples of the max block size expected. This is probably already done, as you would want them to align to the preferred read/write size, and those will almost certainly be larger than 512 B.
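For reference, here is the arithmetic for where the header (LBA 1) and entry array (LBA 2) land under each block size, which is what determines how many 512 B blocks must stay free:

```python
# Byte offsets of the GPT header (LBA 1) and entry array (LBA 2) per block
# size, plus the same offsets expressed in 512 B LBAs.  A 512B-written layout
# must keep those 512B-LBA slots clear if it is ever to be re-block-sized.
for bs in (512, 4096):
    header_byte = 1 * bs
    array_byte = 2 * bs
    print(bs, header_byte, array_byte, header_byte // 512, array_byte // 512)
# 512  -> header at byte 512  (512B LBA 1), array at byte 1024 (512B LBA 2)
# 4096 -> header at byte 4096 (512B LBA 8), array at byte 8192 (512B LBA 16)
```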
Why?
The main case I am thinking about is:
wget http://downloads.new-wizbang-os.org/images/latest/aarch64-disk.img
dd if=aarch64-disk.img of=/dev/my-usb-sd-adapter
Then boot the image, and the OS will resize the GPT and the last filesystem to cover the 16 GB of my SD card even though they only require a minimum of 2 GB.
Thanks, Bill
---------------- William A. Mills Chief Technologist, Open Solutions, SDO Texas Instruments, Inc. 20450 Century Blvd Germantown MD 20878 240-643-0836