Next: Bundles, Previous: Niceness, Up: NNCP   [Index]


Chunked files

Huge files can be transferred by dividing them into smaller chunks. Each chunk is treated as a separate file, producing a separate outbound packet unrelated to the other ones.

This is useful when your removable storage device has a smaller capacity than the file itself. You can transfer the chunks on different storage devices and/or at different times, reassembling the whole file on the destination node.

Splitting is done with the nncp-file -chunked command, and reassembling with the nncp-reass command.

Chunking FILE produces FILE.nncp.meta, FILE.nncp.chunk0, FILE.nncp.chunk1, … files. All .nncp.chunkXXX files can be concatenated together to produce the original FILE.
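For illustration only, here is a minimal Go sketch of such a concatenation, assuming the unpadded decimal chunk numbering shown above (a plain lexicographic glob would misorder chunk10 before chunk2, so chunks are opened in numeric order). In practice nncp-reass does this for you and also verifies the checksums recorded in the .nncp.meta file.

package main

import (
	"fmt"
	"io"
	"os"
)

// Illustrative only: concatenate FILE.nncp.chunk0, FILE.nncp.chunk1, ...
// in numeric order into FILE.
func main() {
	base := os.Args[1] // e.g. "FILE"
	out, err := os.Create(base)
	if err != nil {
		panic(err)
	}
	defer out.Close()
	for i := 0; ; i++ {
		chunk := fmt.Sprintf("%s.nncp.chunk%d", base, i)
		in, err := os.Open(chunk)
		if os.IsNotExist(err) {
			break // no more chunks
		}
		if err != nil {
			panic(err)
		}
		if _, err := io.Copy(out, in); err != nil {
			panic(err)
		}
		in.Close()
	}
}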

The .nncp.meta file contains information about the file size, the chunk size and the chunks’ hash checksums. It is an XDR-encoded structure:

+------------------------------+---------------------+
| MAGIC | FILESIZE | CHUNKSIZE | HASH0 | HASH1 | ... |
+------------------------------+---------------------+
              XDR type                          Value
Magic number  8-byte, fixed length opaque data  N N C P M 0x00 0x00 0x02
File size     unsigned hyper integer            Whole reassembled file’s size
Chunk size    unsigned hyper integer            Size of each chunk (except for the
                                                last one, which can be smaller)
Checksums     variable length array of 32-byte  Merkle Tree Hashing checksum of
              fixed length opaque data          each chunk
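As an illustration only (not NNCP’s actual implementation, which uses an XDR codec), the following minimal Go sketch decodes this header by hand: XDR stores integers big-endian, writes fixed-length opaque data verbatim, and prefixes variable-length arrays with a 4-byte element count. Field names are hypothetical.

package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"os"
)

// metaHeader mirrors the structure described in the table above.
type metaHeader struct {
	Magic     [8]byte    // fixed-length opaque data
	FileSize  uint64     // XDR unsigned hyper, big-endian
	ChunkSize uint64     // XDR unsigned hyper, big-endian
	Checksums [][32]byte // one 32-byte checksum per chunk
}

func readMeta(r io.Reader) (*metaHeader, error) {
	var m metaHeader
	if _, err := io.ReadFull(r, m.Magic[:]); err != nil {
		return nil, err
	}
	if err := binary.Read(r, binary.BigEndian, &m.FileSize); err != nil {
		return nil, err
	}
	if err := binary.Read(r, binary.BigEndian, &m.ChunkSize); err != nil {
		return nil, err
	}
	// XDR variable-length arrays are prefixed with a 4-byte element count.
	var count uint32
	if err := binary.Read(r, binary.BigEndian, &count); err != nil {
		return nil, err
	}
	m.Checksums = make([][32]byte, count)
	for i := range m.Checksums {
		if _, err := io.ReadFull(r, m.Checksums[i][:]); err != nil {
			return nil, err
		}
	}
	return &m, nil
}

func main() {
	fd, err := os.Open(os.Args[1]) // path to some FILE.nncp.meta
	if err != nil {
		panic(err)
	}
	defer fd.Close()
	m, err := readMeta(fd)
	if err != nil {
		panic(err)
	}
	fmt.Printf("magic %q: file size %d, chunk size %d, %d chunks\n",
		m.Magic[:], m.FileSize, m.ChunkSize, len(m.Checksums))
}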

It is strongly advisable to reassemble incoming chunked files on a ZFS dataset with the deduplication feature enabled. This can be more CPU and memory hungry, but it saves your disk’s IO and keeps free space from being (temporarily) polluted. Pay attention that your chunks must be either equal to, or divisible by, the dataset’s recordsize value for deduplication to work. ZFS’s default recordsize is 128 KiB, so it is advisable to chunk your files into 128, 256, 384, 512, etc KiB blocks.
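As a small illustration of that rule (a hypothetical helper, not part of NNCP), the following Go snippet rounds a desired chunk size up to the nearest multiple of the dataset’s recordsize:

package main

import "fmt"

// alignChunkKiB is illustrative only: round a desired chunk size (in KiB)
// up to the nearest multiple of the ZFS dataset's recordsize (in KiB),
// so that identical chunks land on identically aligned records and can
// be deduplicated.
func alignChunkKiB(desired, recordsize uint64) uint64 {
	if desired <= recordsize {
		return recordsize
	}
	return (desired + recordsize - 1) / recordsize * recordsize
}

func main() {
	// With the default 128 KiB recordsize, 200 KiB is rounded up to 256 KiB.
	fmt.Println(alignChunkKiB(200, 128))
}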


Next: Bundles, Previous: Niceness, Up: NNCP   [Index]