NetCDF may use HDF5 as a storage format (when files are created with NC_NETCDF4/NF_NETCDF4/NF90_NETCDF4). For those files, the writer may control the size of the chunks of data that are written to the HDF5 file, along with other aspects of the data, such as endianness, the shuffle and checksum filters, and on-the-fly compression/decompression of the data.
The chunk sizes of a variable are specified after the variable is defined, but before any data are written. If chunk sizes are not specified for a variable, default chunk sizes are chosen by the library.
The selection of good chunk sizes is a complex topic, and one that data writers must grapple with. Once the data are written, there is no way to change the chunk sizes except to copy the data to a new variable.
Chunks should match read access patterns; the best chunk performance is achieved by writing chunks that exactly match the size of the subsets of data that will be read. When multiple read access patterns are in use, no single choice of chunk sizes is best for all of them.
A good discussion of chunking can be found in the HDF5-EOS XIII workshop presentation (http://hdfeos.org/workshops/ws13/presentations/day1/HDF5-EOSXIII-Advanced-Chunking.ppt).