test averaging on input
Capturing the results of my discussion with Eva last week...
Currently some quicklooks appear somewhat speckly, e.g. isolated light pixels on a dark background. I think this is caused by the tendency of striding to occasionally select outlier pixels (effectively we are seeing noise due to sub-sampling). I have observed that reducing the stride changes the pattern of speckliness, but does not noticeably improve it. I suspect that providing larger arrays to the plotting routines does not help because some kind of striding is done internally within the plotting routines themselves.
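The sub-sampling effect can be seen with a toy demo (a sketch, not our actual quicklook code): a synthetic dark image with a sprinkling of bright outlier pixels keeps roughly the same outlier fraction after striding, whatever the stride, so the speckles move around but never go away.

```python
import numpy as np

rng = np.random.default_rng(42)

# Mostly dark synthetic image with ~1% isolated bright outlier pixels.
img = np.zeros((1000, 1000))
img[rng.random(img.shape) < 0.01] = 1.0

# Striding sub-samples the image: any outlier that lands on the stride
# grid survives as an isolated light pixel in the quicklook.
quicklook = img[::10, ::10]

# A smaller stride just picks a different grid, so the speckle *pattern*
# changes but the bright-pixel fraction stays about the same.
finer = img[::5, ::5]
print(quicklook.mean(), finer.mean())  # both close to the original ~0.01
```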
Task is to try averaging discrete blocks of pixels on ingest, instead of striding. Block size would be chosen similarly to the current auto-stride, to give the smallest array that is larger than the image size. One complication is that missing values will have to be screened out before averaging (one good value per block should be enough). Also, data may have to be discarded near the edges to avoid averaging under-sized blocks.
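A minimal sketch of what that block-averaging step could look like, assuming the data arrives as a 2-D numpy array with NaN marking missing values (`block_average` and its signature are illustrative names, not existing code):

```python
import warnings
import numpy as np

def block_average(data, block):
    """Average non-overlapping (block x block) pixel blocks of a 2-D array.

    NaNs are treated as missing and screened out of each block's mean:
    one good value is enough, and an all-missing block comes out as NaN.
    Partial blocks at the right/bottom edges are discarded.
    """
    ny, nx = data.shape
    # Discard edge data so only full-sized blocks are averaged.
    trimmed = data[: (ny // block) * block, : (nx // block) * block]
    blocks = trimmed.reshape(ny // block, block, nx // block, block)
    with warnings.catch_warnings():
        # All-NaN blocks legitimately yield NaN; silence numpy's warning.
        warnings.simplefilter("ignore", category=RuntimeWarning)
        return np.nanmean(blocks, axis=(1, 3))
```

For a 4x5 input with block size 2, the fifth column is trimmed away and the result is a 2x2 array of block means; a single NaN inside a block simply drops out of that block's average.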
If averaging produces significantly better images, we should assess the impact on plotting times and consider adding it in a future release. We could offer a command-line option to choose between striding and averaging.
Here is an example that looks speckly to me (though note there is nothing special about this example):