How to get Affine Transformation

Hi!
For my temporal analysis script (again :grinning:) I'm trying to use the rasterstats module to calculate the cloud proportion in blocks. To do this, I need the affine transformation of the images.

I get the cloud mask with a CloudMaskRequest called all_cloud_masks, as in the s2cloudless example.

    for idx, [prob, mask, data] in enumerate(all_cloud_masks):
        cloud = zonal_stats(blocks_merge, mask, affine=affine, nodata=-999,
                            add_stats={'cloud': cloud_pixel_count})
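(For context, `cloud_pixel_count` is a custom statistic passed via `add_stats`; rasterstats calls such functions with a masked array of the pixels inside each zone. A hypothetical implementation, assuming the cloud mask uses 1 for cloudy pixels, might look like:)

```python
import numpy as np

# Hypothetical custom statistic for zonal_stats' add_stats: the fraction of
# cloudy pixels (mask value 1) among the valid pixels of each zone.
def cloud_pixel_count(masked):
    valid = masked.compressed()  # drop nodata / out-of-zone pixels
    if valid.size == 0:
        return None
    return float((valid == 1).sum()) / valid.size
```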

For the moment, I run another WcsRequest to save one image and get the affine transformation with the rasterio module:

    with rasterio.open('image.tiff') as src:  # placeholder path for the saved image
        affine = src.transform

But is it possible to get the affine transformation of the masks more simply?


Yes, you can calculate the transform from the coordinates of the image bounding box and the shape of the image.

There are only four non-constant parameters in the transform matrix. Two of them are the coordinates of the upper-left corner of the image, which you can get from the bounding box. The other two are the resolutions in the x and y directions, which you can calculate by dividing the size of the bounding box by the number of pixels along each dimension of your image.



I have a similar question regarding transformations.

I am using a WcsRequest to get a numpy array for a study area defined by a BBox.

However, the study area inside the BBox has an irregular shape, and I want to mask out the pixels that fall outside the study area boundary.

What I’m doing is getting the data first and then generating a mask manually by rasterizing the polygon that is inside the BBox with rasterio, like this:

    mask = rasterize(
        shapes=((g, 1) for g in aoi_boundary.geometry),
        out_shape=(nrow, ncol),
        transform=transform,
    )

nrow and ncol come from the downloaded array. I generate the transform like this:

    transform = rasterio.transform.from_bounds(*aoi_boundary.total_bounds,
                                               width=ncol, height=nrow)

Then I apply the mask to the downloaded array by multiplying both arrays.

However, I’m worried that the rasterization of the polygon done by rasterio might differ from the one done by Sentinel Hub when clipping the image to the requested BBox. This would mean the pixels are not well aligned, and I’m masking out the wrong pixels.

I'd really appreciate any ideas to improve this.