Clipping downloaded data

I have downloaded the data using SentinelHubRequest(). I got an array of 256 x 256 (even though I passed the exact geometry of my farm). I want to understand how Sentinel Hub handles a bounding box vs. the full geometry in the Processing API. Also, is there a simple way to clip the 256 x 256 ndarray? Thanks in advance.

Hi Manish,

What is your expected output? Since rasters always have to be rectangular, you will always get a rectangular raster back.

If you provide a geometry, a raster covering the entire bounding box of the geometry will be returned. Any values outside the geometry are not computed and are set to a no-data value; it is technically not possible to omit those pixels completely.
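
For illustration, a minimal sketch (untested; it assumes sentinelhub-py, Sentinel-2 L2A, and a shapely polygon of your farm called farm_polygon in WGS84) of a request that passes the geometry instead of a plain bounding box:

    from sentinelhub import (
        CRS, DataCollection, Geometry, MimeType, SHConfig,
        SentinelHubRequest, bbox_to_dimensions,
    )

    config = SHConfig()  # assumes your Sentinel Hub credentials are configured

    # farm_polygon is assumed to be a shapely Polygon of the farm in WGS84
    geometry = Geometry(farm_polygon, crs=CRS.WGS84)

    evalscript = """
    //VERSION=3
    function setup() {
      return { input: ["B04", "B08"], output: { bands: 2, sampleType: "FLOAT32" } };
    }
    function evaluatePixel(sample) {
      return [sample.B04, sample.B08];
    }
    """

    request = SentinelHubRequest(
        evalscript=evalscript,
        input_data=[
            SentinelHubRequest.input_data(
                data_collection=DataCollection.SENTINEL2_L2A,
                time_interval=("2023-04-01", "2023-04-30"),
            )
        ],
        responses=[SentinelHubRequest.output_response("default", MimeType.TIFF)],
        geometry=geometry,  # the full polygon, not just its bounding box
        size=bbox_to_dimensions(geometry.bbox, resolution=10),
        config=config,
    )

    # Rectangular array covering the bounding box; pixels outside the polygon are no data
    data = request.get_data()[0]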

Thanks Jonas… I am fetching different bands to calculate NDVI, and pixels outside the AOI are set to 0. But the reflectance value inside my AOI can also be 0, and I want to take the mean value of my patch. When I completely ignore 0 while calculating the mean, pixels inside the AOI get dropped; when I include 0, pixels outside the AOI inflate the pixel count. Any solution for this problem?

Basically, to exclude the no-data values you need to include a condition based on the dataMask band. If you are only looking for the NDVI mean of your area and don’t need the per-pixel values, then you can use the Statistical API. Have a look at this tutorial for how to use it; it covers exactly your use case, calculating NDVI statistics for a geometry.
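
For illustration, a rough sketch (it assumes the request returns NDVI in the first channel and dataMask in the second) of an evalscript that includes dataMask, and of the masked mean in Python:

    import numpy as np

    # Evalscript returning NDVI plus dataMask (dataMask is 1 inside the requested
    # geometry and 0 outside)
    evalscript_ndvi = """
    //VERSION=3
    function setup() {
      return {
        input: ["B04", "B08", "dataMask"],
        output: { bands: 2, sampleType: "FLOAT32" }
      };
    }
    function evaluatePixel(sample) {
      let ndvi = (sample.B08 - sample.B04) / (sample.B08 + sample.B04);
      return [ndvi, sample.dataMask];
    }
    """

    # `data` is assumed to be the (height, width, 2) array returned by the request
    ndvi = data[..., 0]
    valid = data[..., 1] > 0          # True only for pixels inside the geometry

    # Zeros inside the AOI are kept; only pixels outside the geometry are excluded
    mean_ndvi = ndvi[valid].mean()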

Thanks Jonas… For visualization purposes I need pixel-level index values (NDVI, CI, SIPI, etc.), so I am calculating the NDVI value for each pixel, reclassifying the values, and merging similar classes… I am using the code below:

        ndvi_before_reclassified = reclassify(ndvi_before,"NDVI")
        xmin, ymin, xmax, ymax = gdf.iloc[i]["geometry"].exterior.bounds
        height, width = ndvi_before_reclassified.shape
        print(height, width)
        print(xmin, ymin, xmax, ymax)
        xres = (xmax - xmin)/width
        yres = (ymax - ymin)/height
        rows = int(height / yres)
        cols = int(width / xres)
        polygons = []
        for i in range(height):
            for j in range(width):
                value = ndvi_before_reclassified[i, j]
                if value > 0:
                    xmin_i = xmin + j * xres
                    ymin_i = ymin + i * yres
                    xmax_i = xmin_i + xres
                    ymax_i = ymin_i + yres
                    poly = box(xmin_i, ymin_i, xmax_i, ymax_i)
                    polygons.append((value, poly))
        df = gpd.GeoDataFrame(polygons, columns=['category', 'geometry'], crs='epsg:4326')
        # Sort the polygons by category value
        polygons = df.sort_values(by='category')
        # Loop through the polygons and group them by category value
        groups = []
        for category, group in df.groupby('category'):
            # Merge the polygons in each group
            merged = unary_union(group['geometry'])
            # Create a new GeoDataFrame for the merged polygon
            merged_gdf = gpd.GeoDataFrame({'category': [category], 'geometry': [merged]},
                              crs='epsg:4326')
            # Add the merged polygon to the list of groups
            groups.append(merged_gdf)

        # Combine all the groups into a single GeoDataFrame
        merged_polygons = gpd.GeoDataFrame(pd.concat(groups, ignore_index=True),
                               crs='epsg:4326')

Can you please guide me where I am going wrong?
My original shape got changed. :frowning:

Unfortunately I can’t tell from the code you provided what you are trying to achieve.
Please provide more context.

And again, if you only want the mean of your field, please take a look at the tutorial I linked above. This will give you the value immediately without having to do any additional work.

I am trying to do the following steps:

  1. Reclassifies the raster layer of NDVI values using the “reclassify” function: if a pixel value is between 0.2 and 0.4 it is reclassified as 2, if it is between 0.4 and 0.6 it becomes 3, and so on (see the sketch after this list).
  2. Extracts the bounding box coordinates of a polygon (AOI farm).
  3. Calculates the height, width, and resolution of the reclassified raster layer.
  4. Loops through each cell in the reclassified raster layer, creates a polygon for each cell with a non-zero value, and adds it to a list.
  5. Converts the list of polygons to a GeoDataFrame.
  6. Sorts the GeoDataFrame by category value and groups the polygons by category value.
  7. Merges the polygons in each group using the “unary_union” function from the shapely library.
  8. Creates a new GeoDataFrame for each merged polygon and adds it to a list.

The resulting GeoDataFrame contains a set of merged polygons, each representing a contiguous region of cells that fall into the same NDVI class within the original polygon. The class values are stored as the “category” attribute of each polygon. The final GeoDataFrame can be used for further analysis or visualization.
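
For reference, the reclassification in step 1 is roughly equivalent to the sketch below (the thresholds beyond the 0.2–0.4 and 0.4–0.6 ranges mentioned above are illustrative):

    import numpy as np

    def reclassify(index, name="NDVI"):
        # Sketch of the binning described in step 1; bins past 0.6 are illustrative
        bins = [0.2, 0.4, 0.6, 0.8]
        classes = np.digitize(index, bins) + 1   # <0.2 -> 1, 0.2-0.4 -> 2, 0.4-0.6 -> 3, ...
        classes[index <= 0] = 0                  # keep no-data / outside-AOI pixels at 0
        return classes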

Please let me know if my approach has any issues.

I am sorry, but this workflow doesn’t use Sentinel Hub services, so we can’t really help you with it here. If you need help with general geospatial workflows in Python, have a look at https://gis.stackexchange.com/.

But just a general comment: unless it is absolutely necessary, do not move away from the raster data. Instead of converting to a GeoDataFrame, stay with a NumPy array and do your analysis there; building a GeoDataFrame of per-pixel polygons like this is really inefficient.
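
For example, a rough sketch of what that could look like, assuming ndvi_before_reclassified is the class raster from the snippet above, with 0 marking no-data pixels outside the AOI:

    import numpy as np

    classes = ndvi_before_reclassified          # the reclassified raster, kept as an array

    # Per-class pixel counts (multiply by the pixel area for hectares) straight from the raster
    values, counts = np.unique(classes[classes > 0], return_counts=True)
    for value, count in zip(values, counts):
        print(f"class {value}: {count} pixels")

    # For visualization, the class raster can be plotted directly, e.g.:
    # import matplotlib.pyplot as plt
    # plt.imshow(np.ma.masked_equal(classes, 0)); plt.colorbar(); plt.show()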
