Sentinel-1 data in GeoTIFF for a large area

Background

  • I want to process S1 GRD data for large areas (~700 sq. km each) for about 10 locations, so that is 7,000 sq. km worth of data twice a week.
  • I would preferably use Python and, if possible, process everything through the SH platform.
  • I have very limited experience with SAR data; I have mainly worked with optical data in the past.

Data and Processing Specifications:

  • I am using the S1 IW GRD HiRes product (10 m/px) with orthorectification
  • Once the orthorectified data is available, I apply some form of speckle filtering to these images, followed by a thresholding technique for object recognition.
  • I would like to store the pre-processed images in the cloud for later use as training data
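To make the intended pipeline concrete, here is a toy sketch in plain Python: a 3×3 median filter stands in for a real speckle filter (Lee, Refined Lee, ...), followed by a fixed threshold. This is only an illustration of the two steps; real S1 work would operate on calibrated backscatter.

```python
from statistics import median

def median_filter3(img):
    """Apply a 3x3 median filter to the interior of a 2D list of floats."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # border pixels are left untouched
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = median(
                img[j][i] for j in (y - 1, y, y + 1) for i in (x - 1, x, x + 1)
            )
    return out

def threshold(img, t):
    """Binary mask: 1 where the (filtered) value exceeds the threshold."""
    return [[1 if v > t else 0 for v in row] for row in img]

# A uniform scene with one isolated bright speckle pixel in the middle
noisy = [[0.1] * 5 for _ in range(5)]
noisy[2][2] = 0.9
mask = threshold(median_filter3(noisy), 0.5)
# The median filter suppresses the lone speckle, so no false detection remains
```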

What I have tried so far:

  • A simple cURL request through Python. I ran into issues downloading the GeoTIFF images to work with on my local machine. I assume this is some syntax error on my part, but it is probably not the best approach for what I am doing anyway.
  • The SH Python SDK with the Process API. I ran into an issue with the area being too large.
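For reference, the kind of Process API request body involved looks roughly like the dict below, based on my reading of the public API docs (field names should be verified against the API reference; the bbox, time range, and evalscript are placeholders):

```python
import json

# Evalscript returning the VV backscatter band as a single-band GeoTIFF output
evalscript = """
//VERSION=3
function setup() {
  return { input: ["VV"], output: { bands: 1, sampleType: "FLOAT32" } };
}
function evaluatePixel(sample) {
  return [sample.VV];
}
"""

payload = {
    "input": {
        "bounds": {
            "bbox": [12.44, 41.87, 12.53, 41.93],  # placeholder area (WGS84)
        },
        "data": [{
            "type": "sentinel-1-grd",
            "dataFilter": {
                "timeRange": {"from": "2021-01-01T00:00:00Z",
                              "to": "2021-01-08T00:00:00Z"},
                "acquisitionMode": "IW",
                "resolution": "HIGH",   # the HiRes (10 m) product
            },
            "processing": {"orthorectify": True},
        }],
    },
    "output": {
        "width": 512,
        "height": 512,
        "responses": [{"identifier": "default",
                       "format": {"type": "image/tiff"}}],
    },
    "evalscript": evalscript,
}

# This dict would be POSTed (with an OAuth token) to the Process API endpoint
body = json.dumps(payload)
```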

Questions:

  • If I understand correctly, I should be using the Batch Processing API for this. Is that a correct assumption?
  • Is there a way to store the pre-processed images on AWS S3 or GCS directly from the SH platform after batch processing?
  • Am I missing anything else here that I should be considering?

Thank you in advance. - Chinmay

Hi @chinmay,

the “area being too large” issue can easily be solved by splitting the request into smaller parts; the sentinelhub-py SDK has helper utilities for that (large area utilities).
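The idea behind those large area utilities (e.g. `BBoxSplitter` in sentinelhub-py) can be sketched in plain Python: split one bounding box into a grid of sub-boxes and request each one separately.

```python
def split_bbox(bbox, nx, ny):
    """Split bbox = (min_x, min_y, max_x, max_y) into nx * ny sub-boxes."""
    min_x, min_y, max_x, max_y = bbox
    dx = (max_x - min_x) / nx
    dy = (max_y - min_y) / ny
    return [
        (min_x + i * dx, min_y + j * dy,
         min_x + (i + 1) * dx, min_y + (j + 1) * dy)
        for i in range(nx)
        for j in range(ny)
    ]

# Placeholder WGS84 bounding box, split into a 4x4 grid of requestable tiles
tiles = split_bbox((14.0, 45.8, 14.4, 46.1), 4, 4)
print(len(tiles))  # 16 sub-boxes, each small enough for a Process API call
```

The SDK version does more (CRS handling, intersecting with an arbitrary area geometry), but each resulting sub-box is used the same way: one Process API request per tile.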

That being said, the use case you describe seems perfect for Batch Processing: you set the processing configuration (probably just outputting the data) and the AWS S3 bucket where you would like the data stored (the bucket needs to be configured properly), and voilà, the data will be delivered there. Batch Processing is also three times more cost-efficient than our normal Process API.
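Structurally, a Batch API request wraps an ordinary Process API body and adds a tiling grid plus an output location on your own S3 bucket. A hedged sketch of the request body, with field names per my reading of the Batch Processing API docs (the bucket path and grid id are placeholders to verify):

```python
import json

batch_payload = {
    "processRequest": {
        # same structure as a Process API body (input / output / evalscript);
        # width/height are omitted -- resolution comes from the tiling grid
    },
    "tilingGrid": {"id": 0, "resolution": 10.0},  # 10 m, matching GRD HiRes
    "output": {
        "defaultTilePath": "s3://my-bucket/s1-preprocessed/<tileName>/<outputId>.tif"
    },
    "description": "S1 GRD pre-processing run",
}
request_body = json.dumps(batch_payload)
```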

You might also try our CARD4L application, which should do more or less exactly what you want. It is just a simple UI triggering Batch API requests, so everything you see there you can also do by using the API directly.


Thanks for your response @gmilcinski. I will try the Batch Processing API. I just tried the CARD4L app, and it does not accept my request; the browser console says that my user does not have permission to perform that action. I wonder if this is because I am trying it from a basic account.

Regardless, it seems it might be better for me to use the API directly if I want to save the output to a specific S3 bucket.

Note that the CARD4L application has exactly the same “features” as the API (including delivering results to a specific S3 bucket), as it is actually just a user interface to the API. If you look at the network console (as you did), you will see the API calls being generated. So you can use the app to “play around”, then take the API call and integrate it into your process.

Your account has been configured to work with Batch.
