Fusion without aggregating (mosaicking) the dates

When using a data fusion request with a SentinelHubRequest, the output is always a single image (e.g. the latest acquisition).
Is it possible to receive multiple images, i.e. one for each of the available dates on which both Sentinel-1 and Sentinel-2 had acquisitions?

The workaround I can think of is to request the Sentinel-1 data and the Sentinel-2 data separately via WMS, call get_dates() on each, find the intersection of the two date lists, and then iterate over those dates with the fusion SentinelHubRequest.
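That iteration could be sketched like this (a hedged sketch: the date lists are hypothetical stand-ins for what get_dates() would return, and each single-day interval would then be passed to a fusion SentinelHubRequest in a loop):

```python
from datetime import date

# Hypothetical acquisition-date lists, as returned by get_dates() on the
# two single-sensor requests (real values would come from Sentinel Hub)
s1_dates = [date(2020, 5, 3), date(2020, 5, 9), date(2020, 5, 15)]
s2_dates = [date(2020, 5, 3), date(2020, 5, 8), date(2020, 5, 15)]

# Intersection: days that have both an S1 and an S2 acquisition
common_dates = sorted(set(s1_dates) & set(s2_dates))

# One single-day time interval per common date, to be used as the
# time_interval of a fusion SentinelHubRequest inside a loop
intervals = [(d.isoformat(), d.isoformat()) for d in common_dates]
print(intervals)
```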


Hello,

You can indeed return multiple images using SentinelHubRequest. To do so, you would need to adjust the Evalscript that you are calling and return the multiple acquisitions as bands: please see the second example in this section of the documentation (as well as the return section).

Here is another example of how to access multi-temporal data through the Sentinel Hub python package.
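As a rough illustration of the pattern those docs describe (an assumed single-collection sketch, not the exact documented script: with mosaicking set to "ORBIT", samples becomes an array with one entry per acquisition, and since the output band count must be fixed in setup(), an upper bound is declared here):

```javascript
//VERSION=3
// Assumption: N_DATES is an upper bound on the number of acquisitions
// in the requested time interval; unused slots are zero-filled.
var N_DATES = 5;

function setup() {
  return {
    input: [{ datasource: "S2L2A", bands: ["B04"], mosaicking: "ORBIT" }],
    output: [{ bands: N_DATES, sampleType: "FLOAT32" }]
  };
}

function evaluatePixel(samples) {
  // One output band per acquisition date, in acquisition order
  var out = new Array(N_DATES).fill(0);
  var scenes = samples.S2L2A;
  for (var i = 0; i < Math.min(scenes.length, N_DATES); i++) {
    out[i] = scenes[i].B04;
  }
  return out;
}
```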

Let us know if you have questions about the implementation!

Thank you.
I went through the docs and examples you provided, but since I don't know in advance how many overlapping acquisition dates Sentinel-1 and Sentinel-2 have, I'm not sure how to adjust my script.

evalscript = """
//VERSION=3
function setup() {
  return {
    input: [{
        datasource: "S1GRD",
        bands: ["VV","VH"]
      },
      {
        datasource: "S2L2A",
        bands: ["B01","B02","B03",
                "B04","B05","B06",
                "B07","B08","B8A",
                "B09","B11","B12"]
      }
    ],
    output: [{
      bands: 14,
      sampleType:"FLOAT32"
    }]
  }
}

function evaluatePixel(samples) {
  var s1 = samples.S1GRD[0];
  var s2 = samples.S2L2A[0];
  // Convert S1 backscatter to dB: 10 * log10(x), with a small offset to avoid log(0)
  return [10 * Math.log((s1.VV) + 0.0001) / Math.LN10,
          10 * Math.log((s1.VH) + 0.0001) / Math.LN10,
          s2.B01, s2.B02, s2.B03, s2.B04,
          s2.B05, s2.B06, s2.B07, s2.B08,
          s2.B8A, s2.B09, s2.B11, s2.B12];
}
"""
# imports needed by the request below (sentinelhub-py)
from sentinelhub import BBox, CRS, DataCollection, MimeType, SentinelHubRequest

poly = some polygon  # placeholder geometry
bbox = BBox(bbox=poly, crs=CRS.WGS84)
time_interval = ('2019-12-27', '2021-01-14')
request = SentinelHubRequest(
            evalscript=evalscript,
            input_data=[
                SentinelHubRequest.input_data(
                    data_collection=DataCollection.SENTINEL1_IW,
                    time_interval=time_interval,        
                    other_args={
                        "dataFilter": {"resolution": "HIGH", "acquisitionMode": "IW"},
                        "processing": {
                            "backCoeff": "GAMMA0_TERRAIN",
                            "orthorectify": True,
                            "demInstance": "COPERNICUS"
                        },
                        "id": "S1GRD"
                    }
                ),
                SentinelHubRequest.input_data(
                    data_collection=DataCollection.SENTINEL2_L2A,
                    time_interval=time_interval,        
                    other_args = {"id":"S2L2A"}
                ),
            ],
            responses=[
                SentinelHubRequest.output_response('default', MimeType.TIFF), 
            ],
            data_folder='imgs',  
            bbox=bbox,
            config=config
        )

Hi @tonish,

The solution is not so straightforward, but still doable!

I looked into this a bit and whilst it is possible to do this all in the Evalscript, the solution is not elegant. Since you are working in Python, there is a solution to obtain what you want in an easier way.

You can query the Catalog API (you will need sentinelhub-py >= 3.2.0) to fetch the list of acquisition dates for each dataset. Then you can calculate the overlapping acquisition dates and filter your Data Fusion request to those dates.
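In outline, that looks like the sketch below. The search call is shown only as a comment because it needs credentials and a network connection; the mock results mimic the "properties.datetime" field that Catalog results carry, and the exact result shape should be checked against the sentinelhub-py docs:

```python
from datetime import datetime

# In a real run (sentinelhub-py >= 3.2), something along these lines:
#   catalog = SentinelHubCatalog(config=config)
#   s1_results = list(catalog.search(DataCollection.SENTINEL1_IW,
#                                    bbox=bbox, time=time_interval))
# Mock results standing in for Catalog API features:
s1_results = [{"properties": {"datetime": "2020-05-03T05:21:11Z"}},
              {"properties": {"datetime": "2020-05-09T05:21:10Z"}}]
s2_results = [{"properties": {"datetime": "2020-05-03T10:10:31Z"}},
              {"properties": {"datetime": "2020-05-08T10:10:29Z"}}]

def acquisition_days(results):
    """Reduce each feature's timestamp to a calendar date."""
    return {datetime.fromisoformat(
                f["properties"]["datetime"].replace("Z", "+00:00")).date()
            for f in results}

# Days with acquisitions from both sensors
common = sorted(acquisition_days(s1_results) & acquisition_days(s2_results))
print(common)
```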

Because it’s a bit complicated, @batic and I decided to share a Jupyter Notebook with the processing steps here on the forum for everyone interested. In your case, you can adapt it to request the bands that you need (I only included 3 bands for simplicity). The output of the script is one GeoTIFF per satellite band, and each GeoTIFF contains as many bands as there are common acquisition dates.

Jupyter Notebook: Github link, NBviewer link.


Thanks. This is very helpful!

We have been thinking a little more about the approach because it is quite an interesting problem! Although the solution posted above works, we found a smarter way of getting the results without using the Catalog API.

The difference is that you can do everything in the Evalscript using preProcessScenes (doc):

  • fetch the dates for each sensor
  • find the intersection of the date lists
  • directly filter the scenes considered in the evaluatePixel function

The advantage is that you don’t need the Catalog request, and you don’t need to pass any variables from Python to the Evalscript.
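The three steps above could be sketched roughly as follows. This is only an assumed shape: with data fusion, I take `collections` to be keyed by the datasource ids used in the request (S1GRD, S2L2A), each holding a `scenes` array whose entries carry a `date`; the exact object layout should be checked against the linked documentation:

```javascript
// Reduce a Date to its calendar day, e.g. "2020-05-03"
function dayOf(d) {
  return d.toISOString().slice(0, 10);
}

// Days present in both scene lists
function commonDays(scenesA, scenesB) {
  var daysB = {};
  scenesB.forEach(function (s) { daysB[dayOf(s.date)] = true; });
  var common = {};
  scenesA.forEach(function (s) {
    if (daysB[dayOf(s.date)]) { common[dayOf(s.date)] = true; }
  });
  return common;
}

function preProcessScenes(collections) {
  var common = commonDays(collections.S1GRD.scenes, collections.S2L2A.scenes);
  // Keep only scenes acquired on a day present in both collections,
  // so evaluatePixel only ever sees matched acquisition dates
  collections.S1GRD.scenes = collections.S1GRD.scenes.filter(function (s) {
    return common[dayOf(s.date)];
  });
  collections.S2L2A.scenes = collections.S2L2A.scenes.filter(function (s) {
    return common[dayOf(s.date)];
  });
  return collections;
}
```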

The links in my post above were updated. If you want to compare with the old version, you can check the previous version in Git.
