Temporal Analysis is invalid for API

I tried to use Sentinel-2 data to compute statistics along the time dimension for each pixel in an area of interest (for example, the percentile, median, or max/min of each pixel). I know I could first download all the images in the time interval and compute this on my local PC, but according to the official Sentinel Hub description, this can be done in the cloud through the API. I used the S2L1CWCSInput function from the eo-learn library in a Python environment to submit the request through the EVALSCRIPT custom URL parameter, with the time-interval parameter of the execute() function set to 2017/01/01 - 2017/12/31. However, in my tests, using the debugging code "throw new Error(samples.length)", I found that the length of the input parameter (samples) of the evaluatePixel function was always one (sometimes two). I think this means that only data from one time (one scene) was being processed.

I have already set the layer in the Dashboard with "temporal: true". I referred to several example custom scripts for multi-temporal data processing, covering API versions v1, v2, and v3. I tested them in Python, and the length of the input parameter (samples) of evaluatePixel was still always one, for example with the agriculture_growth_stage example script from GitHub. I also confirmed this in the Sentinel Hub Playground and in the Dashboard layer preview.

Some scripts cannot be executed at all, such as https://github.com/sentinel-hub/custom-scripts/blob/master/sentinel-2/s2gm/script.js. I have tried it in Python, the Playground, and the Dashboard.

I would like to ask how to use a custom evalscript to get information along the time dimension of each pixel, or to get all the data in the time interval at each pixel through evaluatePixel(). Can evaluatePixel() only receive the sample of a single scene at a time? Is temporal analysis actually unavailable through the OGC API and the Sentinel Hub API in Python?

Here is my custom_evalscript:

//VERSION=2

function setup(ds) {
    return {
        components: [ds.B04, ds.B08],
        output:[{
                id: 'ndvi',
                sampleType: SampleType.AUTO,
                componentCount: 1
            }],
        temporal: true
        }
}


function evaluatePixel(samples) {
    // debug: report the number of samples
    var message = samples.length;
    throw new Error(message);

    var ndvi = [];

    for (var i = 0; i < samples.length; i++) {
        ndvi[i] = index(samples[i].B08, samples[i].B04);
    }

    var ndvi0 = index(samples[0].B08, samples[0].B04);

    return [ndvi0];
}

My Python code:

import os

from eolearn.core import LinearWorkflow, FeatureType, OverwritePermission, SaveToDisk
from eolearn.io import S2L1CWCSInput
from sentinelhub import BBox, CRS, CustomUrlParam

# custom_script holds the evalscript above as a string
add_data = S2L1CWCSInput(
    layer='BANDS-S2-L1C',
    feature=(FeatureType.DATA, 'BANDS'),
    custom_url_params={CustomUrlParam.EVALSCRIPT: custom_script},
    resx='10m',
    resy='10m',
    maxcc=0.2
)

path_out = './eopatches/'
if not os.path.isdir(path_out):
    os.makedirs(path_out)
save = SaveToDisk(path_out, overwrite_permission=OverwritePermission.OVERWRITE_PATCH)

workflow = LinearWorkflow(
    add_data,
    save
)

time_interval = ['2017-01-01', '2017-12-31']

extra_param = {
    add_data: {'bbox': BBox({'min_x': 112, 'max_x': 112.01, 'min_y': 34, 'max_y': 34.01}, CRS.WGS84), 'time_interval': time_interval},
    save: {'eopatch_folder': 'eopatch_test'},
}

workflow.execute(extra_param)

And the output from "throw new Error(samples.length)":

DownloadFailedException: During execution of task S2L1CWCSInput: Failed to download from: ...

with HTTPError:
400 Client Error: Bad Request for url: ...

Server response: "Failed to evaluate script! 

evalscript.js:19: Error: 1 
throw new Error(message) 
^ 

Error: 1 
at evaluatePixel (evalscript.js:19:11)"

Is it possible that the temporal analysis function of the API has been intentionally disabled by the Sentinel Hub team for some special reason, such as computation cost?

Temporal functions are not banned (they do, however, impact the "spent processing units", which might hit account limits faster).
I am not sure, though, whether eo-learn has an option to work with temporal custom scripts. This has never been done yet, even though there is no reason why it should not work.

Your script is a bit wrong in the return part.
This should work:

//VERSION=2

function setup(ds) {
    return {
        components: [ds.B04,ds.B08],
        output: [
            {
                id: "default",
                sampleType: SampleType.AUTO,
                componentCount: 1
            }
        ],
        temporal: true
    }
}
function evaluatePixel(samples) {
    var ndvi = []

    for (var i = 0; i < samples.length; i++) {
        ndvi[i] = index(samples[i].B08, samples[i].B04);
    }

    return {
        default: [ndvi[0]]
    }
}

Thanks gmilcinski, but I have a similar problem.

The fact is that when I use sentinel-playground-temporal, I can't get the debug information, so I can't inspect the input parameter (samples) of evaluatePixel(). Even when the script does run, samples may still contain only one element, which would not cause any error in the for loop. And just as yutouwang95 said, some official examples can't be run at all (in eo-learn, the Python API, or sentinel-playground-temporal), such as the Sentinel-2 Global Mosaic best-pixel-selection script.

And I tried your modified script; when I use throw new Error(samples.length) to debug, the output still tells me that I can only get one sample at a time.

We have met a similar problem. Temporal analysis seems to be unavailable. I hope someone can help us.

Not sure how you assess that there is only one sample.
I modified the script a bit to show the "number" of samples for each pixel:

You will see that not everything is black (0) or dark grey (0.2)… meaning that there is certainly more than one sample.

In terms of debugging, I know this is tricky. We are trying to find the best way to address this. If you consider that each individual pixel on your screen (there are probably 2 million of them) runs through the same loop, which can contain tens or hundreds of scenes, you will see that it is not an easy thing to handle.

I hope this helps?

If I put your code in my script:
var message = samples.length
throw new Error(message)

I get results:
<![CDATA[ Failed to evaluate script!
evalscript.js:19: Error: 7
throw new Error(message)

So it means there are 7 samples…

Thanks for your careful answer.

I ran your script in the Playground, and the result is the same as yours. However, I met some problems in Python. I used the modified script as you suggested. When I set the corresponding layer in my instance to "temporal: true", the modified V2 script cannot execute successfully; it fails with the error "During execution of task S2L1CWCSInput: Numpy array of FeatureType.MASK feature has to have 4 dimensions".

Only when the layer is set to "temporal: false" and the V2 script is also set to "temporal: false" does it work.

Also, when I tried the example V1 NDVI script, the layer needed to be set to "temporal: false" as well; otherwise, the same error occurred. Why is this so?

The example V1 NDVI script:

let viz = new Identity();

function evaluatePixel(samples) {
    let val = index(samples[0].B08, samples[0].B04);
    return viz.process(val);
}

function setup(ds) {
    setInputComponents([ds.B04, ds.B08]);
    setOutputComponentCount(1);
}

This problem seems to be connected with this one.

A description of what normally happens when a request is made in eo-learn/sentinelhub-py:

  • make a Sentinel Hub request for N bands using sentinelhub-py, with or without eo-learn
    • eo-learn/sentinelhub-py actually requests N bands + a transparency layer, which is used to determine pixels with valid data
    • Sentinel Hub usually returns a [height, width, N+1]-dimensional tiff/array
    • eo-learn/sentinelhub-py interprets the last channel as the transparency layer and removes it, adding [height, width, N] to the EOPatch

It looks like when the layer is set to "temporal: true", the transparency layer is not added to the returned array, and eo-learn/sentinelhub-py removes a valid band, yielding an array that is one band short.

In your case, you are making a single-band request, which gets removed (as it is interpreted as transparency), and hence the resulting array no longer has the proper shape.

Solution for the moment is to set layer to “temporal: false”.
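The channel-stripping behaviour described above can be sketched in a few lines of NumPy. This is an illustrative sketch of the assumed client logic only, not eo-learn's actual code; the function name `strip_transparency` is hypothetical:

```python
import numpy as np

# Sketch (assumption): the client treats the last channel of the returned
# array as a transparency mask and removes it before building the EOPatch.
def strip_transparency(response):
    """Split a [height, width, N+1] response into data and validity mask."""
    data = response[..., :-1]  # first N channels: requested bands
    mask = response[..., -1:]  # last channel: assumed transparency layer
    return data, mask

# Normal case: 1 requested band + transparency channel, shapes work out.
ok = np.ones((4, 4, 2))
data, mask = strip_transparency(ok)
print(data.shape)  # (4, 4, 1)

# "temporal: true" case as described above: the server returns only the
# 1 requested band, the client still strips one channel, and the data
# array ends up empty, causing downstream shape errors.
broken = np.ones((4, 4, 1))
data, mask = strip_transparency(broken)
print(data.shape)  # (4, 4, 0)
```

This is why a single-band request fails outright, while multi-band requests silently lose their last band.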

Thanks so much for your kind and patient response, Anze! Your explanation helps me a lot.

When using a custom script that returns the original bands without any other calculation, such as "return [B01,B02,B03]", your explanation seems right. If I return only one band, such as "return [B01]", it raises the error (During execution of task S2L1CWCSInput: Numpy array of FeatureType.MASK feature has to have 4 dimensions).
However, if I increase the number of returned bands to 2, 3, 4… with a custom script that involves a calculation, such as NDVI, the error always persists.
The situation still looks pessimistic. I suspect that the OGC request cannot handle any moderately complex custom script involving calculation when the layer is set to "temporal: true". And the output of "var message = samples.length; throw new Error(message)" is always one in my runs, just as for yutouwang95, regardless of the "temporal" setting (neither true nor false works). If that holds, temporal analysis is effectively impossible with custom scripts. In addition, the Python API and the eo-learn library seem not robust enough, and several serious bugs exist.

My question now is: what exactly changes when the layer is set to "temporal: true", or when the "temporal" parameter is set to true in the custom script (apart from the change in the transparency band that you already explained)? There is no documentation describing this, which makes debugging very difficult.
Based on all the facts above, is temporal analysis actually impossible through any API except Sentinel-playground-temporal? Does that mean a custom script can only access samples[0] in evaluatePixel(), so that no moderately complex temporal custom script can be run?

I would appreciate your help!

Best wishes

Yes, this behavior is explained above.

I don’t understand what you mean by this. Please provide code that reproduces the problem.

We all strive for perfection, but perfection is impossible to achieve. If you find any bugs or have a feature request, please open an issue on our GitHub repository. Note: support for temporal custom scripts in eo-learn is a missing feature, not a bug, in my opinion.

Here someone else can give a better answer than me.

Best regards,
Anze

Thank you, @anze.zupanc. Your kind and timely answers always help me a lot. And thanks to Sentinel Hub for providing such a great and convenient platform for earth-observation applications, from which we all benefit.

Here is my code, in which three NDVI bands are returned. When using it from Python, the error "During execution of task S2L1CWCSInput: Numpy array of FeatureType.MASK feature has to have 4 dimensions" still occurs.

//VERSION=2

function setup(ds) {
    return {
        components: [ds.B04,ds.B08],
        output: [
            {
                id: 'default',
                sampleType: SampleType.AUTO,
                componentCount: 1
            },
            {
                id: 'default1',
                sampleType: SampleType.AUTO,
                componentCount: 1
            },
            {
                id: 'default2',
                sampleType: SampleType.AUTO,
                componentCount: 1
            }
        ],
        temporal: true
    }
}
function evaluatePixel(samples) {
    var ndvi0 = index(samples[0].B08, samples[0].B04);
    var ndvi1 = index(samples[0].B08, samples[0].B04);
    var ndvi2 = index(samples[0].B08, samples[0].B04);

    return {
        default: [ndvi0],
        default1: [ndvi1],
        default2: [ndvi2]
    }
}

What’s more, I have tried the modified script from @gmilcinski, but I still only got one instead of seven for the length of samples (the input parameter of the evaluatePixel() function), just like @yutouwang95. Is the difference in our outputs still due to the configuration of the layer? What do you think is the major cause?

//VERSION=2

function setup(ds) {
    return {
        components: [ds.B04,ds.B08],
        output: [
            {
                id: "default",
                sampleType: SampleType.AUTO,
                componentCount: 1
            }
        ],
        temporal: true
    }
}
function evaluatePixel(samples) {
    // debug: report the number of samples
    var message = samples.length;
    throw new Error(message);

    var ndvi = [];

    for (var i = 0; i < samples.length; i++) {
        ndvi[i] = index(samples[i].B08, samples[i].B04);
    }

    return {
        default: [ndvi[0]]
    }
}

In fact, what I want to achieve is the ability to perform temporal analysis through the API (including Python), not just through Sentinel-playground-temporal; this requires the evaluatePixel() function to receive more than one sample at a time. If this is impossible for now, please just tell me, and I will find another way to achieve it.
By the way, what is the difference between the "temporal: true" configuration of the layer and the "temporal: true" setting in the setup() function of a version 2 custom script? To perform temporal analysis, do we need both of them to be true, or just one of them?

I like this platform very much, and I hope we can work together to make Sentinel Hub better and better. In addition, we would be happy to contribute more detailed documentation and tutorials for Sentinel Hub and the eo-learn library.
I would appreciate your kind help.

Best wishes,
Han

After careful testing, it appears that individual users cannot use temporal analysis through the OGC API. Our test used the layer "BANDS-S2-L1C" through the WCS request method, and I confirmed that the layer was set to "temporal: true" and that the related settings are correct. The script used for the test, provided by @gmilcinski, is shown below, and the output is available at https://github.com/geoliuh18/testEOLEARN/blob/master/BANDS.npy. The output is 0.2 everywhere, which means samples.length is 1 and there is no temporal dimension. However, temporal analysis can be performed on sentinel-playground-temporal.

//VERSION=2

function setup(ds) {
    return {
        components: [ds.B04,ds.B08],
        output: [
            {
                id: "default",
                sampleType: SampleType.AUTO,
                componentCount: 1
            }
        ],
        temporal: true
    }
}
function evaluatePixel(samples) {
    var res;
    if (samples.length == 0) res = 0;
    else if (samples.length == 1) res = 0.2;
    else if (samples.length == 2) res = 0.4;
    else if (samples.length < 6) res = 0.6;
    else if (samples.length < 10) res = 0.8;
    else res = 1;
    return {
        default: [res]
    }
}

The potential reasons may include:

  • Sentinel Hub has officially disabled the temporal analysis function based on the instance IDs of individual users.
  • Temporal analysis requires additional settings beyond "temporal: true" for the layer, which the official documentation does not explain.

In addition, I have confirmed that, when the layer is set to "temporal: true", the error (During execution of task S2L1CWCSInput: Numpy array of FeatureType.MASK feature has to have 4 dimensions) comes from a bug in the eo-learn library caused by the loss of the transparency band (thanks to @anze.zupanc for the help).

We need to have a deeper look into this, but I can immediately state that there is no limitation in terms of "individual users", so the problem must be elsewhere.
Can you paste here an example of a REQUEST URL you have made? Do mask half of the instance ID.

Thanks, @gmilcinski.

Here is one of my request URLs.

https://services.sentinel-hub.com/ogc/wcs/ba30c17d-dba6-4cc0-8xxxxxxxxxxxxxxxx?SERVICE=wcs&MAXCC=80.0&ShowLogo=False&Transparent=True&EvalScript=Ly9WRVJTSU9OPTIKCmZ1bmN0aW9uIHNldHVwKGRzKSB7CiAgICByZXR1cm4gewogICAgICAgIGNvbXBvbmVudHM6IFtkcy5CMDQsZHMuQjA4XSwKICAgICAgICBvdXRwdXQ6IFsKICAgICAgICAgICAgewogICAgICAgICAgICAgICAgaWQ6ICdkZWZhdWx0JywKICAgICAgICAgICAgICAgIHNhbXBsZVR5cGU6IFNhbXBsZVR5cGUuQVVUTywKICAgICAgICAgICAgICAgIGNvbXBvbmVudENvdW50OiAxCiAgICAgICAgICAgIH0KICAgICAgICBdLAogICAgICAgIHRlbXBvcmFsOiB0cnVlCiAgICB9Cn0KZnVuY3Rpb24gZXZhbHVhdGVQaXhlbChzYW1wbGVzKSB7CnZhciBuZHZpID0gW10KCi8vIERlYnVnIHBhcnQgdG8gc2VlIGhvdyBtYW55IHNjZW5lcyB0aGVyZSBhcmUKdmFyIHJlczsKaWYgKHNhbXBsZXMubGVuZ2h0PT0wKSByZXMgPSAwOwplbHNlIGlmIChzYW1wbGVzLmxlbmd0aD09MSkgcmVzID0gMC4yOwplbHNlIGlmIChzYW1wbGVzLmxlbmdodD09MikgcmVzID0gMC40OwplbHNlIGlmIChzYW1wbGVzLmxlbmd0aDw2KSByZXMgPSAwLjY7CmVsc2UgaWYgKHNhbXBsZXMubGVuZ3RoPDEwKSByZXMgPSAwLjg7CmVsc2UgcmVzID0gMTsKcmV0dXJuIHsKICBkZWZhdWx0OiBbcmVzXQp9CgogLy8gRW5kIG9mIGRlYnVnIHBhcnQKCn0%3D&BBOX=50.01521134567398%2C133.4504208984375%2C50.02521134567398%2C133.4664208984375&FORMAT=image%2Ftiff%3Bdepth%3D32f&CRS=EPSG%3A4326&TIME=2017-05-20T02%3A08%3A05%2F2017-05-20T02%3A08%3A05&RESX=10m&RESY=10m&COVERAGE=BANDS-S2-L1C&REQUEST=GetCoverage&VERSION=1.1.2

And the whole debugging output is reposited in https://github.com/geoliuh18/testEOLEARN/blob/master/debug_output1.txt.

Here you are setting the TIME parameter to one day only, so the single time slice in the result is somewhat expected.
If you change this to
TIME=2017-05-20T02%3A08%3A05%2F2017-06-20T02%3A08%3A05
you will get a different result.
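For reference, the TIME value above is just an ISO 8601 start/end pair joined by a slash and URL-encoded. A small sketch (the helper `wcs_time_param` is hypothetical, not part of any Sentinel Hub library):

```python
from urllib.parse import quote

# Hypothetical helper: build the OGC WCS TIME query parameter from an
# ISO 8601 start/end pair; ":" and "/" get percent-encoded.
def wcs_time_param(start, end):
    return "TIME=" + quote(f"{start}/{end}", safe="")

# Single instant -> one scene, samples.length == 1 in evaluatePixel:
print(wcs_time_param("2017-05-20T02:08:05", "2017-05-20T02:08:05"))
# TIME=2017-05-20T02%3A08%3A05%2F2017-05-20T02%3A08%3A05

# Full interval -> multiple scenes, samples.length > 1:
print(wcs_time_param("2017-05-20T02:08:05", "2017-06-20T02:08:05"))
# TIME=2017-05-20T02%3A08%3A05%2F2017-06-20T02%3A08%3A05
```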

With VERSION=2 of the custom script, this setting is no longer necessary in the Configuration utility, as it is passed by the script alone. If you have it in the script, it will behave the same regardless of the setting in the Configuration utility.

@geoliuh18 can you also send an example of the REQUEST URL?

Thank you for your timely answer, which helped me locate the problem. The situation is that, whether using WcsRequest() or eo-learn's S2L1CWCSInput(), the given time interval is automatically split into a list of single dates. Each date is therefore processed separately, and the entire time interval cannot be processed at once, as temporal analysis requires. I have checked the source code, and if this is the case, the current Python API (whether sentinelhub-py or eo-learn) cannot generate an OGC URL that is valid for temporal analysis. That is, for now, we can only construct a valid OGC URL for temporal analysis manually; no Python API can do this for us. Here are some useful screenshots.
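The splitting behaviour can be illustrated with a plain-Python sketch. This is assumed behaviour only; `split_interval` and the fixed 5-day revisit are hypothetical stand-ins for the real acquisition-date lookup in sentinelhub-py:

```python
from datetime import date, timedelta

# Sketch (assumption): the client resolves an interval to individual
# acquisition dates and issues one OGC request per date, so each
# evalscript invocation sees exactly one scene.
def split_interval(start, end, revisit_days=5):
    """Mimic splitting a time interval into per-acquisition single dates."""
    dates, d = [], start
    while d <= end:
        dates.append(d.isoformat())
        d += timedelta(days=revisit_days)
    return dates

requests = [f"TIME={d}/{d}" for d in split_interval(date(2017, 5, 1), date(2017, 5, 20))]
print(requests[0])   # TIME=2017-05-01/2017-05-01 -> samples.length == 1
print(len(requests)) # 4 (one request per resolved date)
```

Passing the whole interval in a single TIME parameter instead, as in the manually built URL above, is what allows evaluatePixel() to see all scenes at once.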

As I mentioned a few days ago, eo-learn and sentinelhub-py have not yet been used for multi-temporal requests. That being said, these packages are open source, so it should be easy to upgrade this part (and, if possible, create a pull request so that others can benefit as well).
