Processing API evalscripts and the Configuration Utility

Hi Sentinel Hub team,

I’m transitioning some of my work from the OGC API to the Sentinel Hub Processing API.
Originally I would set up all my layers with evalscripts in the Configuration Utility and then just use the instance ID and layer name in my Python scripts.

Do I understand correctly that the Processing API doesn’t relate to the Configuration Utility, and that I must therefore always provide evalscripts in my requests - i.e. that I must handle the evalscripts locally and cannot do it via the dashboard?

If this is the case, how do I address the following use case, which I’d usually set up in the Configuration Utility:

Requesting Sentinel-1 VH and VV backscatter for the ascending orbit only, while choosing the backscatter coefficient and orthorectification method?

Dear Barrett,

You can specify all of these parameters in Process API requests. Please take a look at our Process API webinar, which goes into detail about the structure of Process API requests and shows how you can use the Requests Builder to construct a request by selecting your options in the user interface. You will also see that you can send the request directly from the Requests Builder or, if you prefer, run it in Python. The example in the webinar is specifically for Sentinel-1, so you will see how to select the backscatter coefficient, polarization, ascending orbit and orthorectification.
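For illustration, here is a minimal Python sketch of such a Sentinel-1 request (a sketch only - replace the token and bbox placeholders, and verify parameter names such as orbitDirection, backCoeff and orthorectify against the current Process API documentation):

import requests

# Placeholder: obtain an OAuth access token first (see the authentication docs).
token = "<ACCESS_TOKEN>"

# Evalscript returning VV and VH backscatter as two float bands.
evalscript = """
//VERSION=3
function setup() {
  return {
    input: [{ bands: ["VV", "VH"] }],
    output: { bands: 2, sampleType: "FLOAT32" }
  };
}
function evaluatePixel(sample) {
  return [sample.VV, sample.VH];
}
"""

payload = {
    "input": {
        "bounds": {"bbox": [12.4, 41.8, 12.6, 42.0]},  # example bbox in WGS84
        "data": [{
            "type": "S1GRD",
            "dataFilter": {
                "timeRange": {"from": "2020-06-01T00:00:00Z", "to": "2020-06-10T00:00:00Z"},
                "orbitDirection": "ASCENDING",   # ascending orbit only
                "acquisitionMode": "IW",
                "polarization": "DV"             # dual polarization: VV + VH
            },
            "processing": {
                "backCoeff": "GAMMA0_TERRAIN",   # choice of backscatter coefficient
                "orthorectify": True             # orthorectification on
            }
        }]
    },
    "output": {
        "width": 512,
        "height": 512,
        "responses": [{"identifier": "default", "format": {"type": "image/tiff"}}]
    },
    "evalscript": evalscript
}

r = requests.post(
    "https://services.sentinel-hub.com/api/v1/process",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
)
r.raise_for_status()
with open("s1_vv_vh.tif", "wb") as f:
    f.write(r.content)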

About the evalscripts - yes, you have to specify the evalscript in the request itself, but you can copy in the exact same evalscript as you used in the dashboard. The webinar will also cover where to put your evalscripts in the request.

Don’t forget to check the video description for timestamps and relevant links.

I hope this helps.
Best,
Monja

Hi @barrett,

There actually is a Configuration API, which exposes the information in your instances (including the evalscript) as JSON, which you can then parse:
https://services.sentinel-hub.com/configuration/v1/wms/instances/<INSTANCE_ID>/layers
(the request needs to be authenticated - see, for example, the sketch below).
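A rough Python sketch of such an authenticated call (the client ID and secret come from your dashboard; the exact fields of the returned JSON may differ, so inspect it rather than relying on the names used here):

import requests

# OAuth2 client-credentials flow against the documented token endpoint.
token_resp = requests.post(
    "https://services.sentinel-hub.com/oauth/token",
    data={
        "grant_type": "client_credentials",
        "client_id": "<CLIENT_ID>",
        "client_secret": "<CLIENT_SECRET>",
    },
)
token = token_resp.json()["access_token"]

# Fetch all layers of an instance; the evalscripts live inside this JSON.
layers = requests.get(
    "https://services.sentinel-hub.com/configuration/v1/wms/instances/<INSTANCE_ID>/layers",
    headers={"Authorization": f"Bearer {token}"},
).json()

for layer in layers:
    print(layer.get("id"), layer.get("title"))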

That said, some thoughts:

  • We recommend keeping the evalscript (and other parts of the request) in your code, as this makes for a much more controlled environment. It is also easier to version this way (e.g. if you are using Git, SVN or similar internally).
  • The Configuration API is internal in nature and prone to change; when changes happen, we cannot commit to backward compatibility and/or a managed roll-out. It may therefore happen that your script breaks at some point and you have to tweak it. These changes do not happen very often, but when they do, you might find it a bit inconvenient.

As a general rule, everything that you see (and can set) in the Configuration Utility can be set via the Process API directly. This is because the OGC API does exactly the same thing: it gets the configuration from the Configuration Utility, “translates” the request into a Process API request, and executes it… If you do it yourself, you are just skipping one step.

Thanks @gmilcinski and @monja.sebela for the explanations.
I’m also mostly working with eo-learn, so I will see whether I understand your answers and how they apply to eo-learn - if not, you’ll hear back!

Have a good weekend!

Continuing this topic: how can I directly use evalscripts from the custom scripts repository?
The repo says this in the readme:

“This repository contains a collection of custom scripts for Sentinel Hub, which can be fed to the services via the URL.”
(https://github.com/sentinel-hub/custom-scripts/#custom-scripts-repository)

I’ve tested this, and I can’t just provide the URL as the evalscript parameter when using eo-learn. How is this meant to be done?

Cheers
Sam

Hmm, this is a very rarely used feature.
That said, I think the parameter EVALSCRIPTURL should do the trick.
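Something along these lines, perhaps (a sketch: EVALSCRIPTURL is passed as an ordinary query parameter alongside the standard WCS parameters; adjust the values to your instance):

from urllib.parse import urlencode

params = {
    "SERVICE": "WCS",
    "VERSION": "1.0.0",
    "REQUEST": "GetCoverage",
    "COVERAGE": "<LAYER_NAME>",
    "BBOX": "684569,5239875,685176,5240483",
    "CRS": "EPSG:32632",
    "WIDTH": 61,
    "HEIGHT": 61,
    "FORMAT": "image/tiff",
    # The script is fetched by the service from this URL:
    "EVALSCRIPTURL": "https://raw.githubusercontent.com/sentinel-hub/custom-scripts/master/sentinel-2/lai/script.js",
}
url = "https://services.sentinel-hub.com/ogc/wcs/<INSTANCE_ID>?" + urlencode(params)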

If I understand correctly, that is for the WCS service. My use case is using eo-learn with the Processing API…

Currently I’m failing to use evalscripts at all with eo-learn.
I’m trying to use SentinelHubInputTask for Sentinel-2 data, with the evalscript parameter instead of the bands parameter.
I always get this response:

400 Client Error: Bad Request for url: https://services.sentinel-hub.com/api/v1/process
Server response: "{"error":{"status":400,"reason":"Bad Request","message":"Output bands requested but missing from function setup()","code":"COMMON_BAD_PAYLOAD"}}"

The eval script I’m using is just for NDVI (to test):

//VERSION=3

function evaluatePixel(samples) {
  let val = index(samples.B08, samples.B04);
  return [val, samples.dataMask];
}

function setup() {
  return {
    input: [{
      bands: ["B04", "B08", "dataMask"]
    }],
    output: {
      bands: 2
    }
  };
}

If I go into the debugger, request.post_values is as follows:

{
  'input': {
    'bounds': {
      'properties': {'crs': 'http://www.opengis.net/def/crs/EPSG/0/32632'},
      'bbox': [684569.915276757, 5239875.965672306, 685176.9600969265, 5240483.455006348]
    },
    'data': [InputDataDict({
      'type': 'S2L2A',
      'dataFilter': {
        'timeRange': {'from': '2019-08-02T09:17:49Z', 'to': '2019-08-02T11:17:49Z'},
        'maxCloudCoverage': 90,
        'mosaickingOrder': 'mostRecent'
      }
    }, service_url=None)]
  },
  'evalscript': '<the NDVI evalscript shown above>',
  'output': {
    'responses': [
      {'identifier': 'bands', 'format': {'type': 'image/tiff'}},
      {'identifier': 'bool_mask', 'format': {'type': 'image/tiff'}},
      {'identifier': 'userdata', 'format': {'type': 'application/json'}}
    ],
    'width': 61,
    'height': 61
  }
}

I have no problem with this request if I just use bands rather than evalscript…

Any idea what I’m doing wrong?

EVALSCRIPTURL does indeed work only with WCS.
For the Process API it is best practice to keep the evalscript stored in your environment, for versioning purposes.

(and just in case it is not clear, the Process API is the strongly recommended option over WCS)

The problem in your case is that you have several outputs defined in the request (“bands”, “bool_mask”, “userdata” - check the “identifier” fields), whereas the evalscript does not specify which output is which; therefore you get the error “Output bands requested but missing…” (it is perhaps best to avoid the term “bands” for identifiers, as it gets confusing).

I suggest checking this example.

That said, if I modify your request a bit:

{
  "input": {
    "bounds": {
      "properties": {
        "crs": "http://www.opengis.net/def/crs/EPSG/0/32632"
      },
      "bbox": [684569.915276757, 5239875.965672306, 685176.9600969265, 5240483.455006348]
    },
    "data": [
      {
        "type": "S2L2A",
        "dataFilter": {
          "timeRange": {
            "from": "2019-08-02T11:17:49Z",
            "to": "2019-08-02T11:17:49Z"
          }
        }
      }
    ]
  },
  "output": {
    "width": 61,
    "height": 61,
    "responses": [
      {
        "identifier": "bands",
        "format": { "type": "image/png" }
      },
      {
        "identifier": "bool_mask",
        "format": { "type": "image/tiff" }
      },
      {
        "identifier": "userdata",
        "format": { "type": "application/json" }
      }
    ]
  }
}

and EVALSCRIPT:

//VERSION=3
function evaluatePixel(samples) {
  let val = index(samples.B08, samples.B04);
  return {
    bands: [val],
    bool_mask: [samples.dataMask]
  };
}
function setup() {
  return {
    input: [{
      bands: ["B04", "B08", "dataMask"]
    }],
    output: [
      { id: "bands", bands: 1 },
      { id: "bool_mask", bands: 1 }
    ]
  };
}

it works for me

So I don’t get the previous error any more, but I still can’t get it to work with eo-learn.

If I understand the code correctly, when using the Processing API you can specify which bands you want to download; if you do this and don’t specify an evalscript, one is generated for you. However, if you provide an evalscript which doesn’t need all the bands (e.g. the one above) and leave the bands parameter unspecified, it attempts to request all the bands and fails in processing_api._extract_array. Apparently it tries to extract data for every band, because it thinks it is requesting all Sentinel-2 bands, but the evalscript only returns one band (as intended), so it ends up referencing a band of the returned image which doesn’t exist.

Alternatively, I can manually include the bands parameter again, e.g. in this case ["B08", "B04"] in the SentinelHubInputTask, but then I always get completely empty responses (all NaNs).

Again, the use case is simply being able to use evalscripts with the Processing API via eo-learn. This always worked when using evalscripts configured through the dashboard with the WCS API via eo-learn.
Am I going about this all wrong?

It’s quite complex, I know. Sorry about that.

In general, you have to define all the outputs in the request. There can be as many outputs as you want; each has a name (“identifier”) and a format.
In the evalscript you then define what exactly goes to each of these outputs.

If you have just one output, you can name it “default”, and in this case you do not need to specify which output goes where, as we will try to auto-magically resolve it. This is just a simplification for the most common use cases. However, I can imagine that once you move towards more complex options, it can get confusing.
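For illustration, a minimal single-output sketch (mirroring the NDVI example above; when setup() declares a single unnamed output, it is matched to the response with identifier "default"):

# One response named "default" in the request...
responses = [{"identifier": "default", "format": {"type": "image/tiff"}}]

# ...and an evalscript that just returns an array, with no output mapping:
evalscript = """
//VERSION=3
function setup() {
  return {
    input: [{ bands: ["B04", "B08", "dataMask"] }],
    output: { bands: 2 }
  };
}
function evaluatePixel(samples) {
  return [index(samples.B08, samples.B04), samples.dataMask];
}
"""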

In your example you have named one output “bands”, which might cause additional confusion… I suggest avoiding that.

eo-learn was previously using WCS, but as far as I know it has now mostly been ported to the Process API. If you describe what exactly you would like to do (for one example), I can ask one of our guys to take a look.
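(A note for later readers: newer eo-learn releases added a dedicated task for exactly this use case. A rough sketch, assuming your version provides SentinelHubEvalscriptTask and that evalscript output ids must match the feature names - verify both against your installed version:)

from eolearn.core import FeatureType
from eolearn.io import SentinelHubEvalscriptTask
from sentinelhub import BBox, CRS, DataCollection

# Evalscript with named outputs; each id is mapped to an EOPatch feature below.
evalscript = """
//VERSION=3
function setup() {
  return {
    input: [{ bands: ["B04", "B08", "dataMask"] }],
    output: [
      { id: "NDVI", bands: 1 },
      { id: "dataMask", bands: 1 }
    ]
  };
}
function evaluatePixel(samples) {
  return {
    NDVI: [index(samples.B08, samples.B04)],
    dataMask: [samples.dataMask]
  };
}
"""

task = SentinelHubEvalscriptTask(
    features=[(FeatureType.DATA, "NDVI"), (FeatureType.MASK, "dataMask")],
    evalscript=evalscript,
    data_collection=DataCollection.SENTINEL2_L2A,
    resolution=10,
)
eopatch = task.execute(
    bbox=BBox([684569.9, 5239875.9, 685176.9, 5240483.4], crs=CRS.UTM_32N),
    time_interval=("2019-08-01", "2019-08-03"),
)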

Simple example use case:
I want to make EOPatches with LAI data using the LAI evalscript from the custom scripts repo (using the Process API through eo-learn).

Broader use case:
As above, but instead of LAI, anything from the custom scripts repo, with minimal (preferably no) changes required to the basic evalscript from the repo.

Basically, I could do this with the old system via WCS: if I wanted a new layer, I could just find it in the repo, set up a new layer using the Configuration Utility, paste the evalscript into the layer script box, and that was it.

Is that clear?

“Basically, I could do this with the old system via WCS: if I wanted a new layer, I could just find it in the repo, set up a new layer using the Configuration Utility, paste the evalscript into the layer script box, and that was it.”

If you are using Python, you can get the raw evalscript from GitHub (for example https://raw.githubusercontent.com/sentinel-hub/custom-scripts/master/sentinel-2/lai/script.js - click “Raw” in the header of the file on GitHub and copy the URL):

Then you can fetch it with requests in Python:

import requests

# Fetch the raw evalscript text from GitHub
r = requests.get('https://raw.githubusercontent.com/sentinel-hub/custom-scripts/master/sentinel-2/lai/script.js')
evalscript = r.text

You can save the URLs to the raw GitHub files in a dictionary in Python, or you can save the evalscripts in files locally and then import them into the project (to avoid making requests to GitHub).
And finally, use the chosen evalscript (and other parameters, which can also be saved somewhere locally and imported) to make a request to Sentinel Hub.

In other words, you can create your own list of “layers” by combining an evalscript and other parameters into locally saved file(s) and importing them into the project.
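For example, a minimal sketch of such a local “layer” registry (the file layout and names here are just one possible convention, not anything built into the packages):

import json
from pathlib import Path

# Hypothetical layout: layers/<name>/script.js + layers/<name>/params.json
LAYERS_DIR = Path("layers")

def load_layer(name):
    """Return (evalscript, params) for a locally stored "layer"."""
    layer_dir = LAYERS_DIR / name
    evalscript = (layer_dir / "script.js").read_text()
    params = json.loads((layer_dir / "params.json").read_text())
    return evalscript, params

# e.g. layers/lai/script.js pasted from the custom-scripts repo, and
# layers/lai/params.json holding {"data_collection": "S2L2A", "maxcc": 0.9, ...}
evalscript, params = load_layer("lai")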

You can still set the evalscript and (some) other parameters in the Configuration Utility, but you would still need to fetch those parameters to make a request to the Processing API, since the Processing API relies on the user to provide the parameters (even when used with sentinelhub-py or eo-learn), contrary to WMS / WCS, where the parameters for a layer are saved on the service.

Layer configuration (evalscript and parameters) can be retrieved from Sentinel Hub, but I think that is not part of the sentinelhub-py or eo-learn packages.
Making an authenticated request to https://services.sentinel-hub.com/configuration/v1/wms/instances/<instance-id>/layers/<layer-id> returns most or all of the parameters needed to make a request to the Processing API (see the docs for this request).

Hope this helps in some way.

That is more or less what I’m trying to do. The problem is that I can’t get the evalscripts to work with eo-learn at all, let alone just taking the standard scripts from the custom scripts repo.

Perhaps I can share my code (a minimal example) by email and we can discuss further?

Hi @barrett,

Sorry for the issues you are having. Feel free to send me your evalscript/code (in a DM) and we’ll have a look.

That being said, we will also update the examples in eo-learn to help users with questions like yours.