Processing API eval scripts and the Configuration Utility

Hi Sentinel Hub team,

I’m transitioning some of my work from using the OGC API to the SenHub Processing API.
Originally I would set up all my layers with eval scripts in the Configuration Utility and then just use the instance ID and layer name in my Python scripts.

Do I understand correctly that the Processing API doesn’t relate to the Configuration Utility and that I must therefore always provide eval scripts in my requests - i.e. I must handle the eval scripts locally and I cannot do it via the dashboard?

If this is the case, how do I address the following use case, which I’d usually set up in the Configuration Utility:

Requesting Sentinel 1 VH and VV backscatter for ascending orbit only, while choosing the backscatter coefficient and orthorectification method?

Dear Barrett,

You can specify all these parameters in process API requests. Please take a look at our process API webinar, which goes into detail about the structure of process API requests, and will show you how you can use the Requests Builder to construct a request by selecting your options in the user interface. You will also see that you can send the request directly from the Requests Builder, or if you prefer, run it in Python. The example in the webinar is specifically for Sentinel-1, so you will see how to select the backscattering coefficient, polarization, ascending orbit and orthorectification.
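To give a concrete starting point, here is a rough sketch (in Python) of what the body of such a process API request could look like for this use case. The bbox, time range and output size are placeholders; the parameter names (orbitDirection, backCoeff, orthorectify) are the ones documented for Sentinel-1 GRD in the process API:

```python
# Sketch of a process API request body for Sentinel-1 GRD with
# ascending-orbit filtering, a chosen backscatter coefficient and
# orthorectification. Bbox and time range below are placeholder values.

evalscript = """
//VERSION=3
function setup() {
  return {
    input: [{ bands: ["VV", "VH"] }],
    output: { bands: 2 }
  };
}
function evaluatePixel(sample) {
  return [sample.VV, sample.VH];
}
"""

payload = {
    "input": {
        "bounds": {"bbox": [13.0, 45.0, 13.1, 45.1]},  # placeholder bbox
        "data": [{
            "type": "S1GRD",
            "dataFilter": {
                "timeRange": {"from": "2020-06-01T00:00:00Z",
                              "to": "2020-06-30T23:59:59Z"},
                "orbitDirection": "ASCENDING",
            },
            "processing": {
                # or BETA0 / SIGMA0_ELLIPSOID / GAMMA0_ELLIPSOID
                "backCoeff": "GAMMA0_TERRAIN",
                "orthorectify": True,
            },
        }],
    },
    "evalscript": evalscript,
    "output": {"width": 512, "height": 512},
}
```

You can paste the same structure into the Requests Builder, or post it with any HTTP client once you have an OAuth token.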

About the evalscripts - yes, you have to specify the evalscript in the request itself, but you can copy in the exact same evalscript as you used in the dashboard. The webinar will also cover where to put your evalscripts in the request.

Don’t forget to check the video description for timestamps and relevant links.

I hope this helps.
Best,
Monja

Hi @barrett,

There is actually a configuration API, which exposes the information in your instances (including the EVALSCRIPT) as JSON, which you can then parse:
https://services.sentinel-hub.com/configuration/v1/wms/instances/<INSTANCE_ID>/layers
(the request needs to be authenticated, like for example this one).
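To sketch how one might parse that response in Python - note that the exact field names used here (“id”, “styles”, “evalScript”) are assumptions based on what the instance JSON can look like, and may differ in your instances:

```python
import json

# Hypothetical parsing of the configuration API layer listing.
# The field names ("id", "styles", "evalScript") are assumptions and,
# per the caveats below, subject to change. The sample stands in for
# the authenticated response body.

sample_response = json.dumps([
    {"id": "NDVI-LAYER", "title": "NDVI",
     "styles": [{"name": "default", "evalScript": "//VERSION=3\n// ..."}]},
])

layers = json.loads(sample_response)
evalscripts = {
    layer["id"]: layer["styles"][0].get("evalScript")
    for layer in layers
    if layer.get("styles")
}
```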

That said, some thoughts:

  • We recommend keeping the EVALSCRIPT (and the other parts of your requests) in your code, as this makes for a much more controlled environment. It is also easier to version these (e.g. if you are using Git or SVN or similar internally).
  • The API is internal in nature and prone to changes, during which we cannot commit to backward compatibility and/or a managed roll-out. It may therefore happen that your script breaks at some point and you have to tweak it. These changes do not happen very often, but when they do, you might find it a bit inconvenient.

And as a general rule, everything that you see (and can set) in the Configuration utility can also be set via process API directly. This is because the OGC API does exactly the same thing - it gets the configuration from the Configuration utility, “translates” the request into a process API request and executes it… If you do it yourself, you are just skipping one step.
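Purely as an illustration of that “translation” step (this is not the actual internal code, and the function name is made up), one can think of it as a small mapping from OGC-style parameters plus the layer’s evalscript into a process API body:

```python
# Illustrative only: the kind of mapping the OGC endpoint performs
# internally, combining WMS-style parameters with the layer's evalscript
# into a process API request body.

def ogc_to_process(bbox, width, height, time_from, time_to,
                   collection, evalscript):
    return {
        "input": {
            "bounds": {"bbox": list(bbox)},
            "data": [{
                "type": collection,
                "dataFilter": {
                    "timeRange": {"from": time_from, "to": time_to},
                },
            }],
        },
        "evalscript": evalscript,
        "output": {"width": width, "height": height},
    }

body = ogc_to_process(
    bbox=(13.0, 45.0, 14.0, 46.0),
    width=256, height=256,
    time_from="2020-01-01T00:00:00Z", time_to="2020-01-31T23:59:59Z",
    collection="S2L2A",
    evalscript="//VERSION=3",
)
```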

Thanks @gmilcinski and @monja.sebela for the explanations.
I’m also mostly working with EO-learn, so I will see if I understand your answers and how they apply to EO-learn - if not, you’ll hear back!

Have a good weekend!

Continuing this topic, how can I directly use eval scripts from the custom scripts repository?
The repo says this in the readme:

“This repository contains a collection of custom scripts for Sentinel Hub, which can be fed to the services via the URL.”
(https://github.com/sentinel-hub/custom-scripts/#custom-scripts-repository)

I’ve tested, and I can’t just provide the URL as the evalscript parameter when using eolearn. How is this meant to be done?

Cheers
Sam

Hmm, this is a super rarely used feature.
That said, I think the parameter EVALSCRIPTURL should do the trick.

If I understand correctly, that is for the WCS service. My use case is using EO-learn with the processing API…

Currently I’m failing to use eval scripts at all with EO-learn.
I’m trying to use “SentinelHubInputTask” for Sentinel 2 data using the evalscript parameter instead of the bands parameter.
I always get this response:

400 Client Error: Bad Request for url: https://services.sentinel-hub.com/api/v1/process
Server response: "{"error":{"status":400,"reason":"Bad Request","message":"Output bands requested but missing from function setup()","code":"COMMON_BAD_PAYLOAD"}}"

The eval script I’m using is just for NDVI (to test):

//VERSION=3

function evaluatePixel(samples) {
  let val = index(samples.B08, samples.B04);
  return [val, samples.dataMask];
}

function setup() {
  return {
    input: [{
      bands: ["B04", "B08", "dataMask"]
    }],
    output: {
      bands: 2
    }
  };
}

If I go into the debugging, the request.post_values is as follows:

{'input': {'bounds': {'properties': {'crs': 'http://www.opengis.net/def/crs/EPSG/0/32632'},
                      'bbox': [684569.915276757, 5239875.965672306, 685176.9600969265, 5240483.455006348]},
           'data': [InputDataDict({'type': 'S2L2A',
                                   'dataFilter': {'timeRange': {'from': '2019-08-02T09:17:49Z',
                                                                'to': '2019-08-02T11:17:49Z'},
                                                  'maxCloudCoverage': 90,
                                                  'mosaickingOrder': 'mostRecent'}},
                                  service_url=None)]},
 'evalscript': '//VERSION=3\n \n function evaluatePixel(samples) {\n let val = index(samples.B08, samples.B04);\n return [val, samples.dataMask];\n }\n \n function setup() {\n return {\n input: [{\n bands: [\n "B04",\n "B08",\n "dataMask"\n ]\n }],\n output: {\n bands: 2\n }\n }\n } ',
 'output': {'responses': [{'identifier': 'bands', 'format': {'type': 'image/tiff'}},
                          {'identifier': 'bool_mask', 'format': {'type': 'image/tiff'}},
                          {'identifier': 'userdata', 'format': {'type': 'application/json'}}],
            'width': 61, 'height': 61}}

I have no problem with this request if I just use bands rather than evalscript…

Any idea what I’m doing wrong?

EVALSCRIPTURL does indeed work only with WCS.
For process API it is best practice to keep the EVALSCRIPT stored in your environment, for versioning purposes.

(and just in case it is not clear, processAPI is a strongly recommended option, rather than WCS)
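That said, if you want to start from a script in the custom-scripts repository with the process API, one simple approach is to download the raw source yourself once and pass it on as a plain string. A minimal sketch (the URL below is a placeholder for a raw file URL; the fetcher is injectable so the demonstration runs without network access):

```python
import io
from urllib.request import urlopen

def load_evalscript(raw_url, opener=urlopen):
    """Fetch an evalscript's raw source and return it as a string."""
    with opener(raw_url) as resp:
        return resp.read().decode("utf-8")

# Offline demonstration with a stubbed opener (no network involved):
demo = load_evalscript("https://example.com/script.js",
                       opener=lambda url: io.BytesIO(b"//VERSION=3"))
```

The returned string can then go straight into the evalscript field of the request (and into version control, per the advice above).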

The problem in your case is that you have several outputs defined in the request (“bands”, “bool_mask”, “userdata” - check the “identifier” fields), whereas the EVALSCRIPT does not specify which output is which, so you get the error “Output bands requested but missing…” (it is perhaps best to avoid the term “bands” for identifiers, as it gets confusing).

I suggest checking this example.
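In other words, every “identifier” listed under output/responses must match an output “id” declared in the evalscript’s setup() (userdata being a special case, filled from updateOutputMetadata()). A tiny hypothetical helper to illustrate the rule - the function name is made up for this example:

```python
def missing_outputs(response_identifiers, setup_output_ids):
    """Return the response identifiers with no matching evalscript output id."""
    declared = set(setup_output_ids)
    return [r for r in response_identifiers if r not in declared]

# The failing request declared "bands" and "bool_mask" responses, while the
# original evalscript only had a single unnamed output (id "default"):
print(missing_outputs(["bands", "bool_mask"], ["default"]))             # both missing
print(missing_outputs(["bands", "bool_mask"], ["bands", "bool_mask"]))  # []
```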

That said, if I modify your request a bit:

{
    "input": {
        "bounds": {
            "properties": {
                "crs": "http://www.opengis.net/def/crs/EPSG/0/32632"
            },
            "bbox": [
                684569.915276757, 5239875.965672306, 685176.9600969265, 5240483.455006348
            ]
        },
        "data": [
            {
                "type": "S2L2A",
                "dataFilter": {
                    "timeRange": {
                        "from": "2019-08-02T11:17:49Z",
                        "to": "2019-08-02T11:17:49Z"
                    }
                }
            }
        ]
    },
    "output": {
        "width": 61,
        "height": 61,
        "responses": [
            {
                "identifier": "bands",
                "format": {
                    "type": "image/png"
                }
            },
            {
                "identifier": "bool_mask",
                "format": {
                    "type": "image/tiff"
                }
            },
            {
                "identifier": "userdata",
                "format": {
                    "type": "application/json"
                }
            }
        ]
    }
}

and EVALSCRIPT:

//VERSION=3
function evaluatePixel(samples) {
  let val = index(samples.B08, samples.B04);
  return {
    bands: val,
    bool_mask: samples.dataMask
  };
}
function setup() {
  return {
    input: [{
      bands: ["B04", "B08", "dataMask"]
    }],
    output: [
      {id: "bands", bands: 2},
      {id: "bool_mask", bands: 1}
    ]
  };
}

it works for me.

So I don’t get the previous error any more, but I still don’t get it to work with eo-learn.

If I understand the code correctly, when using the processing API you can specify which bands you want to download. If you do this and don’t specify an evalscript, one is generated for you. However, if you provide an evalscript which doesn’t need all the bands (e.g. the one above) and leave the bands parameter unspecified, it attempts to request all the bands and fails in processing_api._extract_array. Apparently it tries to extract data for all the different bands, because it thinks it is requesting all Sentinel 2 bands, but the evalscript only returns 1 band (as intended), so it ends up referencing a band of the returned image which doesn’t exist.

Alternatively, I can manually include the bands parameter again, e.g. in this case ["B08", "B04"] in the SentinelHubInputTask, but then I always get completely empty responses (all NaNs).

Again, the use case is simply being able to use evalscripts with the processing API via eo-learn. This always worked when using evalscripts configured through the dashboard with the WCS API via eo-learn.
Am I going about this all wrong?