Sentinel Hub processing API

We have been working on sentinelhub-py support for the Processing API and would be happy to get some feedback. The implementation is published on the feat/processing-api branch.

We’ve also created a SentinelHubProcessingInput EOTask in eo-learn, published in the feat/processing-api branch. To see an example of its use, check the ProcessingIO.ipynb notebook.


@iosvn - can you provide some pointer or example on how to get data in 20 meter resolution?

As far as I understand, with the Processing API you don’t define a resolution explicitly. You get the desired resolution by using the right combination of bounding box and image dimensions. So in the current ipynb example, a 1000 x 1000 m bounding box and a 100 x 100 px image size give you 10 m resolution; for the same bounding box, you would just request a 50 x 50 px image to get 20 m. I hope that someone working on the service side can confirm this or correct me if I’m wrong.
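To make the relation concrete, here is a minimal sketch of the arithmetic described above. The helper name is hypothetical (not part of sentinelhub-py); it assumes a projected, metre-based CRS such as UTM:

```python
# Hypothetical helper, not a sentinelhub-py function: derive image dimensions
# from a metre-based bounding box and a target resolution.
def image_size(bbox_width_m, bbox_height_m, resolution_m):
    """Return (width_px, height_px) for a bbox in a metre-based CRS."""
    return (round(bbox_width_m / resolution_m),
            round(bbox_height_m / resolution_m))

print(image_size(1000, 1000, 10))  # -> (100, 100), the 10 m example
print(image_size(1000, 1000, 20))  # -> (50, 50), i.e. a 20 m request
```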

You are right in terms of the Sentinel Hub processing API.

But how is this handled in sentinelhub-py? It would be good if width/height were calculated automatically from the desired resolution.

Currently, sentinelhub-py doesn’t support the resolution parameter, and it would be interesting to know why the Processing API doesn’t either. It wouldn’t be a problem for us to implement resolution parameter support in sentinelhub-py; it’s a minor thing. But if having different parameter sets in the Processing API and the Python package became a general development pattern, the two could diverge over time and reduce transparency.

This touches a more general topic on which we have different views within our group, so user feedback and feature requests would be welcome to help us make good design decisions. Some of us think sentinelhub-py should expose the Processing API as transparently as possible, whereas others think sentinelhub-py should also do some auto-magic on top of the API. A lightweight Python package that maps 1:1 to the Processing API would be easy to maintain and transparent: one could understand the Python package just by reading the Processing API documentation. On the other hand, that approach means even minor features like this one have to be discussed and resolved by many people, and changes are required on the service side as well.

What are your thoughts on this?

The rationale for not (yet) supporting resolution in Processing API is the experience from the existing OGC services and how people are using resolution there.
The resolution depends significantly on the chosen coordinate system, and for some an exact transform is not even possible. E.g. converting “10 m”, which was used very often, to “EPSG:4326”, where the metre-to-lat/lon conversion depends on where in the world you are. And if you are processing a large request, this changes across the request…
The problem with the API supporting these conversions is that the API is a black box, and people do not know what is actually happening there.

This is why we have decided that, at least for the beginning, we would rather see the development of “SDKs” which do these conversions for users, sentinelhub-py being one of them. If you implement the conversions there, users will be able to see, within the library itself, what is happening and what the accuracy of the conversion is, and change it if needed.
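As a rough sketch of the kind of conversion an SDK would make visible, consider “10 m” in EPSG:4326. The function name and spherical-Earth approximation below are illustrative assumptions, not sentinelhub-py code:

```python
import math

EARTH_RADIUS_M = 6371000.0  # spherical approximation, assumed for illustration

def metres_to_degrees(resolution_m, latitude_deg):
    """Approximate (d_lon, d_lat) in degrees for a resolution given in metres.

    A degree of latitude is roughly constant, but a degree of longitude
    shrinks with cos(latitude) -- which is why "10 m" has no single
    EPSG:4326 answer and varies across a large request.
    """
    m_per_deg_lat = math.pi * EARTH_RADIUS_M / 180.0  # ~111 km
    m_per_deg_lon = m_per_deg_lat * math.cos(math.radians(latitude_deg))
    return resolution_m / m_per_deg_lon, resolution_m / m_per_deg_lat

# The same 10 m pixel spans more longitude degrees at higher latitudes:
print(metres_to_degrees(10, 0))   # equator
print(metres_to_degrees(10, 60))  # lon step is twice as large here
```

Keeping this logic in the library, rather than behind the service, lets users inspect and override the approximation.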

Therefore, for this specific feature, please do include it in the “core” features of sentinelhub-py.

The Sentinel Hub Processing API might eventually support providing resolution instead of width/height, but it will almost certainly only support resolution in the request’s own CRS. Therefore the “10 metres” to “lat/lon” conversion will still have to be supported by sentinelhub-py.

Ok, I get it. I’ve added this feature to our to-do list.