Just checking: is there really no way in Sentinel Hub to do a dynamic directional transform based on solar azimuth? I noticed a neat setup notebook (GEE) that makes use of s2cloudless but adds the idea that, using the sun angle and an NIR value, you can get a pretty good estimate of where the shadows fall.
The best I can think of is a blanket resample of the s2cloudless layer to a lower resolution using the max value. This effectively creates a buffer around the s2cloudless pixels that could capture any shadow. Then, if a pixel inside that buffer falls below the NIR threshold, it is probably shadow (if not cloud).
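A minimal sketch of that idea, assuming a binary cloud mask and an NIR band as NumPy arrays. The block size and NIR threshold are illustrative placeholders, not calibrated values:

```python
import numpy as np

def shadow_candidates(cloud_mask, nir, block=4, nir_threshold=0.15):
    """Coarsen the cloud mask with a block-wise max (a crude buffer),
    then flag dark-NIR pixels inside the buffered area as shadow candidates.
    `block` and `nir_threshold` are illustrative, not tuned values."""
    h, w = cloud_mask.shape
    # pad so the array divides evenly into blocks
    ph, pw = (-h) % block, (-w) % block
    m = np.pad(cloud_mask, ((0, ph), (0, pw)))
    # block-wise max == max-value downsample to a lower resolution
    coarse = m.reshape((h + ph) // block, block,
                       (w + pw) // block, block).max(axis=(1, 3))
    # repeat each coarse cell back up to full resolution
    buffered = np.kron(coarse, np.ones((block, block), dtype=coarse.dtype))[:h, :w]
    # dark NIR inside the buffer, but not already cloud -> probable shadow
    return (buffered == 1) & (nir < nir_threshold) & (cloud_mask == 0)
```

The buffer is isotropic, so it over-flags on the sunward side of clouds; the directional approach discussed below avoids that.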
Thanks for the question. Looking into the dynamic directional transform in the GEE notebook, it is based on pixel neighbourhoods, which is not possible to do through evalscripts.
However, this doesn't mean that you couldn't do this within your own Jupyter Notebook. I am sure that there is an equivalent function in one of the open-source Python libraries that you can apply to the appropriate layer.
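As a hedged sketch of what such a notebook step could look like: shift the cloud mask in the anti-solar direction in pixel steps and union the results, which approximates the directional-transform idea. Distances here are in pixels and the step/range values are placeholders; a real workflow would derive them from cloud height, sun zenith, and pixel size:

```python
import numpy as np
from scipy import ndimage

def project_shadow_mask(cloud_mask, sun_azimuth_deg, max_distance_px=20, step_px=2):
    """Shift a binary cloud mask away from the sun in steps and union the
    shifted copies, approximating a directional distance transform.
    Shadows fall opposite the sun (azimuth measured clockwise from north)."""
    theta = np.deg2rad(sun_azimuth_deg + 180.0)
    # image rows increase southwards, hence the minus sign on the row offset
    dy, dx = -np.cos(theta), np.sin(theta)
    out = np.zeros_like(cloud_mask, dtype=bool)
    for d in range(step_px, max_distance_px + 1, step_px):
        shifted = ndimage.shift(cloud_mask.astype(float), (dy * d, dx * d), order=0)
        out |= shifted > 0.5
    return out
```

Intersecting this projected mask with a dark-NIR threshold would then give the shadow estimate, mirroring the notebook's logic.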
As I’m sure you know, the following bands are available to use when handling the Sentinel-2 L2A data collection:
| Layer | Description | Resolution |
| --- | --- | --- |
| sunAzimuthAngles | Sun azimuth angle | 5000 m |
| sunZenithAngles | Sun zenith angle | 5000 m |
| viewAzimuthMean | Viewing azimuth angle | 5000 m |
| viewZenithMean | Viewing zenith angle | 5000 m |
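The sun-angle bands above are what would feed the shadow geometry. A minimal sketch, assuming a cloud-top height (the height value is a placeholder; real cloud heights vary):

```python
import math

def shadow_offset_metres(sun_zenith_deg, sun_azimuth_deg, cloud_height_m=1000.0):
    """Horizontal displacement of a cloud shadow from its cloud, given the
    sun angles from the metadata bands and an assumed cloud-top height.
    `cloud_height_m` is an illustrative assumption, not a retrieved value."""
    # ground distance from cloud to shadow grows with the sun zenith angle
    d = cloud_height_m * math.tan(math.radians(sun_zenith_deg))
    theta = math.radians(sun_azimuth_deg + 180.0)  # shadow falls away from the sun
    east = d * math.sin(theta)
    north = d * math.cos(theta)
    return east, north
```

Dividing the offsets by the pixel size converts them into the pixel shifts needed for a mask projection.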
Hope this information helps you out; if you have any other questions, we can try to answer them.
I understand that importing more bands and adding a further output increases the multiplier on the processing units used, as per the documentation. I am wondering, though, whether the more complex machine-learning scripts incur a cost for the actual computation? That script is a monster.
This is a very good question. For the moment, it does not, but we will certainly add an execution-time factor one day, probably for scripts taking more than a second to execute.