What I would like to do is generate an image representing the NDVI anomaly of an area.
By NDVI anomaly, I mean the current NDVI compared to the average NDVI for the same period over past years (of course this definition is not precise: how many years, how do I define the same period, etc., but let's begin at the beginning).
Has anyone already done something like that?
Do you have hints on how to do it?
You could take a two-step approach: first use the Statistical API (http://www.sentinel-hub.com/apps/fis-request) to analyze the NDVI behavior over the chosen polygon (e.g. a field). Once you know the average NDVI, you define a custom script that visualizes the comparison between the actual NDVI and the average NDVI, encode this script and pass it as the EVALSCRIPT parameter.
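A minimal sketch of the second step's logic, written as a standalone function for illustration (in a real custom script, B04 and B08 are available directly and the script body returns the colour; the 0.55 average is a made-up value you would obtain from a prior Statistical API request):

```javascript
// Sketch: compare a pixel's current NDVI against a precomputed average.
// avgNDVI (0.55 here) is an assumed value from a prior Statistical API request.
function evaluatePixel(B04, B08) {
  var avgNDVI = 0.55;
  var ndvi = (B08 - B04) / (B08 + B04);
  var anomaly = ndvi - avgNDVI;
  // red where below average, green where above, scaled by magnitude
  return anomaly < 0 ? [-anomaly, 0, 0] : [0, anomaly, 0];
}
```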
You could use the “multi-temporal” feature of Sentinel Hub to do the above in a single step. It might take quite a while to process multiple years of data, though, so I am not sure the result would be useful.
From the compute/cost point of view, it would make sense to calculate the average NDVI once (e.g. once per year?) and store it in a database on your side, along with the field geometry. That way you could do the second step of the first option very fast.
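The storage side can stay very simple; a sketch of the lookup the second step would hit (the field ids and key scheme are purely illustrative):

```javascript
// Sketch: cache precomputed average NDVI per field and year, so the
// comparison step never recomputes the multi-year statistics.
var avgNdviCache = {};

function storeAverage(fieldId, year, avgNdvi) {
  avgNdviCache[fieldId + ":" + year] = avgNdvi;
}

function getAverage(fieldId, year) {
  return avgNdviCache[fieldId + ":" + year]; // undefined if not computed yet
}
```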
If any of the above somehow fits your idea, I can try to help further if needed.
What I did not specify is that the goal is to have an NDVI anomaly picture of a “big” zone, something like 40x40 km, not only a field.
I like the first idea, as I already use the Statistical API to do NDVI anomaly calculation (and time series). So I already have the “averaging” formula.
But as far as I understand, I can only pass one parameter with EVALSCRIPT, which means only one average NDVI value for the whole zone?
How can I manage that?
One option could be to split my 40x40 km zone into a mosaic of 1x1 km squares (1,600 of them), calculate an average NDVI for each square, request an image for each square and present the mosaic to the user.
Pros: I can store each square's average NDVI value.
Cons: This is more work on the client side to build the final image, and there will be discontinuities in the NDVI anomaly from one square to the next.
Does it make sense?
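For what it's worth, generating the mosaic's bounding boxes is the easy part; a sketch assuming a projected CRS with metre units (the function and parameter names are made up):

```javascript
// Sketch: split a square zone into tileKm x tileKm bounding boxes.
// minX/minY are the zone's lower-left corner in a metre-based CRS.
function tileBoundingBoxes(minX, minY, zoneKm, tileKm) {
  var boxes = [];
  for (var x = 0; x < zoneKm; x += tileKm) {
    for (var y = 0; y < zoneKm; y += tileKm) {
      boxes.push([
        minX + x * 1000, minY + y * 1000,
        minX + (x + tileKm) * 1000, minY + (y + tileKm) * 1000
      ]);
    }
  }
  return boxes; // a 40x40 km zone in 1 km tiles yields 1600 boxes
}
```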
Another idea I had was to get all the historical NDVI of the zone, not as images but as raw data (e.g. with FORMAT=application/json), and compute an average NDVI pixel by pixel on my side.
Then I would just request the NDVI of a given date in the same raw format, compute the NDVI anomaly pixel by pixel, and draw the final image.
But this is not simple, and the JSON format does not seem well suited, so for now I do not know how I could do that.
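That said, once the rasters are decoded into plain arrays the per-pixel arithmetic itself is straightforward; a sketch (decoding the service response into these arrays is the hard part and is assumed away here):

```javascript
// Sketch: per-pixel NDVI anomaly. Each raster is a flat array of NDVI
// values covering the same zone at the same resolution.
function perPixelAnomaly(historicalRasters, currentRaster) {
  var anomaly = new Array(currentRaster.length);
  for (var p = 0; p < currentRaster.length; p++) {
    var sum = 0;
    for (var r = 0; r < historicalRasters.length; r++) {
      sum += historicalRasters[r][p];
    }
    anomaly[p] = currentRaster[p] - sum / historicalRasters.length;
  }
  return anomaly;
}
```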
What do you think?
-You could indeed get “zones” in JSON, but these zones will change over time, so you will have a hard time computing an average.
-I am not entirely certain how you plan to calculate an “average NDVI” over a 40x40 km zone. Such a large area will contain many different features (e.g. crops, water, buildings, forest), and the average NDVI of the whole area would not represent much. Or would it? The problem is similar if you calculate 1x1 km zones…
At this point I think it would help if you drew an image of approximately what you would like to get…
What I think is the most relevant question is: do you want to compare the actual NDVI value of each individual pixel to some “static average NDVI value” (static = one value for a whole area, e.g. a 1x1 km square), or do you expect to compare the actual NDVI of each individual pixel with the average NDVI value of that specific pixel?
If the latter, I think you cannot use the Statistical API and you would have to try the multi-temporal processing option.
For this, check this script for calculating “Maximum NDVI over some period of time”:
You can adapt a very similar approach to calculate average NDVI instead of maximum NDVI.
And once you have the average NDVI, you can (in the same script) also compare it with the latest one and visualize it however you like.
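A sketch of that adaptation, in the style of a multi-temporal custom script where `samples` holds one set of band values per acquisition (the assumption that samples[0] is the most recent scene, and the colour mapping, are mine):

```javascript
// Sketch: average NDVI across all scenes, then visualize the anomaly of
// the most recent scene relative to that average.
function evaluatePixel(samples) {
  var sum = 0;
  for (var i = 0; i < samples.length; i++) {
    var s = samples[i];
    sum += (s.B08 - s.B04) / (s.B08 + s.B04);
  }
  var avg = sum / samples.length;

  var latest = samples[0]; // assumed ordering: newest scene first
  var latestNdvi = (latest.B08 - latest.B04) / (latest.B08 + latest.B04);
  var anomaly = latestNdvi - avg;

  // red where below the multi-year average, green where above
  return anomaly < 0 ? [-anomaly, 0, 0] : [0, anomaly, 0];
}
```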
Some notes to consider with this approach:
-this script needs to be run with “TEMPORAL=true” parameter in the request
-the script will take much longer to process, as it will crawl through several (or several tens of) scenes. I suggest you try with smaller tiles (e.g. 1x1 km = 100x100 px at 10 m resolution) and start with shorter time intervals, so you can see how it scales. You might also want to use the “MAXCC=30” (or similar) parameter to filter out scenes that are too cloudy, resulting in faster processing.
-this approach might in general be tricky from the “correctness” point of view, as the average NDVI will be significantly influenced by clouds over a specific pixel, so the result might not reflect the reality on the ground. To avoid this you might want to try to skip pixels where clouds are identified, e.g. using this script:
(but that script is not perfect either…)
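To illustrate the skipping idea only: here is a sketch that uses a simple brightness threshold as a crude stand-in for a real cloud-detection script (the threshold value is an assumption, not a recommendation):

```javascript
// Sketch: average NDVI while skipping samples that look cloud-bright
// in both the red and NIR bands.
function averageNdviCloudFiltered(samples, cloudThreshold) {
  var sum = 0, count = 0;
  for (var i = 0; i < samples.length; i++) {
    var s = samples[i];
    if (s.B04 > cloudThreshold && s.B08 > cloudThreshold) {
      continue; // bright in both bands: likely cloud, skip this scene
    }
    sum += (s.B08 - s.B04) / (s.B08 + s.B04);
    count++;
  }
  return count > 0 ? sum / count : NaN; // NaN if every scene was skipped
}
```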
Let me know if I guessed right or missed some elements, and I will try to help further.