Hello everyone.
Using Python, I need to extract statistical data for very small areas (tree crowns), but I have to do this for many trees (about 10,000) and, for each tree, I need data for many days (from 2017-07-01 to today, with P1D aggregation).
Through the Request Builder I prepared a single API request (see below) and I do get the data, but since I have to make many requests over very long time ranges, I would like to reduce the number of processing units (PU) consumed by each request.
Could someone kindly help me understand which parameters I should adjust?
In particular:
- BBox question:
For statistical requests, is it necessary to include a bbox in the request? If I also send a bbox (in addition to the geometry), does the request consume less PU?
- ResX/ResY question:
To get accurate statistics for very small areas like mine (at most 2500 m²), what is the best way to set the resX and resY values?
I currently leave the defaults, but I am not sure that is the best choice, because I do not understand how these values influence the statistics (I do not need the image itself).
So, which resX and resY values do you recommend? (A sketch of what I had in mind follows this list.)
- PU question:
Is there a way to know how much PU a single request consumed? That way I could experiment and figure out how to consume less. (After the full request below I show how I was planning to read this from the response.)
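To make the ResX/ResY question concrete, here is the alternative aggregation block I was considering as a drop-in replacement for the one in the full request below. It is untested: the 5 × 5 pixel count is just my own guess, based on B04/B08 having a native resolution of 10 m and my largest polygons being about 2500 m² (roughly 50 m across). If I understand correctly, resx/resy are expressed in the units of the request CRS (so degrees for my EPSG:4326 geometry), which is why I sketched it with a small width/height instead:

aggregation = {
    "timeRange": {
        "from": "2017-07-01T00:00:00Z",
        "to": "2023-05-04T23:59:59Z"
    },
    "aggregationInterval": {"of": "P1D"},
    # My largest crowns are ~2500 m2 (~50 m across); at the native 10 m
    # resolution of B04/B08 that is only about 5 x 5 pixels, far fewer
    # than the ~512 x 514 that Request Builder generated for me.
    "width": 5,
    "height": 5,
    "evalscript": evalscript,  # the same evalscript as in the full request
}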
Thank you very much to anyone who can help me, or who can point me to links where I can find more information.
Greetings everyone,
Giampiero
-------------------------
The requests look like this:
import requests

url = "https://services.sentinel-hub.com/api/v1/statistics"

headers = {
    "Authorization": "<by auth>",  # OAuth bearer token
    "Accept": "application/json",
    "Content-Type": "application/json"
}
# Evalscript: outputs NDVI together with the raw B08 (NIR) and B04 (red)
# values, plus a dataMask band so that no-data pixels can be excluded
# from the statistics.
evalscript = """//VERSION=3
function setup() {
  return {
    input: [{
      bands: ["B04", "B08", "dataMask"]
    }],
    output: [
      { id: "myData", bands: ["NDVI", "B08_NIR", "B04_RED"] },
      { id: "dataMask", bands: 1 }
    ]
  };
}

function evaluatePixel(samples) {
  let ndvi = (samples.B08 - samples.B04) / (samples.B08 + samples.B04);
  return {
    myData: [ndvi, samples.B08, samples.B04],
    dataMask: [samples.dataMask]
  };
}
"""

data = {
    "input": {
        "bounds": {
            "geometry": {
                "type": "Polygon",
                "coordinates": [
                    [
                        [13.4990155251312,45.980966589012375],[13.499047427833514,45.98096306634004],[13.499076207665873,45.980952843147996],[13.499099047450867,45.98093692015681],[13.499113711471212,45.980916856023825],
                        [13.49911876431732,45.980894614769475],[13.499113711392582,45.98087237352394],[13.499099047323647,45.98085230941404],[13.499076207538648,45.98083638645138],[13.499047427754881,45.98082616328244],
                        [13.4990155251312,45.980822640618925],[13.49898362250752,45.98082616328244],[13.498954842723753,45.98083638645138],[13.498932002938755,45.98085230941404],[13.49891733886982,45.98087237352394],
                        [13.498912285945082,45.980894614769475],[13.49891733879119,45.980916856023825],[13.498932002811534,45.98093692015681],[13.498954842596529,45.980952843147996],[13.498983622428888,45.98096306634004],
                        [13.4990155251312,45.980966589012375]
                    ]
                ]
            }
        },
        "data": [
            {
                "dataFilter": {},
                "type": "sentinel-2-l2a"
            }
        ]
    },
    "aggregation": {
        "timeRange": {
            "from": "2017-07-01T00:00:00Z",
            "to": "2023-05-04T23:59:59Z"
        },
        "aggregationInterval": {
            "of": "P1D"
        },
        # width/height are pixel counts and should be integers; Request
        # Builder generated 513.666 for the height, which I rounded here.
        "width": 512,
        "height": 514,
        "evalscript": evalscript
    },
    "calculations": {
        "default": {}
    }
}

response = requests.post(url, headers=headers, json=data)
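
For the PU question, this is how I was planning to measure the consumption of a single request. The x-processingunits-spent header name is taken from the Sentinel Hub documentation, but I have not confirmed it is always present, so I read it defensively:

# Raise on HTTP errors, then read the PU charged for this request from
# the response headers (header name from the docs; None if absent).
response.raise_for_status()
pu_spent = response.headers.get("x-processingunits-spent")
print("PU charged for this request:", pu_spent)

stats = response.json()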