Hi,
I've tried sending two requests that differ only in the sampleType: FLOAT32 in one and AUTO in the other.
The request with sampleType FLOAT32 costs 13.097086978865264 processing units, while the request with sampleType AUTO costs only 6.548543489432632 processing units.
However, in both cases the resulting statistics are identical:

"min": 0.014415781944990158,
"max": 0.8943591713905334,
"mean": 0.41468574880270276
Shouldn't we expect some difference in the statistics given the different sampleTypes? And what causes the different PU cost when the results are the same?
Evalscript in the request:
"evalscript": "//VERSION=3\n\nfunction setup() {\n return {\n input: [{bands: [\"B04\", \"B08\", \"dataMask\", \"CLM\", \"SCL\"]}],\n\toutput: [\n {id: \"data\", bands: 1, sampleType: \"FLOAT32\"},\n {id: \"dataMask\", bands: 1}\n ]\n }\n}\nfunction evaluatePixel(samples) {\n \n // masking cloudy pixels\n let combinedMask = samples.dataMask\n if (samples.CLM > 0 || samples.SCL === 1 || samples.SCL === 3 || samples.SCL === 8 || samples.SCL === 9 || samples.SCL === 10 || samples.SCL === 11) {\n combinedMask = 0;\n }\n\n return {\n data: [index(samples.B08, samples.B04)],\n dataMask: [combinedMask]\n }\n}"
where the output sampleType is varied between AUTO and FLOAT32.
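To be explicit about what "varied" means here, the two requests were generated from the same evalscript with only the sampleType token swapped; a minimal Python sketch (the placeholder name `__SAMPLE_TYPE__` and the helper function are my own, and the surrounding request payload is omitted):

```python
# Evalscript with a placeholder for the output sampleType;
# everything else is byte-for-byte identical between the two requests.
EVALSCRIPT_TEMPLATE = """//VERSION=3
function setup() {
  return {
    input: [{bands: ["B04", "B08", "dataMask", "CLM", "SCL"]}],
    output: [
      {id: "data", bands: 1, sampleType: "__SAMPLE_TYPE__"},
      {id: "dataMask", bands: 1}
    ]
  }
}
function evaluatePixel(samples) {
  let combinedMask = samples.dataMask
  if (samples.CLM > 0 || samples.SCL === 1 || samples.SCL === 3 || samples.SCL === 8 || samples.SCL === 9 || samples.SCL === 10 || samples.SCL === 11) {
    combinedMask = 0;
  }
  return {
    data: [index(samples.B08, samples.B04)],
    dataMask: [combinedMask]
  }
}"""

def make_evalscript(sample_type: str) -> str:
    """Fill in the output sampleType, e.g. "FLOAT32" or "AUTO"."""
    return EVALSCRIPT_TEMPLATE.replace("__SAMPLE_TYPE__", sample_type)

float32_script = make_evalscript("FLOAT32")
auto_script = make_evalscript("AUTO")

# Sanity check: the two evalscripts differ only in the sampleType token.
assert float32_script.replace("FLOAT32", "AUTO") == auto_script
```

This is just to rule out any accidental difference between the two requests beyond the sampleType itself.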
I have also tried different areas and CRSs (EPSG codes), but the same thing happens.