Imagery scenes not downloading

We are having a problem with inconsistent downloading of scene tiles. Very frequently, imagery seems to be available on the ESA site but not on AWS.

The current issue is scene tile 17SKR and surrounding scenes on 6/6/18 and 6/11/18. They do show up in quicklooks, but we cannot access them through our script.

Lanny

You might need to be more specific in your description of the problem. Both of these granules are available on AWS:
http://sentinel-s2-l1c.s3-website.eu-central-1.amazonaws.com/#tiles/17/S/KR/2018/6/11/0/
http://sentinel-s2-l1c.s3-website.eu-central-1.amazonaws.com/#tiles/17/S/KR/2018/6/6/0/

I can also visualize them without any problem in Sentinel Hub, so the data seem to be just fine.


Yes, they are available to view, but the scene is not downloadable when we run our script. I downloaded 159 tiles last night, so the script works. When I run the script, it says 0 scenes of 17SKR are available for the 6/6 and 6/11 images. I have also downloaded the 6/6 image from ESA, so I don't believe it's on our end. I just downloaded the 6/9 images from this same pass with the script.

Lanny

I do not know what your script does or how it works.
That being said, it is clear that the data are on AWS and that the files are not corrupt. EO Browser gets data from exactly the same source, not the "quicklook" but the full high-res files. So if the data were not there, the images would not be shown.
I am pretty certain that the problem is on your side.

Perhaps it would be worth checking our Sentinel Hub services, so that you would not need to download any data at all; you would simply ask for "get me NDVI of this parcel on this date".
Check:
https://www.sentinel-hub.com/develop/documentation/api/ogc_api/wcs-request
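
For illustration, a minimal sketch of such a WCS request in Python. The instance ID, the "NDVI" layer name, and the bounding box are placeholders (you create the instance and layer yourself in the Sentinel Hub Dashboard), and the exact parameters should be checked against the documentation linked above:

# Minimal sketch of a Sentinel Hub WCS GetCoverage request.
# INSTANCE_ID, the "NDVI" layer, and the BBOX below are placeholders.
import requests

INSTANCE_ID = "your-instance-id"  # created in the Sentinel Hub Dashboard

params = {
    "SERVICE": "WCS",
    "VERSION": "1.1.2",
    "REQUEST": "GetCoverage",
    "COVERAGE": "NDVI",               # a layer configured in your instance
    "BBOX": "-81.2,34.9,-81.0,35.1",  # illustrative area within tile 17SKR
    "CRS": "EPSG:4326",
    "TIME": "2018-06-06",
    "RESX": "10m",
    "RESY": "10m",
    "FORMAT": "image/tiff",
}

resp = requests.get(
    f"https://services.sentinel-hub.com/ogc/wcs/{INSTANCE_ID}",
    params=params,
    timeout=60,
)
resp.raise_for_status()
with open("ndvi_17SKR_2018-06-06.tiff", "wb") as f:
    f.write(resp.content)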

We run a field-by-field analytics system with multiple vegetation algorithms besides NDVI, so that would not be possible. Canned analysis solutions do not fit our system.

Our script works fine; I am using it on other scene tiles as I write this. 5/27 tiles of this area were uploaded recently. The issue is that these scene tiles do not follow the same pattern.

Not sure what it is, but it's an issue that I can't seem to track down.

You can configure just about any algorithm in Sentinel Hub, see:
https://sentinel-hub.com/custom-processing-scripts

For the specific files: as mentioned above, we cannot help you unless you identify what exactly the problem on the AWS side is supposed to be. That being said, I am pretty confident that nothing has changed with respect to filename patterns etc.

Following is the script we run on the server. I just did 159 15T scene tiles last night. This is for the 6/6/18 scene tile:

The system cannot find this scene tile. Why this would be different for this tile and not others is really my question. I can't see that it is on our end.

Lanny

/Documents/scripts/sentinel_amazon_download_processor.py -s '2018-06-06' -e '2018-06-07' -t 17RKQ 17SKR -f

2018-06-12T14:42:49.325222: Starting up…
2018-06-12T14:42:49.347982: Connected to database.
2018-06-12T14:42:49.393306: Found 0 scenes to download.
2018-06-12T14:42:49.539420: Beginning processing of images.
2018-06-12T14:42:49.681104: Cleaning up.

I don't know, sorry, but neither do I know of any difference on the side of the AWS bucket.
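
One way to rule the bucket in or out, independent of the script's discovery logic, would be to probe the granule's files directly. A minimal sketch in Python; the path follows the bucket's tiles/{zone}/{band}/{square}/{year}/{month}/{day}/{sequence}/ layout, and anonymous HTTP access assumes the bucket is publicly readable rather than requester-pays:

# Probe the 6/6 granule's files directly, bypassing any discovery logic.
import requests

BASE = "https://sentinel-s2-l1c.s3.eu-central-1.amazonaws.com/tiles/17/S/KR/2018/6/6/0"

for name in ("tileInfo.json", "metadata.xml", "B04.jp2"):
    r = requests.head(f"{BASE}/{name}", timeout=30)
    print(name, r.status_code, r.headers.get("Content-Length"))

A 200 with a plausible Content-Length for each file means the objects are there and the problem is in the script's query logic; a 404 would point back at the bucket.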

It doesn't make sense why this pass is not available and others are. I hope someone can figure this out! It's been a recurring problem over the 2 years since we started using AWS.

Lanny

Lanny, can you provide a URL or key to an object/image that fails and one that succeeds with your script, and I can try to see if there is any difference on the AWS side?

Not sure what to provide, as I am not the programmer. I can show you the terminal output of a successful access, like the previous example. Would that suffice?

Here is the output of a successful download where it picked up 3 scene tiles:

2018-06-11T19:58:00.125131: Starting up…
2018-06-11T19:58:00.126073: Connected to database.
2018-06-11T19:58:03.290692: Found 1 scenes to download.
2018-06-11T19:58:04.231565: Starting up…
2018-06-11T19:58:04.262993: Connected to database.
2018-06-11T19:58:04.284788: Downloading bands for 24124522-b3c7-4466-bb15-be87ee8a23ca, tile 14UNU
2018-06-11T19:58:04.743120: Found 2 scenes to download.
2018-06-11T19:58:05.664421: Downloading bands for 24124522-b3c7-4466-bb15-be87ee8a23ca, tile 14UNU
2018-06-11T19:58:05.667900: Downloading bands for 44039e0c-ef7b-4bdd-8232-7cdf68b64b1f, tile 14UNV
2018-06-11T19:58:17.755427: Recording download of 24124522-b3c7-4466-bb15-be87ee8a23ca, tile 14UNU
2018-06-11T19:58:17.903046: Beginning processing of images.
2018-06-11T19:58:17.952480: Creating mosiac for band S2_T14UNU_Jun10_18_B03.jp2
2018-06-11T19:58:17.952622: Creating mosiac for band S2_T14UNU_Jun10_18_B04.jp2
2018-06-11T19:58:17.952785: Creating mosiac for band S2_T14UNU_Jun10_18_B02.jp2
2018-06-11T19:58:17.952866: Creating mosiac for band S2_T14UNU_Jun10_18_B08.jp2
2018-06-11T19:58:18.053919: Merging bands for image S2_T14UNU_Jun10_18.
2018-06-11T19:58:19.423027: Recording download of 44039e0c-ef7b-4bdd-8232-7cdf68b64b1f, tile 14UNV
2018-06-11T19:58:21.388617: Recording download of 24124522-b3c7-4466-bb15-be87ee8a23ca, tile 14UNU
2018-06-11T19:58:21.527143: Beginning processing of images.
2018-06-11T19:58:21.703882: Creating mosiac for band S2_T14UNU_Jun10_18_B03.jp2
2018-06-11T19:58:21.704010: Creating mosiac for band S2_T14UNU_Jun10_18_B04.jp2
2018-06-11T19:58:21.704170: Creating mosiac for band S2_T14UNU_Jun10_18_B02.jp2
2018-06-11T19:58:21.705300: Creating mosiac for band S2_T14UNU_Jun10_18_B08.jp2
2018-06-11T19:58:21.711606: Creating mosiac for band S2_T14UNV_Jun10_18_B03.jp2
2018-06-11T19:58:21.711728: Creating mosiac for band S2_T14UNV_Jun10_18_B04.jp2
2018-06-11T19:58:21.711929: Creating mosiac for band S2_T14UNV_Jun10_18_B02.jp2
2018-06-11T19:58:21.713866: Creating mosiac for band S2_T14UNV_Jun10_18_B08.jp2
2018-06-11T19:58:21.772081: Merging bands for image S2_T14UNU_Jun10_18.
2018-06-11T19:58:21.783749: Merging bands for image S2_T14UNV_Jun10_18.
2018-06-11T19:59:24.058606: Resampling S2_T14UNU_Jun10_18.
2018-06-11T19:59:30.452960: Resampling S2_T14UNV_Jun10_18.
2018-06-11T19:59:30.998483: Resampling S2_T14UNU_Jun10_18.
2018-06-11T20:00:02.161806: Starting up…
2018-06-11T20:00:02.643852: Connected to database.

Not while using an external script, but in EO Browser itself I had a similar problem on one occasion. The preview in the Browser was fine using my custom visualization, but when I tried to export as an image, all the images just ended up black, as if there were no data. It only happened once, so I thought it might have been a fluke. That was about two weeks ago; it has never happened before or since.

We have had that happen many times. The images can turn black or all white, or the bands are stretched wrong, and it seems to happen with no pattern. Our script appears to access the data, but the data might not all be there, or something similar. We have to redownload, which usually fixes the issue. It becomes a major QC issue, as we download thousands of scene tiles per month.
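
If the bad tiles are truncated transfers, one cheap guard would be to compare each downloaded file's size against the object's Content-Length and redownload on mismatch. A sketch only; the function name and inputs are illustrative, and a size match of course cannot catch every form of corruption:

# Sketch of a post-download integrity check for truncated transfers.
import os
import requests

def download_is_complete(url, local_path):
    """Compare the remote object's Content-Length with the local file size."""
    head = requests.head(url, timeout=30)
    head.raise_for_status()
    return int(head.headers["Content-Length"]) == os.path.getsize(local_path)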

I believe you are reading too much into this issue, which is not really an issue. For Sentinel Hub operations we read each and every tile ingested in AWS (circa 300,000 Sentinel-2 tiles per month) and errors are extremely rare. They happen sometimes, but they are just about always related to something on ESA's side (e.g. a corrupt band file, a missing band, wrong metadata, etc.). I think we have noticed "something" related to S3 twice in the last 3 years (we make about 0.5 billion requests to S3 every month), but we could not reproduce it.

In this specific case (the scenes from your opening post) there is certainly no error. The fact that we can visualize them in EO Browser means that we can read the full-resolution band files. The fact that you can click on the links above and download the data using a standard web browser means that things are OK. So the script should be able to do the same.
You should therefore ask your programmers to look into the script in a bit more detail. If some "change in pattern" results in the script returning no results, that is still an error in the script. ESA has changed the "pattern" at least 10 times in the last year, so the script should take such changes into account.
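
For example, one recurring trap (a sketch of a possibility, not a diagnosis of this particular case) is assuming a granule always sits under sequence 0, i.e. tiles/.../2018/6/6/0/, when a reprocessed or additional same-day granule can appear under 1 or higher. Listing the day prefix finds whatever sequences actually exist:

# List the sequence directories that actually exist for a tile and day,
# instead of assuming sequence 0. Assumes AWS credentials are configured;
# add RequestPayer="requester" to the call if the bucket is requester-pays.
import boto3

s3 = boto3.client("s3", region_name="eu-central-1")
resp = s3.list_objects_v2(
    Bucket="sentinel-s2-l1c",
    Prefix="tiles/17/S/KR/2018/6/6/",
    Delimiter="/",
)
for p in resp.get("CommonPrefixes", []):
    print(p["Prefix"])  # e.g. "tiles/17/S/KR/2018/6/6/0/"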

On a side note, try to download thousands of scenes per month from the Copernicus Open Access Hub and you will see what the problems really are. We notice them each and every day while trying hard to get the data to S3.

I understand the frustration with ESA errors; that is why we moved to AWS 2 years ago after 5 months of it. The cart got ahead of the horse.

We are reevaluating our script, and I hope it's our issue, but at the same time the unavailability of certain scene tiles is very suspicious when we have just run batches of anywhere from a few tiles to a couple hundred with no issues.

I appreciate your efforts. It's just that we have customer needs that require us to deliver a commercial product, which we have done for 24 years. We know this industry very well and have built server-based cloud image delivery systems since 2003.

The movement of easy satellite imagery access from R&D to commercial creates a whole set of challenges and value. If this industry is going to grow properly, then the image sources that commercial operations depend on must be delivered correctly, in a timely and consistent manner.

We have seen erratic delivery of Sentinel data even when the press releases say otherwise. I cannot deal with hype. My customers do not allow me that option.

Quality control at all levels is very much required. I am a problem solver; that is why I am still in the satellite imagery business after 24 years. When logic clashes with output, I get in the mix. That is why I am asking these questions: many times programmers have thought I was wrong, but they were not using basic logic.

My 2 cents!

Also, if we have found one of the few errors, then it would be in the 17SKR image on 6/6/18.

I am still not able to get this image to come through, but we did get the 6/11/18 17SKR image to work. What is the difference between the two files?

Please use this as a test case and prove me wrong by showing that it is not the file. I want to find out the issue! I will gladly accept the blame.

Lanny