Sentinel-2 inventory files

I would like to create a CSV containing all the products available in the Sentinel-2 bucket and their URIs.
I understand I should use the inventory files. However, it seems impossible to derive the proper URI of a product (to download the JPEG 2000 files) from its product ID, and vice versa.

Product: S2B_MSIL1C_20210118T083159_N0209_R021_T34NHK_20210118T103746
URI: s3://sentinel-s2-l1c/tiles/34/N/HK/2021/1/18/1
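To illustrate the problem: everything in the tile URI except the trailing sequence number is recoverable from the product ID alone. A minimal sketch (the regex below is my own reading of the L1C naming convention, not something from the bucket documentation):

```python
import re

def tile_prefix(product_id: str) -> str:
    """Derive the deterministic part of the tile URI from a product ID.

    The MGRS tile (T34NHK -> 34/N/HK) and the sensing date come straight
    from the name; only the trailing sequence number cannot be derived.
    """
    m = re.match(
        r"S2[AB]_MSIL1C_(\d{4})(\d{2})(\d{2})T\d{6}_N\d{4}_R\d{3}"
        r"_T(\d{2})([A-Z])([A-Z]{2})_",
        product_id + "_",  # tolerate IDs without a trailing underscore
    )
    if m is None:
        raise ValueError(f"unrecognised product ID: {product_id}")
    year, month, day, zone, band, square = m.groups()
    # Month and day are not zero-padded in the bucket layout.
    return (f"s3://sentinel-s2-l1c/tiles/{zone}/{band}/{square}/"
            f"{int(year)}/{int(month)}/{int(day)}/")

print(tile_prefix(
    "S2B_MSIL1C_20210118T083159_N0209_R021_T34NHK_20210118T103746"))
# -> s3://sentinel-s2-l1c/tiles/34/N/HK/2021/1/18/
```

The missing `/1` at the end is exactly the sequence number asked about below.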

How is the sequence number generated? Does it follow a chronological order? Is there a way to link it to a specific productId?

I would like to find a better way than downloading/parsing all productInfo.json.

Thanks! Thomas

You can check the tiles’ paths in the productInfo.json.

Generally, the sequence number is generated in the order of ingestion.

Hello Grega,
thanks for your reply.
I know it is available in the JSON files but, as I mentioned, I do not want to download all of them for performance reasons.
I will try to generate the URIs using the ingestion order and see if everything matches.

If I might suggest a modification, I think it would be a good thing to add an empty file named after the product ID within the tile folder.

This way, using just the inventory files, we could rebuild the whole product list and the URIs needed to get the tiles. Otherwise, we cannot know which sequence number is associated with a product without downloading the JSON files; using the ingestion date is cumbersome and not robust if products are ever deleted.
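To make the suggestion concrete, here is a sketch of how such marker files would be consumed. The marker key layout (`tiles/.../<seq>/<product-id>`) is hypothetical, since nothing like it exists in the bucket yet:

```python
import csv
import io
import re

# Hypothetical marker key: tiles/<zone>/<band>/<square>/<y>/<m>/<d>/<seq>/<product-id>
MARKER = re.compile(
    r"^(tiles/\d{1,2}/[A-Z]/[A-Z]{2}/\d{4}/\d{1,2}/\d{1,2}/\d+)/"
    r"(S2[AB]_MSIL1C_\w+)$"
)

def products_from_inventory(lines):
    """Yield (product_id, tile_uri) pairs from S3 inventory CSV lines."""
    for bucket, key, *_ in csv.reader(lines):
        m = MARKER.match(key)
        if m:
            yield m.group(2), f"s3://{bucket}/{m.group(1)}"

# Example with one fabricated inventory row containing a marker key.
inventory = io.StringIO(
    '"sentinel-s2-l1c","tiles/34/N/HK/2021/1/18/1/'
    'S2B_MSIL1C_20210118T083159_N0209_R021_T34NHK_20210118T103746"\n'
)
for product_id, uri in products_from_inventory(inventory):
    print(product_id, uri)
```

With markers in place, one pass over the inventory CSVs would yield the complete product-to-URI mapping without touching any JSON file.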

Yeah, I can imagine that things are a bit more complex than one would have wanted. The reason that products and tiles are separated at all is that ESA initially distributed Sentinel-2 products with tens (sometimes hundreds) of granules, and there was a clear need for structured access to the granules. At the time we did not want to duplicate the product data in each of the granules/tiles. Since then, ESA obviously realised it is better to have single-granule products, so there is no need for this separation anymore. Still, to remain consistent with the past, we left it as it is.

Your suggestion does sound reasonable. That said, I cannot promise or commit to if/when we would do that, as our effort in maintaining this archive is not paid for, so we handle it as a low-priority task. Even then, there is a lot of work just to ensure that all the TBs of data are copied…

Once this is implemented, I will let you know.

FYI, so far we have not deleted any of the products, so there is no need to worry about this constraint.

Thanks, we will be the first to try / test if this gets implemented!