Since the release of the new sentinelhub-py version, I have continued to experience slow download speeds.
After some investigation, I found that the config parameter “number_of_download_processes”, and the way the library uses it, are to blame.
For a test case downloading several months of NDVI data for an agricultural parcel:

number_of_download_processes = 0  -> download time 2-5 s
number_of_download_processes = 1  -> download time 4-5 s
number_of_download_processes = 6  -> download time 14 s
number_of_download_processes = 30 -> download time 66 s
This parameter is used in only one place in the library: to calculate the minimum wait time between download requests. The higher the number, the longer the enforced wait. This is completely counter-intuitive, since increasing the number of download processes should (up to a point) reduce download times. The number of threads that actually download requests concurrently is not controlled by this parameter at all; it is chosen automatically by Python's ThreadPoolExecutor class.
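To illustrate why the timings above scale the way they do, here is a minimal model of the behaviour described (this is not the library's actual code; the formula and constants are assumptions chosen to mirror the observation that the enforced wait grows with the parameter):

```python
import os

def modelled_download_time(n_requests: int,
                           number_of_download_processes: int,
                           base_wait: float = 0.05,
                           per_request: float = 0.1) -> float:
    """Rough lower bound on wall-clock time if the client enforces a
    minimum wait of base_wait * number_of_download_processes seconds
    between issuing requests (hypothetical model of the behaviour)."""
    min_wait = base_wait * number_of_download_processes
    return n_requests * min_wait + per_request

# Under this model, total time grows linearly with the parameter,
# which matches the measurements above (0 -> fastest, 30 -> slowest).
t_fast = modelled_download_time(40, 0)
t_slow = modelled_download_time(40, 30)

# Meanwhile the actual worker count is whatever ThreadPoolExecutor
# picks by default, which since Python 3.8 is min(32, cpu_count + 4):
default_workers = min(32, (os.cpu_count() or 1) + 4)
```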
I don’t know whether this feature works as intended for requests over larger areas, but for my use case it is counter-intuitive. On the plus side, I can use this parameter in the config to roughly control the rate at which requests are sent, to stay within my available processing-unit limits.
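For reference, this is how I set the parameter to throttle the request rate, using the standard SHConfig interface (the specific value is just an example; lower values mean a shorter enforced wait between requests under the behaviour described above):

```python
from sentinelhub import SHConfig

config = SHConfig()
# Counter-intuitively, a higher value slows downloads by lengthening
# the enforced wait between requests; raise it only to throttle
# toward a processing-unit limit.
config.number_of_download_processes = 1
config.save()
```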