Good morning folks,
As of this morning, a process I run to get valid scenes over a given area has started failing with 403 errors from the Sentinel Hub OpenSearch API. My most recent successful run was on the 5th, so I'm not sure exactly when in that interval the failures began.
The traceback:
[ERROR] DownloadFailedException: Failed to download from:
http://opensearch.sentinel-hub.com/resto/api/collections/Sentinel2/search.json?startDate=2020-10-10T00%3A00%3A00&completionDate=2023-01-09T23%3A59%3A59&box=-64.81561785%2C-8.22966416%2C-63.81704292%2C-7.23368282&maxRecords=500&index=1
with HTTPError:
403 Client Error: Forbidden for url: http://opensearch.sentinel-hub.com/resto/api/collections/Sentinel2/search.json?startDate=2020-10-10T00%3A00%3A00&completionDate=2023-01-09T23%3A59%3A59&box=-64.81561785%2C-8.22966416%2C-63.81704292%2C-7.23368282&maxRecords=500&index=1
Server response: ""
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 244, in lambda_handler
main(granule, ignore_prior_ll=ignore_prior_ll, early_date=early_date,
File "/var/task/lambda_function.py", line 215, in main
imagery_dict = query(granule, minx, miny, maxx, maxy, early_date, latest_used,
File "/var/task/lambda_function.py", line 142, in query
for tile_info in get_area_info(search_bbox, search_time_interval):
File "/opt/python/sentinelhub/opensearch.py", line 172, in search_iter
response = client.get_json(url)
File "/opt/python/sentinelhub/download/client.py", line 212, in get_json
return self._single_download(request, decode_data=True)
File "/opt/python/sentinelhub/download/client.py", line 120, in _single_download
response_content = self._execute_download(request)
File "/opt/python/sentinelhub/download/handlers.py", line 44, in new_download_func
return download_func(self, request)
File "/opt/python/sentinelhub/download/handlers.py", line 27, in new_download_func
raise DownloadFailedException(_create_download_failed_message(exception, request.url)) from exception
And the offending line of code:
for tile_info in get_area_info(search_bbox, search_time_interval):
(search_bbox and search_time_interval are calculated earlier in the code)
The strange thing is that I can copy the failing URL into my browser and get a JSON listing with no problem. The failure occurs inside an AWS Lambda function, so I'm wondering if that's the source of the problem (perhaps Sentinel Hub is blocking queries coming from AWS IP ranges?)…but if it is, I'm not sure what to do. Any advice would be appreciated.
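One check I'm planning to try, in case it helps narrow things down (this is a sketch, not my production code, and it assumes the `requests` library is available in the Lambda environment): rebuild the failing query from its parameters and fetch it twice from inside Lambda, once with the default Python User-Agent and once with a browser-like one. If the browser-like one succeeds where the default fails, the 403 is a User-Agent filter rather than an IP-range block.

```python
# Rebuild the failing OpenSearch URL from its parameters (values copied
# from the URL in the traceback above) and provide a helper to fetch it
# with an arbitrary User-Agent header.
from urllib.parse import urlencode

BASE = "http://opensearch.sentinel-hub.com/resto/api/collections/Sentinel2/search.json"

# Dict order is preserved, so urlencode reproduces the original query string.
params = {
    "startDate": "2020-10-10T00:00:00",
    "completionDate": "2023-01-09T23:59:59",
    "box": "-64.81561785,-8.22966416,-63.81704292,-7.23368282",
    "maxRecords": 500,
    "index": 1,
}
url = f"{BASE}?{urlencode(params)}"


def status_with_user_agent(search_url, user_agent=None):
    """Return the HTTP status code for search_url; None keeps requests' default UA."""
    # requests is imported lazily so the URL-building part runs without it.
    import requests

    headers = {"User-Agent": user_agent} if user_agent else {}
    return requests.get(search_url, headers=headers, timeout=30).status_code
```

Running `status_with_user_agent(url)` versus `status_with_user_agent(url, "Mozilla/5.0 (X11; Linux x86_64)")` from inside the Lambda should distinguish the two cases: a 403/200 split points at User-Agent filtering, while 403 for both suggests the AWS egress IP itself is being denied.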