429 Client Error for url: https://services.sentinel-hub.com/oauth/token

Hi there,

I’m seeing a lot of “429 Client Error: Too Many Requests for url: https://services.sentinel-hub.com/oauth/token” responses under high traffic. I’m caching the SentinelHubSession and SHConfig objects for almost an hour (3600 seconds) and still getting 429s for OAuth.

Using sentinelhub==3.9.2

I believe our client ID has enough PUs, and we are not seeing any 429s for requests other than OAuth. Our license (actually our customer’s license; we act as a proxy service) supports very high PU limits.

How our code is structured:

We initialise the “SentinelHubClient” class (defined below) with a client ID and client secret and cache the instance for 3600 seconds.

We use the “SentinelHubSession”’s session_headers getter to get the token and add it to the headers before sending each HTTP request.

We are a multi-tenant service: we create and cache multiple “SentinelHubClient” instances belonging to multiple client IDs. We also run on Kubernetes, so we have many pods (containers) that each create and cache a “SentinelHubClient” for a given client ID.

Code where we create and cache the “SentinelHubClient”. The class is defined as:

from sentinelhub import SHConfig, SentinelHubCatalog, SentinelHubSession


class SentinelHubClient:
    sh_config: SHConfig
    sh_session: SentinelHubSession
    sh_catalog: SentinelHubCatalog

    def __init__(self, client_id: str, client_secret: str):
        # Per-tenant config carrying the tenant's OAuth credentials
        self.sh_config = SHConfig()
        self.sh_config.sh_client_id = client_id
        self.sh_config.sh_client_secret = client_secret
        self.sh_config.number_of_download_processes = 1
        self.sh_config.max_download_attempts = 3
        self.sh_config.download_timeout_seconds = 120
        # Creating the session fetches an OAuth token right away
        self.sh_session = SentinelHubSession(config=self.sh_config)
        self.sh_catalog = SentinelHubCatalog(config=self.sh_config)

And a SentinelHubClient is created only if it is not already present in the cache:

    def __get_sh_client(self, client_id: str, client_secret: str) -> SentinelHubClient:
        """Get SH client from cache or create if it doesn't exist"""
        cache_key = f"{client_id}@{hash(client_secret)}"
        try:
            sh_client = self.__sh_client_dictionary[cache_key]
        except KeyError:
            sh_client = SentinelHubClient(client_id, client_secret)
            self.__sh_client_dictionary[cache_key] = sh_client

        return sh_client

Our cache:

    def __init__(self) -> None:
        # TTLCache comes from the cachetools package.
        # I'm 100% sure we are not hitting the max size of the cache; we only have 4-5 unique client IDs.
        self.__sh_client_dictionary: TTLCache = TTLCache(maxsize=100, ttl=3600)
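
One caveat about this pattern: TTLCache itself is not thread-safe, so if many threads call __get_sh_client concurrently before the first entry lands in the cache, each of them can create its own SentinelHubClient (and thus trigger its own OAuth token fetch). A lock-guarded variant would look roughly like this; the ClientCache name and __lock attribute are illustrative, not part of our actual code:

import threading

from cachetools import TTLCache


class ClientCache:
    def __init__(self) -> None:
        self.__sh_client_dictionary: TTLCache = TTLCache(maxsize=100, ttl=3600)
        self.__lock = threading.Lock()

    def get_sh_client(self, client_id: str, client_secret: str) -> SentinelHubClient:
        cache_key = f"{client_id}@{hash(client_secret)}"
        # Serialise the check-then-create so only one thread per pod
        # ever constructs (and authenticates) a client for a given key.
        with self.__lock:
            try:
                return self.__sh_client_dictionary[cache_key]
            except KeyError:
                sh_client = SentinelHubClient(client_id, client_secret)
                self.__sh_client_dictionary[cache_key] = sh_client
                return sh_client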

Usage:

This is how we fetch and use the session headers:

  1. Passing session headers to the Python requests library (see the note after these snippets):
            response = requests.post(
                self.SENTINEL_HUB_SEARCH_URL,
                json=payload,
                headers=self.__get_sh_client(
                    search_request_body.provider_client_id, search_request_body.provider_client_secret
                ).sh_session.session_headers,  # <== this line throws the exception with the 429 OAuth error
            )
  2. Passing the SentinelHubClient’s config to a SentinelHubRequest (focus on the config parameter, please ignore the rest):
        request_all_bands = SentinelHubRequest(
            evalscript=<eval script here>,
            input_data=<input data here>,
            responses=<response>,
            geometry=<geometry>,
            resolution=(<res_x>, <res_y>),
            config=self.__get_sh_client(client_id, client_secret).sh_config,  # <== focus here
            data_folder=<data_folder>,
        )
## And then we use the SentinelHubRequest like this:
                response = request_all_bands.get_data(save_data=True, decode_data=False)
## The response body is "429 Client Error: Too Many Requests for url: https://services.sentinel-hub.com/oauth/token".
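
A note on example 1: as far as I can tell from sentinelhub-py, session_headers only performs a new OAuth request when the cached token is close to expiry; otherwise it just wraps the token already held in memory. Roughly my understanding of what it does (not a verbatim copy of the library source):

from sentinelhub import SHConfig, SentinelHubSession

config = SHConfig()
config.sh_client_id = "<client id>"
config.sh_client_secret = "<client secret>"

session = SentinelHubSession(config=config)  # one OAuth request happens here
# Subsequent accesses reuse the cached token until it is near expiry:
headers = session.session_headers  # {"Authorization": "Bearer <access token>"}

So a cached session should not be hitting the /oauth/token endpoint on every request.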

There are many pods (~30-40) running the same piece of code at the same time, and in each pod many threads run simultaneously; but within a given pod the __sh_client_dictionary (TTLCache) is shared across threads, so it guarantees that the SentinelHubClient instance for a given client ID is created only once per pod.

Given the above details, what could be the reason for the high rate of 429 responses we are seeing for the OAuth token request?

Edit 1: We have as many as 100 pods running at times, and each pod has 2 processes, so you can assume we have ~200 SentinelHubSessions cached at any given time.

But since these sessions are cached and a new token is fetched only once every hour, I assume there shouldn’t be more than 200 × 1 = 200 token refresh requests per hour, which is very little traffic.

So why is there an artificial limit of <100 active authentication sessions? (See “Sharing Sentinel Hub authentication session” in the Sentinel Hub 3.10.1 documentation.)

How can we share a SentinelHubSession authentication session across multiple compute instances? We run our code in Kubernetes pods, and pods scale up and down based on load.

And how can we use a single authentication session across pods? We can share data across pods using a cache or a DB, but how do we use a single authentication session that refreshes the token when expiration is near?

@gmilcinski Can you help us?

Hi @MSFTFB,

Have you thought about creating a main function that refreshes the token every hour and caches it? Then each pod can create a session from the cached token (session = SentinelHubSession.from_token(token); more details in the docs). When the session has expired, the pod should read the cached token again and create a new session.
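
A minimal sketch of that idea; the Redis store, host, and key name here are placeholders, and any cache reachable from all pods would do:

import json

import redis
from sentinelhub import SHConfig, SentinelHubSession

TOKEN_KEY = "sh-oauth-token"  # placeholder cache key
store = redis.Redis(host="redis", port=6379)


def refresh_token(config: SHConfig) -> None:
    # Run in a single "main" process on a schedule (e.g. every ~55 minutes).
    session = SentinelHubSession(config=config)  # this performs the OAuth request
    store.set(TOKEN_KEY, json.dumps(session.token))


def get_shared_session() -> SentinelHubSession:
    # Run in each pod; no OAuth request is made here.
    token = json.loads(store.get(TOKEN_KEY))
    return SentinelHubSession.from_token(token)

Note that a session created with from_token cannot refresh itself, so when the token expires the pod has to call get_shared_session again, as described above.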

Thanks for the reply @chung.horng

Are you suggesting that we run a separate process that refreshes the token when it is near expiry, and share the same token across different pods? That adds complexity, because the token is a secret and so it needs to be stored only in an in-memory cache to be shared between pods.

Why is the limit so restrictive? Since the token validity is 1 hour (3600 seconds), why is it an issue if there are, say, 100-500 different pods, each having its own token?

Assuming there are 1000 pods (separate instances) running, there won’t be more than 1000 token fetch requests per hour (3600 seconds), which is ~17 requests per minute (RPM). That is very low traffic by any standard.

I’m asking you to ease the restriction because, in distributed and microservice architectures, compute instances scale horizontally, are often stateless, and need to scale independently.

I don’t think 17 requests per minute (less than 1 request per second) is that much traffic to handle.

Can you please ease the restriction? @gmilcinski

The complexity is not in token creation, but in keeping many tokens active.

If you need custom constraints on the account, reach out to us and we will consider a custom package.
