Creating a database for deep learning

After reading your blog post (Area Monitoring — Crop Type Marker | by Sinergise | Sentinel Hub Blog | Medium) and the two cited academic papers, I am wondering what the best way would be to reproduce such a database and retrain those models.

Downloading entire tiles is a cheap but time-consuming option, and it makes no use of the Sentinel Hub WMS service.
However, even if I only have 500 field parcels (for example) and I need three years of Sentinel-2 data like in the blog, using WMS requests would blow up into thousands of requests/processing units.

Is there a better way?

Shahar

A thousand processing units costs approximately 1 EUR, so using a few thousand of them should not be too expensive. A trial account should be sufficient for 500 parcels.
I would also recommend you take a look at the ESA-sponsored packages:

Which API would be best to tackle this task?
Assuming I have a CSV with 500 rows, where each row has a geometry column and some other columns like crop type, season dates, etc.:
Should I iterate over it row by row, making a WMS request for each parcel, or is there a cleaner approach using the Batch API?
From what I saw, the Batch API is mostly suited to working with EOPatches rather than individual geometries.

For individual geometries, Batch is indeed not the best solution, so it is best to go row by row.
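
For illustration, here is a minimal sketch of that row-by-row loop using the sentinelhub-py package and the Process API (SentinelHubRequest). The CSV column names ("geometry" as WKT, "start_date", "end_date"), the band selection, and the evalscript are assumptions you would adapt to your own file; a WmsRequest would follow the same pattern if you prefer the OGC services.

```python
# Rough sketch, not a drop-in solution: one Process API request per parcel.
# Assumes a CSV with a WKT "geometry" column plus "start_date" / "end_date"
# columns (placeholder names), and OAuth credentials configured for sentinelhub-py.
import pandas as pd
from shapely import wkt

from sentinelhub import (
    CRS,
    DataCollection,
    Geometry,
    MimeType,
    SentinelHubRequest,
    SHConfig,
    bbox_to_dimensions,
)

config = SHConfig()  # reads sh_client_id / sh_client_secret from your profile

# Example evalscript returning four 10 m bands as float reflectances
EVALSCRIPT = """
//VERSION=3
function setup() {
  return {
    input: ["B02", "B03", "B04", "B08"],
    output: { bands: 4, sampleType: "FLOAT32" }
  };
}
function evaluatePixel(sample) {
  return [sample.B02, sample.B03, sample.B04, sample.B08];
}
"""

parcels = pd.read_csv("parcels.csv")

for _, row in parcels.iterrows():
    geometry = Geometry(wkt.loads(row["geometry"]), crs=CRS.WGS84)
    size = bbox_to_dimensions(geometry.bbox, resolution=10)  # pixels at 10 m

    # Note: a single request over a multi-year interval returns one mosaic;
    # for a full time series you would loop over shorter sub-intervals as well.
    request = SentinelHubRequest(
        evalscript=EVALSCRIPT,
        input_data=[
            SentinelHubRequest.input_data(
                data_collection=DataCollection.SENTINEL2_L2A,
                time_interval=(row["start_date"], row["end_date"]),
                mosaicking_order="leastCC",
            )
        ],
        responses=[SentinelHubRequest.output_response("default", MimeType.TIFF)],
        geometry=geometry,
        size=size,
        config=config,
    )

    data = request.get_data()[0]  # numpy array covering the parcel's bounding box
```

Each request covers just one parcel's bounding box, which keeps the processing-unit cost per request small.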

As you are working at the object level, you might also want to take a look at our Statistical API. Here, instead of working with pixels, you work with JSON, so you have orders of magnitude less data. Our data scientists make good use of it.
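
To make that concrete, here is a minimal sketch of a single-parcel Statistical API request with sentinelhub-py (SentinelHubStatistical). The example polygon, time range, and NDVI evalscript are placeholders, not values from the blog post.

```python
# Rough sketch: daily NDVI statistics for one (hypothetical) parcel polygon.
from shapely.geometry import Polygon

from sentinelhub import CRS, DataCollection, Geometry, SentinelHubStatistical, SHConfig

config = SHConfig()  # expects OAuth credentials, as above

# Statistical API evalscripts must expose a "dataMask" output next to the data output
NDVI_EVALSCRIPT = """
//VERSION=3
function setup() {
  return {
    input: [{ bands: ["B04", "B08", "dataMask"] }],
    output: [
      { id: "ndvi", bands: 1 },
      { id: "dataMask", bands: 1 }
    ]
  };
}
function evaluatePixel(sample) {
  let ndvi = (sample.B08 - sample.B04) / (sample.B08 + sample.B04);
  return {
    ndvi: [ndvi],
    dataMask: [sample.dataMask]
  };
}
"""

# Placeholder parcel geometry in UTM zone 33N (metres),
# so the 10 m resolution below is in the same units
parcel = Geometry(
    Polygon([(500000, 5100000), (500300, 5100000), (500300, 5100300), (500000, 5100300)]),
    crs=CRS.UTM_33N,
)

request = SentinelHubStatistical(
    aggregation=SentinelHubStatistical.aggregation(
        evalscript=NDVI_EVALSCRIPT,
        time_interval=("2019-01-01", "2021-12-31"),
        aggregation_interval="P1D",  # one entry per available observation
        resolution=(10, 10),
    ),
    input_data=[SentinelHubStatistical.input_data(DataCollection.SENTINEL2_L2A)],
    geometry=parcel,
    config=config,
)

stats = request.get_data()[0]  # JSON: per-interval mean/min/max/stDev of NDVI
```

The response is a small JSON time series per parcel rather than imagery, which is what keeps the data volume so low.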