Hello everyone,
I’m working with eo-learn and, as I mentioned last time, I am trying to extract the coordinates of the points of an EOPatch. You advised me to use eopatch_to_dataset & new_coordinates. I have tried to work with them, but I have been stuck at the same step for many days now. It’s true that I get a result, but when I try to put the coordinates in a DataFrame I get one row with two columns: one containing an array of the latitudes and one containing an array of the longitudes, while I want to split the content of each column into many rows.
Any help please!
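For what it’s worth, a minimal sketch of the flattening you describe, assuming you already have the two coordinate arrays (the `lats`/`lons` values below are made-up stand-ins for the output of new_coordinates):

```python
import numpy as np
import pandas as pd

# Hypothetical 2-D arrays standing in for the latitudes and longitudes
# returned for a tiny 2x2 raster.
lats = np.array([[46.0, 46.0], [45.9, 45.9]])
lons = np.array([[14.0, 14.1], [14.0, 14.1]])

# Instead of storing whole arrays inside single DataFrame cells,
# flatten each array so the DataFrame gets one row per pixel.
df = pd.DataFrame({"lat": lats.ravel(), "lon": lons.ravel()})
print(df)
```

This yields one row per pixel, which can then be written out with `df.to_csv(...)`.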
Converting raster data to a pandas DataFrame seems inefficient; it is typically much faster to perform operations directly on the raster data (in eo-learn, raster data are numpy multidimensional arrays, and numpy has very optimised operations on such data).
Could you perhaps explain what the benefit would be of having this raster data stored as pandas DataFrames?
Thank you for your feedback.
To achieve my objective, I need the coordinates of each pixel in a CSV file.
This CSV file, together with other information, will be used to generate the resulting map.
An EOPatch has a bounding box and a coordinate reference system, and the dimensions of the data give you the size of each pixel.
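To illustrate that last point, a minimal sketch with made-up numbers: given a bounding box (min_x, min_y, max_x, max_y, as an EOPatch bbox exposes) and the height/width of the data array, the pixel size follows directly:

```python
# Hypothetical bounding box in a projected CRS (e.g. UTM metres)
# and a hypothetical raster of 1000 x 1000 pixels.
min_x, min_y, max_x, max_y = 500000.0, 5000000.0, 510000.0, 5010000.0
height, width = 1000, 1000  # rows, columns of the data array

# Pixel size is just the extent divided by the number of pixels.
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
print(pixel_size_x, pixel_size_y)  # 10.0 10.0 (map units per pixel)
```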
The ExportToTiff task will output the features from your EOPatch to a GeoTIFF, which is geo-referenced, so in this sense the EOPatch already is a “map”.
That being said, with the methods mentioned in Extract latitude and longitude from eopatch - #4 by maxim.lamare, you get two arrays, one of latitudes and one of longitudes. If you now crawl through the pixels of your raster data (you could, for instance, loop through them), you have for each pixel its lat, lon and value.
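A minimal sketch of that crawl, with tiny made-up arrays standing in for the two coordinate arrays and a single-band raster of the same shape:

```python
import numpy as np

# Hypothetical stand-ins: latitudes, longitudes and values for a 2x2 raster.
lats = np.array([[46.0, 46.0], [45.9, 45.9]])
lons = np.array([[14.0, 14.1], [14.0, 14.1]])
values = np.array([[0.1, 0.2], [0.3, 0.4]])

# Loop over (row, col) positions: each pixel yields its lat, lon and value.
records = [
    (lats[row, col], lons[row, col], values[row, col])
    for row in range(values.shape[0])
    for col in range(values.shape[1])
]
print(records[0])  # (46.0, 14.0, 0.1)
```

Each tuple in `records` is one pixel, which is exactly the per-pixel (lat, lon, value) structure a CSV row would need.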
Pay attention to the order of dimensions in numpy, so that you don’t inadvertently switch top/bottom or lat/lon.
Again, unless you have a valid reason to do that, I strongly recommend sticking to raster data. It might be easier to convert the other information you mention into raster data (e.g.