BIOMASS ESA MAAP L1a Data Access Example#

© ESA, 2025. Licensed under the “European Space Agency Community License”

Author: Saskia Brose (saskia.brose@esa.int)

Acknowledgements: This notebook incorporates suggestions and edits from Serdar Rama, which have been integrated into this version.

Created: 22-01-2026

Modifications:

  • [v2] 26-01-2026: Updated with new queryables: product:type='S1_SCS__1S' instead of productType='S1_SCS__1S'

  • [v3] 11-02-2026: Asset naming for quicklooks changed to quicklook_png and quicklook_kml


This notebook shows how to use the STAC API to query the ESA MAAP catalogue and to stream, preview, or download Biomass data. More examples of how to access Biomass data can be found in the ESA MAAP knowledge base and user guides.

Prerequisites

import os
import pathlib
from tqdm import tqdm
from io import BytesIO
import xml.etree.ElementTree as ET
import requests
import fsspec
from PIL import Image
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import rasterio as rio
from rasterio.windows import Window
import geopandas as gpd
from shapely.geometry import Polygon
import fiona

fiona.supported_drivers["KML"] = "rw"  # Enable kml driver
import folium
from pystac_client import Client

Using the STAC API to query the ESA MAAP stac catalog#

While the discovery of data (querying the ESA MAAP catalogue) does not require any authentication or authorization, accessing the data requires a token generated with an authorized ESA account (EO Sign In) to verify the user. These are the same account and credentials you will have used for the OADS system.

catalog_url = 'https://catalog.maap.eo.esa.int/catalogue/'
catalog = Client.open(catalog_url)

Step 1: Find the collection#

# find all Biomass collections
results = catalog.collection_search(
    filter="platform='Biomass' and parentIdentifier='EOP:ESA:MAAP'",
    datetime="2025-12-01T00:00:00.000Z/..",
)
print(f"{results.matched()} collections found.")

data = results.collection_list_as_dict()
df = pd.json_normalize(data, record_path=["collections"]).sort_values(
    by="title", ignore_index=True
)
df[["id", "title", "extent.temporal.interval"]]
17 collections found.
id title extent.temporal.interval
0 BiomassAux Biomass Auxiliary [[2024-07-31T00:00:00.000Z, None]]
1 BiomassAuxIOC Biomass Auxiliary (IOC) [[2024-07-31T00:00:00.000Z, None]]
2 BiomassAuxRest Biomass Auxiliary Restricted [[2024-07-31T00:00:00.000Z, None]]
3 BiomassCalVal10 Biomass Cal/Val [[2025-04-29T00:00:00.000Z, None]]
4 BiomassLevel0 Biomass Level 0 [[2024-07-31T00:00:00.000Z, None]]
5 BiomassLevel0IOC Biomass Level 0 (IOC) [[2024-07-31T00:00:00.000Z, None]]
6 BiomassLevel1a Biomass Level 1A [[2024-07-31T00:00:00.000Z, None]]
7 BiomassLevel1aIOC Biomass Level 1A (IOC) [[2024-07-31T00:00:00.000Z, None]]
8 BiomassLevel1b Biomass Level 1B [[2024-07-31T00:00:00.000Z, None]]
9 BiomassLevel1bIOC Biomass Level 1B (IOC) [[2024-07-31T00:00:00.000Z, None]]
10 BiomassLevel1c Biomass Level 1C [[2024-07-31T00:00:00.000Z, None]]
11 BiomassLevel1cIOC Biomass Level 1C (IOC) [[2024-07-31T00:00:00.000Z, None]]
12 BiomassLevel2a Biomass Level 2A [[2024-07-31T00:00:00.000Z, None]]
13 BiomassLevel2aIOC Biomass Level 2A (IOC) [[2024-07-31T00:00:00.000Z, None]]
14 BiomassLevel2b Biomass Level 2B [[2024-07-31T00:00:00.000Z, None]]
15 BiomassLevel2bIOC Biomass Level 2B (IOC) [[2024-07-31T00:00:00.000Z, None]]
16 BiomassSimulated Biomass Simulated data [[2024-07-31T00:00:00.000Z, None]]

Currently the Open & Free Biomass collections are:

  • BiomassLevel1a

  • BiomassLevel1b

  • BiomassLevel2a

  • BiomassLevel2b

Results#

Each granule (one Biomass acquisition per product) includes multiple assets, which are different files that serve distinct purposes. These assets can include preview images, scientific data, metadata, and more. Because Biomass products are stored unzipped, you can point to single files instead of downloading the entire archive.

Examples / Tips:

  • Want a quick look? Use the quicklook or thumbnail to preview the data

  • Need to analyze? Work with the individual product assets enclosure_i_abs_tiff and enclosure_i_phase_tiff (.tiff files)

  • Don’t need everything? Avoid the .zip unless you really need to download all files

# show product properties/accessible data and metadata
search.item_collection()[-1]

Quicklook of the data

# convert to df for easy visualisation
data = search.item_collection_as_dict()

df = pd.json_normalize(data, record_path=["features"])[
    [
        "id",  # unique id
        "properties.eopf:datatake_id",  # not unique id
        "properties.product:type",
        "properties.updated",
        "properties.sat:absolute_orbit",
        "properties.sat:orbit_state",
        "assets.thumbnail.href",
        "assets.quicklook_kml.href",
        "assets.quicklook_png.href",
        "assets.enclosure_i_abs_tiff.href",
        "assets.enclosure_i_phase_tiff.href",
        "assets.product.href",
    ]
]

# Rename the columns for readability
df.rename(
    columns={
        "properties.eopf:datatake_id": "dt_id",
        "properties.product:type": "product_type",
        "properties.updated": "last_modified",
        "properties.sat:absolute_orbit": "abs_orbit",
        "properties.sat:orbit_state": "orbit_state",
        "assets.thumbnail.href": "thumbnail",
        "assets.quicklook_kml.href": "quicklook_kml",
        "assets.quicklook_png.href": "quicklook_png",
        "assets.enclosure_i_abs_tiff.href": "abs_product",
        "assets.enclosure_i_phase_tiff.href": "phase_product",
        "assets.product.href": "zipped_product",
    },
    inplace=True,
)

df.sort_values(by="id", ascending=True, ignore_index=True, inplace=True)
df
id dt_id product_type last_modified abs_orbit orbit_state thumbnail quicklook_kml quicklook_png abs_product phase_product zipped_product
0 BIO_S1_SCS__1S_20251121T011412_20251121T011433... 24719280 S1_SCS__1S 2025-12-19T11:26:18Z 3017 descending https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/zipper/bi...
1 BIO_S1_SCS__1S_20251121T011432_20251121T011452... 24719280 S1_SCS__1S 2025-12-19T11:26:18Z 3017 descending https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/zipper/bi...
2 BIO_S1_SCS__1S_20251121T011529_20251121T011550... 24719280 S1_SCS__1S 2025-12-19T11:26:19Z 3017 descending https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/zipper/bi...
3 BIO_S1_SCS__1S_20251121T011548_20251121T011609... 24719280 S1_SCS__1S 2025-12-19T11:26:20Z 3017 descending https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/zipper/bi...
4 BIO_S1_SCS__1S_20251121T011626_20251121T011647... 24719280 S1_SCS__1S 2025-12-19T11:26:20Z 3017 descending https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/zipper/bi...
5 BIO_S1_SCS__1S_20251121T015447_20251121T015508... 24724035 S1_SCS__1S 2025-12-19T11:26:21Z 3018 ascending https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/zipper/bi...
6 BIO_S1_SCS__1S_20251121T031903_20251121T031923... 24729080 S1_SCS__1S 2025-12-19T11:26:24Z 3018 ascending https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/zipper/bi...
7 BIO_S1_SCS__1S_20251121T032000_20251121T032021... 24729080 S1_SCS__1S 2025-12-19T11:26:25Z 3018 ascending https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/zipper/bi...
8 BIO_S1_SCS__1S_20251121T032348_20251121T032408... 24729080 S1_SCS__1S 2025-12-19T11:26:26Z 3019 ascending https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/zipper/bi...
9 BIO_S1_SCS__1S_20251121T032852_20251121T032912... 24729080 S1_SCS__1S 2025-12-19T11:26:29Z 3019 ascending https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/biomass-p... https://catalog.maap.eo.esa.int/data/zipper/bi...
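The granule IDs themselves encode the product type and the sensing window. A small sketch of pulling those fields out by position; the field layout is inferred from the example IDs in this notebook, not taken from a product specification:

```python
# Field layout inferred from the IDs above (illustrative, not authoritative):
# BIO_<product type>_<start time>_<stop time>_<remaining identifiers>
granule_id = "BIO_S1_SCS__1S_20251121T011412_20251121T011433_T_G01_M01_C01_T002_F036_01_DJULXB"

mission = granule_id[:3]          # 'BIO'
product_type = granule_id[4:14]   # 'S1_SCS__1S' (contains underscores, so slice by position)
start = granule_id[15:30]         # '20251121T011412' (sensing start, UTC)
stop = granule_id[31:46]          # '20251121T011433' (sensing stop, UTC)

print(mission, product_type, start, stop)
```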
# We can view the quicklooks of the first ten products: 

fig, axes = plt.subplots(1, len(df), figsize=(2 * len(df), 10), sharey=True)

for ax, (_, row) in zip(axes, df.iterrows()):
    response = requests.get(row.thumbnail)  # thumbnails can be fetched without a token
    img = Image.open(BytesIO(response.content))
    ax.imshow(img)
    ax.set_title(row.dt_id, fontsize=10)
    # ax.axis("off")  # uncomment to hide the pixel axes

plt.tight_layout()
plt.show()
[Figure: quicklook thumbnails of the ten products, labelled by datatake ID]

Stream and plot data

Generate your personal access token here

This token is valid for 90 days. It is an offline token that, in combination with a set of (permanent) client credentials, can be used to request fresh access tokens for the data.

# The client credentials are permanently valid; only the offline token expires
CLIENT_ID = "offline-token"
CLIENT_SECRET = "p1eL7uonXs6MDxtGbgKdPVRAmnGxHpVE"
# TODO: Update this with your .txt
if pathlib.Path("token_yourname.txt").exists():
    with open("token_yourname.txt", "rt") as f:
        OFFLINE_TOKEN = f.read().strip().replace("\n", "")

    url = "https://iam.maap.eo.esa.int/realms/esa-maap/protocol/openid-connect/token"
    data = {
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "grant_type": "refresh_token",
        "refresh_token": OFFLINE_TOKEN,
        "scope": "offline_access openid",
    }

    response = requests.post(url, data=data)
    response.raise_for_status()

    access_token = response.json().get("access_token")
    if not access_token:
        raise RuntimeError("Failed to retrieve access token from IAM response")
    print("Access token retrieved successfully.")
Access token retrieved successfully.
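The token endpoint path (`/realms/esa-maap/protocol/openid-connect/token`) suggests a Keycloak IAM, whose access tokens are JWTs; if so, the expiry can be read from the token payload to decide when to request a new one. A self-contained sketch, demonstrated on a made-up dummy token rather than a real `access_token`:

```python
import base64
import json

def jwt_exp(token: str) -> int:
    """Read the 'exp' claim from a JWT payload without verifying the signature."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))["exp"]

# Dummy token for demonstration; a real access_token comes from the IAM response
payload = base64.urlsafe_b64encode(json.dumps({"exp": 1700000000}).encode()).rstrip(b"=").decode()
dummy_token = f"header.{payload}.signature"
print(jwt_exp(dummy_token))  # → 1700000000
```

Comparing the `exp` claim (Unix seconds) against the current time tells you whether the refresh-token request above needs to be repeated.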
def read_gpd_streaming(url, token):
    fs = fsspec.filesystem("https", headers={"Authorization": f"Bearer {token}"})
    with fs.open(url, "rb") as f:
        kml_data = f.read()  # get file from HTTPS using token
        root = ET.fromstring(kml_data)  # parse kml
        ns = {
            "kml": "http://www.opengis.net/kml/2.2",
            "gx": "http://www.google.com/kml/ext/2.2",
        }  # define namespaces
        latlonquads = root.findall(".//gx:LatLonQuad", ns)  # get gx:LatLonQuad elements

        polygons = []
        for quad in latlonquads:
            coords_elem = quad.find("kml:coordinates", ns)  # 'coordinates' inherits the kml namespace
            if coords_elem is None:
                coords_elem = quad.find("coordinates")  # fall back for un-namespaced KML
            if coords_elem is None or coords_elem.text is None:
                continue
            coords_text = coords_elem.text.strip()
            coord_pairs = coords_text.split()
            coords = [tuple(map(float, pair.split(","))) for pair in coord_pairs]
            if len(coords) >= 4:  # should be a closed polygon
                polygons.append(Polygon(coords))

        gdf = gpd.GeoDataFrame(
            {"name": "unnamed", "geometry": polygons}, crs="EPSG:4326"
        )

        return gdf


def read_rio_streaming(url, token, subset=None):
    fs = fsspec.filesystem("https", headers={"Authorization": f"Bearer {token}"})

    with fs.open(url, "rb") as f:
        with rio.open(f) as src:
            if subset:
                window = Window(
                    col_off=subset[0],
                    row_off=subset[1],
                    width=subset[2],
                    height=subset[3],
                )
                bands = [src.read(i + 1, window=window) for i in range(src.count)]
            else:
                bands = [src.read(i + 1) for i in range(src.count)]
            gcps = src.get_gcps()  # geolocation is encoded in GCPs, not an affine transform

    return src, bands
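A namespace detail worth noting for the KML parsing above: in KML, child elements such as `coordinates` inherit the document's default namespace, so lookups must be namespace-aware. A self-contained check using only the standard library (the fragment and coordinates are made up for illustration):

```python
import xml.etree.ElementTree as ET

# Minimal, made-up KML fragment with a gx:LatLonQuad footprint
kml = """<kml xmlns="http://www.opengis.net/kml/2.2"
     xmlns:gx="http://www.google.com/kml/ext/2.2">
  <GroundOverlay>
    <gx:LatLonQuad>
      <coordinates>10.0,50.0 11.0,50.0 11.0,51.0 10.0,51.0</coordinates>
    </gx:LatLonQuad>
  </GroundOverlay>
</kml>"""

ns = {
    "kml": "http://www.opengis.net/kml/2.2",
    "gx": "http://www.google.com/kml/ext/2.2",
}
root = ET.fromstring(kml)
quad = root.find(".//gx:LatLonQuad", ns)

# 'coordinates' inherits the default (kml) namespace, so the bare lookup misses it
assert quad.find("coordinates") is None
coords_elem = quad.find("kml:coordinates", ns)
coords = [tuple(map(float, pair.split(","))) for pair in coords_elem.text.split()]
print(coords)  # four (lon, lat) corner tuples
```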
# sample
img_n = 5
abs_img_url = df.loc[img_n, "abs_product"]
kml_url = df.loc[img_n, "quicklook_kml"]


print(abs_img_url)
print(kml_url)
https://catalog.maap.eo.esa.int/data/biomass-pdgs-01/BiomassLevel1a/2025/11/21/BIO_S1_SCS__1S_20251121T015447_20251121T015508_T_G01_M01_C01_T002_F036_01_DJULXB/BIO_S1_SCS__1S_20251121T015447_20251121T015508_T_G01_M01_C01_T002_F036_01_DJULXB/measurement/bio_s1_scs__1s_20251121t015447_20251121t015508_t_g01_m01_c01_t002_f036_i_abs.tiff
https://catalog.maap.eo.esa.int/data/biomass-pdgs-01/BiomassLevel1a/2025/11/21/BIO_S1_SCS__1S_20251121T015447_20251121T015508_T_G01_M01_C01_T002_F036_01_DJULXB/BIO_S1_SCS__1S_20251121T015447_20251121T015508_T_G01_M01_C01_T002_F036_01_DJULXB/preview/bio_s1_scs__1s_20251121t015447_20251121t015508_t_g01_m01_c01_t002_f036_map.kml
# open kml
gdf = read_gpd_streaming(kml_url, access_token)

center = gdf.geometry.union_all().centroid
m = folium.Map(location=[center.y, center.x], zoom_start=8, tiles="OpenStreetMap")
folium.GeoJson(gdf).add_to(m)
m
[Interactive folium map showing the product footprint]
# --- load the full scene (pass subset=subset instead of subset=None to stream only a window)
subset = (0, 200, 1000, 1500)  # col_off, row_off, width, height
src, bands = read_rio_streaming(abs_img_url, access_token, subset=None)
# --- plot the images
fig, axs = plt.subplots(4, 1, figsize=(30, 10))
polarisations = ["HH", "HV", "VH", "VV"]
for i, (ax, p) in enumerate(zip(axs.flatten(), polarisations)):
    ax.imshow(np.rot90(bands[i]), vmin=0, vmax=2 * np.nanmean(bands[i]), cmap="gray", aspect="equal")
    ax.set_title(p)
    ax.set_xticks([])
    ax.set_yticks([])

plt.tight_layout(pad=1.0, w_pad=0.5, h_pad=1.0)
plt.show()
[Figure: amplitude images of the four polarisations (HH, HV, VH, VV)]

Download data#

You can also use your token and the URL to download the data rather than only streaming it.

Bulk download: to bulk download data, run the cell below in a loop, passing a list of .tiff (or other) URLs fetched with the code shown previously.

def download_file_with_bearer_token(url, token, folder_path, disable_bar=False):
    try:
        headers = {"Authorization": f"Bearer {token}"}
        response = requests.get(url, headers=headers, stream=True)
        response.raise_for_status()  # Raise an exception for bad status codes
        file_size = int(response.headers.get("content-length", 0))

        chunk_size = 8 * 1024 * 1024  # bytes - 8 MiB per chunk
        file_name = (
            url.rsplit("/", 1)[-1]
            if "." in url.rsplit("/", 1)[-1]
            else url.rsplit("/", 1)[-1] + ".zip"  # zipper URLs carry no extension
        )
        os.makedirs(folder_path, exist_ok=True)
        file_path = os.path.join(folder_path, file_name)
        with open(file_path, "wb") as f, tqdm(
            desc=file_path,
            total=file_size,
            unit="iB",
            unit_scale=True,
            unit_divisor=1024,
            disable=disable_bar,
        ) as bar:
            for chunk in response.iter_content(chunk_size=chunk_size):
                read_size = f.write(chunk)
                bar.update(read_size)

        if disable_bar:
            print(f"File downloaded successfully to {file_path}")

    except requests.exceptions.RequestException as e:
        print(f"Error downloading file: {e}")
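The local filename in the function above is derived from the last URL path segment, with `.zip` appended when the segment has no extension (as with the zipper endpoint URLs). That rule, isolated here for illustration on made-up URLs:

```python
def filename_from_url(url: str) -> str:
    """Mirror of the naming rule in download_file_with_bearer_token:
    last path segment as-is, plus '.zip' when it lacks an extension."""
    name = url.rsplit("/", 1)[-1]
    return name if "." in name else name + ".zip"

print(filename_from_url("https://host/measurement/scene_i_abs.tiff"))  # scene_i_abs.tiff
print(filename_from_url("https://host/zipper/BIO_S1_SCS__1S_PRODUCT"))  # BIO_S1_SCS__1S_PRODUCT.zip
```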
folder_path = "maap-data/"
# Example, only download the enclosure_i_abs_tiff
download_file_with_bearer_token(abs_img_url, access_token, folder_path)
maap-data/bio_s1_scs__1s_20251121t015447_20251121t015508_t_g01_m01_c01_t002_f036_i_abs.tiff: 100%|██████████| 225M/225M [00:05<00:00, 40.2MiB/s]