Conversation
Remove DHIS2 connection string references from setup section, add /extents and /datasets to endpoint table, and expand STAC example to show catalog discovery before opening a dataset with xarray.
`uv run uvicorn` resolves the uvicorn binary via `PATH`, which picks up conda's uvicorn when the base environment is active. Using `python -m uvicorn` forces the venv's interpreter and avoids the "module not found" error in the reload subprocess.
Datasets use `x`/`y` dimension names, not `latitude`/`longitude`. The direct-access example now reads `open_kwargs` from the STAC collection rather than hardcoding `consolidated=False`, which fails for Zarr v3 stores.
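A minimal sketch of the pattern this commit describes: reading per-store `open_kwargs` from the STAC asset instead of hardcoding `consolidated=False`. The `zarr` asset key and the `xarray:open_kwargs` field name are assumptions about this API's STAC output, not confirmed by the source:

```python
def zarr_open_args(collection: dict) -> tuple[str, dict]:
    """Return (href, open_kwargs) for a STAC collection's Zarr asset.

    Taking open_kwargs from the collection avoids hardcoding options
    like consolidated=False, which breaks on Zarr v3 stores.
    """
    assets = collection.get("assets")
    if not isinstance(assets, dict):
        raise ValueError("malformed STAC collection: 'assets' is not a dict")
    zarr = assets.get("zarr")
    if not isinstance(zarr, dict) or "href" not in zarr:
        raise ValueError("collection has no usable 'zarr' asset")
    # "xarray:open_kwargs" is an assumed field name for per-store options
    return zarr["href"], dict(zarr.get("xarray:open_kwargs", {}))
```

With xarray this would be used as `href, kw = zarr_open_args(coll)` followed by `xr.open_zarr(href, **kw)`.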
Step-by-step guide covering extent configuration, environment setup, first ingestion, and ERA5-Land DestinE authentication. Links added from README and user_guide.md.
…stance

- Add a 30s timeout to both `httpx.get()` calls in `client.py` to prevent indefinite hangs on network issues
- Set `allow_credentials=False` in `CORSMiddleware`; combining `allow_origins=["*"]` with `allow_credentials=True` is a CORS spec violation and a security footgun
- Use `isinstance(x, (str, Path))` instead of the `str | Path` union syntax for broader clarity (the tuple form is unambiguous across all Python versions)
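The CORS item above can be sketched as the keyword arguments such a middleware would receive. Parameter names follow Starlette's `CORSMiddleware`; the method list is an assumption for a read-only API:

```python
def cors_middleware_kwargs() -> dict:
    """CORS settings for a public, cookie-less data API.

    The CORS spec forbids combining a wildcard origin with credentials:
    browsers reject Access-Control-Allow-Origin: * when
    Access-Control-Allow-Credentials: true is also sent, so a public
    API must disable credentials to keep the wildcard origin.
    """
    return {
        "allow_origins": ["*"],
        "allow_credentials": False,  # must be False with allow_origins=["*"]
        "allow_methods": ["GET"],    # assumed: read-only data API
        "allow_headers": ["*"],
    }
```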
…x plural in docs
- Validate `href` in each STAC child link before slicing the id from it
- Check that `assets` is a dict before calling `.get("zarr")` to avoid AttributeError on malformed STAC responses
- Fix "Confirm configured extents" heading to singular in managed data guide
Previously, built-in dataset YAMLs were located by walking four directory levels up from `datasets.py` and appending `data/datasets/`. This works in a source checkout or editable install but fails silently in a wheel install: the package lands in `site-packages/` and the project-root `data/` directory is never included in the wheel, causing `list_datasets()` to crash with "Path is not a directory".

Move the YAMLs into the package at `src/climate_api/data/datasets/` and load them via `importlib.resources.files()`. `importlib.resources` is package-aware and resolves correctly whether the package is an unpacked directory or a zip inside a wheel. User-provided `datasets_dir` (from `CLIMATE_API_CONFIG`) continues to use regular `Path` objects via `_load_from_dir()` — that path is always on disk.
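A hedged sketch of the `importlib.resources` pattern described above. The default package name is the path the commit mentions; the function name and the raw-text return value are illustrative, not the project's actual API:

```python
from importlib.resources import files

def load_builtin_templates(package: str = "climate_api.data.datasets") -> dict[str, str]:
    """Load bundled YAML templates via traversable package resources.

    files() resolves correctly whether the package is an unpacked
    directory in site-packages or zipped inside a wheel, unlike
    Path(__file__).parent... traversal.
    """
    templates = {}
    for entry in files(package).iterdir():
        if entry.name.endswith((".yaml", ".yml")):
            # read_text() works for zip-backed resources too
            templates[entry.name.rsplit(".", 1)[0]] = entry.read_text(encoding="utf-8")
    return templates
```

The real loader would parse each file with `yaml.safe_load`; returning raw text keeps the sketch dependency-free.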
…ts, safer conftest teardown

- Raise ValueError (not KeyError) when the Zarr asset is missing or not a dict — all other error paths in `open_dataset` raise ValueError, so callers catch one exception type
- Inject `id` into a copy of the link dict instead of mutating the parsed JSON object in place
- Use `os.environ.pop()` instead of `del` in the conftest session fixture teardown to avoid KeyError if the env var was already removed by a test's monkeypatch
- Replace the `next()` generator in the setup guide with an explicit list so an empty catalog gives an IndexError with clear context rather than StopIteration
…ative path

Walking `__file__` four levels up to find `data/downloads/` fails when the package is installed with pip because `__file__` lands in `site-packages/` and the project root is not accessible. The directory may also be non-writable. Default to `$XDG_DATA_HOME/climate-api/downloads` (`~/.local/share/climate-api/downloads` if `XDG_DATA_HOME` is unset), which is always user-writable. The existing `CACHE_OVERRIDE` env var continues to work and takes precedence, keeping Docker and dev deployments unchanged.
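A minimal sketch of the resolution order described above, with the environment passed in for testability; the function name is hypothetical:

```python
import os
from pathlib import Path

def default_download_dir(env=os.environ) -> Path:
    """Resolve the download cache directory.

    Precedence: CACHE_OVERRIDE, then $XDG_DATA_HOME/climate-api/downloads,
    then ~/.local/share/climate-api/downloads (the XDG default).
    """
    override = env.get("CACHE_OVERRIDE")
    if override:
        return Path(override)
    data_home = env.get("XDG_DATA_HOME")
    base = Path(data_home) if data_home else Path.home() / ".local" / "share"
    return base / "climate-api" / "downloads"
```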
abyot left a comment
Review Summary
This PR is well-structured and the direction is good. The package/installability work, config model, client, docs, and supporting tests are all substantial improvements.
The main issue is that the installable-package transition is incomplete. Built-in dataset templates and the download cache were decoupled from the repo tree, but three other runtime paths still assume repo-relative writable/readable files. That breaks the new pip/wheel install story and should be fixed before merge.
Findings
src/climate_api/ingestions/services.py:50-52
The artifact store still resolves to a repo-relative path:
DATA_DIR = Path(__file__).resolve().parent.parent.parent.parent / "data"
ARTIFACTS_DIR = DATA_DIR / "artifacts"
ARTIFACTS_INDEX_PATH = ARTIFACTS_DIR / "records.json"

On a wheel install this will usually land inside `site-packages` and be non-writable. `ensure_store()` will fail on first ingestion or sync. This is a must-fix because the CLI can start but primary operations fail immediately.
Suggested fix:
Use the same XDG-style resolution pattern already applied to DOWNLOAD_DIR.
`src/climate_api/publications/services.py:19-24` and `src/climate_api/startup.py:16-19`
pygeoapi still depends on repo-relative paths in two places.
There are two separate problems:
- Writable output: `data/pygeoapi/pygeoapi-config.yml` and `pygeoapi-openapi.yml` are still written into a repo/package-relative directory, which will fail under a wheel install.
- Read-only input: `config/pygeoapi/base.yml` is still read from a repo-relative path and is not under `src/`. With the current build setup, it is not guaranteed to be present in the installed wheel. `_load_base_config()` can fail with `FileNotFoundError` even if the writable output path is fixed.
Suggested fix:
- move writable pygeoapi output to an XDG-writable location
- move `base.yml` into `src/climate_api/` and load it via `importlib.resources`
`.env.example:1-5` and `src/climate_api/config.py:21-25`
The config bootstrap story is brittle for installed CLI usage.
The example config sets:
CLIMATE_API_CONFIG=./climate-api.yaml

`get_config_path()` resolves this relative to the caller’s current working directory. `climate-api.yaml` is a top-level repo file, not packaged runtime data. A user who installs the package and runs `climate-api` outside the repo root will get:

FileNotFoundError: CLIMATE_API_CONFIG not found: /current/cwd/climate-api.yaml

This breaks the intended installed-CLI workflow.
Suggested fix:
- update `.env.example` and docs to make the path semantics explicit
- preferably support a more durable bootstrap path, such as an XDG config location or a clearer example-based workflow
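To make the path semantics concrete, here is a sketch of the CWD-relative resolution that causes the problem; `resolve_config_path` is a hypothetical stand-in for the relevant part of `get_config_path()`:

```python
from pathlib import Path

def resolve_config_path(raw: str, cwd: Path) -> Path:
    """Resolve CLIMATE_API_CONFIG the way a naive implementation does.

    A relative value is anchored to the *caller's* working directory,
    which is why CLIMATE_API_CONFIG=./climate-api.yaml only works when
    the CLI is launched from the repo root.
    """
    p = Path(raw).expanduser()
    return p if p.is_absolute() else (cwd / p).resolve()
```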
climate-api.yaml
Shipping a live default extent is risky.
The committed default extent is Sierra Leone. If a deployer forgets to replace it, the instance runs against the wrong spatial scope silently. This is operationally risky.
Suggested fix:
Rename to climate-api.yaml.example and ignore the live file, mirroring .env.example.
src/climate_api/extents/services.py
get_extent_or_404 may now be dead code.
The GET /extents/{extent_id} route was removed and the instance model is now single-extent. If this helper is no longer used in the ingestion path, remove it. If it is still used indirectly, add a focused test to justify keeping it.
src/climate_api/data_registry/services/datasets.py
cache_info -> ingestion is a breaking change with no migration assist.
The validator now requires ingestion.eo_function, so older custom templates using cache_info fail immediately. The breaking change itself is acceptable, but there is no migration aid.
Suggested fix:
Add a startup-time warning when custom templates contain cache_info, to make the upgrade failure easier to diagnose.
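A sketch of the suggested startup-time warning, assuming templates are available as parsed dicts keyed by id; the names are illustrative:

```python
import warnings

def warn_legacy_cache_info(templates: dict) -> list:
    """Flag custom templates still using the pre-rename 'cache_info' block.

    Emitting this at startup turns a hard validation failure into a
    diagnosable deprecation message.
    """
    legacy = sorted(tid for tid, tpl in templates.items() if "cache_info" in tpl)
    for tid in legacy:
        warnings.warn(
            f"dataset template {tid!r} uses 'cache_info', which was renamed "
            f"to 'ingestion'; update the template",
            stacklevel=2,
        )
    return legacy
```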
src/climate_api/data_registry/services/datasets.py
The dataset registry validation message could be more precise.
The current error message conflates:
- a missing `ingestion` block
- an `ingestion` block present but missing `eo_function`
This is not a correctness bug, but splitting the messages would improve operator debugging for malformed custom templates.
src/climate_api/client.py
The new client implementation has a few small design debts.
Not blockers, but worth noting:

- `list_datasets()` derives `id` from `href` using string splitting; fragile if link shapes change
- each call creates a fresh `httpx` request rather than reusing a client/session
- the 30s timeout is hardcoded
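One way to address all three debts is a small settings object that a shared `httpx.Client` could hang off, plus an id helper that prefers an explicit field over href slicing. Everything here (`ClientSettings`, `dataset_id_from_link`) is a hypothetical sketch, not the module's actual API:

```python
import os
from dataclasses import dataclass
from urllib.parse import urljoin

@dataclass
class ClientSettings:
    """Shared settings for API calls: one place for base URL and timeout."""
    base_url: str
    timeout: float = 30.0  # configurable instead of hardcoded

    @classmethod
    def from_env(cls, base_url=None, **kw):
        # fall back to the documented environment variable
        url = base_url or os.environ.get("CLIMATE_API_BASE_URL")
        if not url:
            raise ValueError("pass base_url or set CLIMATE_API_BASE_URL")
        return cls(base_url=url.rstrip("/"), **kw)

    def url(self, path: str) -> str:
        return urljoin(self.base_url + "/", path.lstrip("/"))

def dataset_id_from_link(link: dict):
    """Prefer an explicit 'id' field; fall back to the last href segment."""
    if isinstance(link.get("id"), str):
        return link["id"]
    href = link.get("href")
    if isinstance(href, str) and href.rstrip("/"):
        return href.rstrip("/").rsplit("/", 1)[-1]
    return None
```

A single `httpx.Client(base_url=..., timeout=...)` built from such an object would replace the per-call `httpx.get()` requests.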
src/climate_api/data_manager/services/downloader.py
The downloader coordinate rename block is correct but subtle.
The coordinate normalization is correct. The reassignment to longitude / latitude immediately after rename is slightly non-obvious and would benefit from a short explanatory comment.
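A short sketch of the comment-worthy invariant: build the rename map only for aliases whose canonical name is not already present. The alias table is assumed from the coordinate names mentioned elsewhere in this PR:

```python
# canonical names the writer enforces; source-specific aliases on the left
_CANONICAL = {
    "valid_time": "time",
    "lat": "latitude",
    "lon": "longitude",
    "y": "latitude",
    "x": "longitude",
}

def coord_rename_map(dims) -> dict:
    """Build the mapping for ds.rename(...), skipping no-ops.

    Only rename a dimension when the canonical name is absent, so a
    dataset that ships both 'lat' and 'latitude' is not clobbered.
    """
    dims = set(dims)
    return {
        src: dst
        for src, dst in _CANONICAL.items()
        if src in dims and dst not in dims
    }
```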
Test Coverage
Overall coverage is strong and the new tests are useful.
Notable gaps:
- no tests for artifact-store path resolution / XDG writable behavior
- no tests for pygeoapi base-config packaging/runtime path
- no tests for CLI bootstrap with `CLIMATE_API_CONFIG` outside the repo root
- no focused test for whether `get_extent_or_404` remains live code
- if validation messages are split, add a test for a missing `ingestion` block vs a missing `ingestion.eo_function`
…safe

Fixes four issues that would break a pip-installed deployment:

- `ingestions/services.py`: `ARTIFACTS_DIR` now resolves to `XDG_DATA_HOME/climate-api/artifacts` (or `CACHE_OVERRIDE/artifacts`) instead of a package-relative path.
- `publications/services.py` + `startup.py`: `PYGEOAPI_DIR` now resolves to `XDG_DATA_HOME/climate-api/pygeoapi`. `startup.py` imports the constants from `publications.services` rather than computing its own path.
- `publications/services.py`: `_load_base_config()` now reads `base.yml` via `importlib.resources` rather than a `__file__`-relative path. `base.yml` is moved into `src/climate_api/data/pygeoapi/` so it is bundled inside the wheel.
- `climate-api.yaml` renamed to `climate-api.yaml.example` and added to `.gitignore`, mirroring the `.env.example` pattern. Deployers copy it before editing so their live extent config never lands in version control.

Also renames `ingestion.eo_function` to `ingestion.function` throughout (dataset YAMLs, downloader, data registry validation, docs, tests), adds a note to `downloader.py` explaining the coordinate rename invariant, and documents that `CLIMATE_API_CONFIG` must be an absolute path when running the installed CLI from a directory other than the repo root.

Tests added: XDG path resolution for `DOWNLOAD_DIR`, `ARTIFACTS_DIR`, and `PYGEOAPI_DIR`; `base.yml` loadable from the package; `datasets_dir` resolved relative to the config file location (covers pip install outside the repo).
All findings from @abyot's review (4241378691) have been addressed. Here is the complete status:
Test coverage gaps (from review):
Why
The Climate API was designed from the start for a single deployment scenario: clone the repo, edit files in place, run. This worked for early development but creates real problems as we move toward production deployments and want users to be able to install the package with `pip install climate-api`: repo-relative file lookups break because the installed package lands in `site-packages/` and the project root is no longer accessible.

This PR addresses these problems to make the package usable outside a source checkout.
What changed
Instance configuration via `CLIMATE_API_CONFIG` (closes #61)

A new `CLIMATE_API_CONFIG` environment variable points to a YAML file that lives outside the repository. This separates instance-specific configuration from the package itself, so the package can be upgraded without overwriting local config.

The extent is a single block per instance (not a list). The `GET /extent` endpoint returns it, or 404 if not configured. Dataset templates from `datasets_dir` are merged with the built-ins — a custom template with the same `id` overrides the built-in one.

Built-in dataset templates bundled inside the package
Previously, the built-in YAML templates (`chirps3.yaml`, `era5_land.yaml`, `worldpop.yaml`) lived in `data/datasets/` at the project root and were located by walking four directory levels up from the source file. This breaks when the package is installed with `pip install`, because the package ends up in `site-packages/` with no path to the original project root.

The YAMLs are now bundled inside the package at `src/climate_api/data/datasets/` and loaded via `importlib.resources`, which resolves the correct location regardless of how the package was installed.

Coordinate normalisation at write time
All Zarr datasets are now written with canonical coordinate names (`time`, `latitude`, `longitude`) regardless of what the upstream source uses (`valid_time`, `lat`/`lon`, `x`/`y`). This is enforced in `build_dataset_zarr()` for both flat and pyramid outputs.

Every downstream consumer — the client, the user guide, the OGC API — can now use `ds.latitude`, `ds.longitude`, `ds.time` without dataset-specific branching.

Python client for dataset discovery and access (closes #60)
A new `climate_api.client` module makes it possible to discover and open datasets without constructing URLs manually. Module-level functions (`list_datasets`, `open_dataset`) fall back to the `CLIMATE_API_BASE_URL` environment variable, so scripts work without hardcoding a URL.

`create_app()` factory function

The FastAPI application is now created via a `create_app()` factory, making it straightforward to embed the API in a larger application.

CORS credentials flag corrected
`allow_credentials` was incorrectly set to `True` alongside `allow_origins=["*"]`. This combination violates the CORS specification and is rejected by browsers. It is now set to `False`, which is correct for a public data API that does not use cookies or session tokens.

Dataset template field renamed: `cache_info` → `ingestion`

The `cache_info` block in dataset template YAMLs is renamed to `ingestion`. The `ingestion.eo_function` field is now required for all sync kinds, not just temporal ones.

Documentation
- `docs/setup_guide.md` — step-by-step instance setup from install to first ingestion
- `docs/user_guide.md` — consumer guide: STAC discovery, opening with xarray, subsetting
- `docs/adding_custom_datasets.md` — how to write a custom dataset template and wire it up
- `examples/stac_discover_and_open.py` and `examples/zarr_direct_access.py` — runnable examples using the client

Migration note
Existing datasets must be deleted and re-ingested. Coordinate normalisation only applies to newly written Zarr stores. Zarr files written before this PR will retain their original source coordinate names.
Rename `cache_info:` to `ingestion:` in any custom dataset YAML templates.

Test plan
- `make run` starts the API without errors
- `uv run examples/stac_discover_and_open.py` lists published datasets and prints dataset info
- `uv run examples/zarr_direct_access.py` opens a Zarr store and prints a spatial mean time series
- `from climate_api.client import Client; print(Client("http://127.0.0.1:8000").catalog())` works in a Python session
- `climate-api.yaml` serves the correct extent and built-in datasets
- `datasets_dir` with a custom YAML adds that dataset alongside the built-ins
- `make test` passes