# Known Issues

Global Nature Watch is an **experimental preview**. The platform combines AI assistance with a range of geospatial datasets, and both the underlying models and the data layers are actively evolving. Features, behaviors, data coverage, and outputs may change (sometimes significantly) between releases. You should always treat results as a starting point for investigation rather than as a finished analysis, and validate important findings against the original data sources before relying on them for decisions, reporting, or publication.

This page collects issues our team is currently aware of and working through. It is not exhaustive, and new issues are identified regularly. If you come across something that looks wrong or confusing, please let us know through the feedback tools in the platform (your reports help us prioritise).

### How the assistant interprets your question

The AI assistant does not always interpret requests in the way a user might expect. You may notice:

* Date ranges in the response that do not match the date range you asked about, or totals that cover a wider span than intended.
* The assistant answering about a specific year when you did not specify one, or asking for a year when a reasonable default could have been used.
* Two similarly worded questions returning noticeably different answers.
* Occasional responses that do not clearly address the question that was asked.

If a response feels off, try rephrasing your question, be explicit about the time period, location, and topic, and check that the parameters shown in the response match what you intended.

### Which dataset the assistant chooses <a href="#which-dataset-the-assistant-chooses" id="which-dataset-the-assistant-chooses"></a>

The assistant selects datasets automatically based on your question. In some cases the selection may not be the most appropriate one for your use case. In particular:

* Queries about forest loss or deforestation may be answered using alert-style datasets (near-real-time disturbance signals) rather than annual loss datasets, or vice versa. These measure different things and are not directly interchangeable.
* The assistant may occasionally suggest an analysis or dataset that is not actually available on the platform.
* For some topics (for example, certain carbon storage questions), the most suitable dataset may not yet be integrated and a related dataset will be used instead.
* The assistant does not always select a more specific dataset (such as a primary-forest variant) when that would be the more appropriate choice for a tropical or sub-tropical query.

Where the choice of dataset matters to your interpretation, we recommend confirming which dataset was actually used and whether it is fit for the question you are asking.

### Numbers and calculations <a href="#numbers-and-calculations" id="numbers-and-calculations"></a>

Numerical results should be treated with care during the preview. Known points to be aware of:

* Figures on Global Nature Watch may not always match equivalent figures you see elsewhere, even when the underlying dataset is the same. Differences can come from how the area, time period, or filters are applied.
* In some combinations of filters (for example, certain land-mask options), results can move in an unexpected direction compared with the unfiltered query.
* Emissions and removals queries occasionally return incomplete results, omit parts of the time series, or describe outputs in ways that do not fully match what was computed.

If a number looks surprising or important to a decision, please cross-check it against the original dataset rather than relying on the in-platform figure alone.

### Charts and visualizations <a href="#charts-and-visualisations" id="charts-and-visualisations"></a>

You may occasionally see inconsistencies between the chart and the accompanying narrative, or within the chart itself. Examples include:

* Chart titles or captions that reference a dataset, metric, or series that is not actually shown on the chart.
* Axis labels, units, legends, or sign conventions that are incorrect, missing, or inconsistent.
* Charts that render with missing or empty data even though the accompanying text implies results exist.
* Differences in how the same underlying data is visualized here compared with other products.
* Occasional failures to render a chart, shown as an error message in the chat. The assistant is usually able to continue its analysis and interpretation from the underlying data, so the written response should still be useful. If the chart itself does not appear, asking the assistant to generate it again usually works.

When in doubt, the textual narrative and the underlying data values (not the chart formatting) are the better reference.

### Language, terminology, and tone <a href="#language-terminology-and-tone" id="language-terminology-and-tone"></a>

Because the assistant generates text, the way results are described is not always scientifically precise. We are actively refining this, but you may still see:

* Terms such as "deforestation", "forest loss", and "disturbance" used less precisely than their technical definitions require (for example, certain disturbance signals being described as deforestation when they may represent other processes such as fire).
* Subjective or evaluative language (for example, words like "significant") in places where a more neutral description is appropriate.
* Hedging or softened language where a direct statement would be clearer, and vice versa.
* Conclusions that combine or generalize across categories in ways the underlying data does not strictly support.
* Outputs that touch on sensitive topics (carbon credits, wildfire interpretation, conservation status) where the assistant's framing may not yet be sufficiently careful.

Please read the assistant's wording as AI-generated narrative, not as expert interpretation.

### Sources, citations, and dataset information <a href="#sources-citations-and-dataset-information" id="sources-citations-and-dataset-information"></a>

The platform is not yet fully reliable in describing the data behind an answer:

* Citations and dataset references may occasionally be incomplete, incorrect, or generated rather than drawn from the actual source.
* Dataset cards, metadata, and temporal coverage statements may not always match the dataset that was used, including availability dates.
* Some analyses draw on a contributing dataset that is not explicitly named in the response.
* Internal configuration text occasionally appears in user-facing metadata where it should not.

If you intend to cite an output from the platform, please verify the source details against the dataset's own documentation.

### Locations and maps <a href="#locations-and-maps" id="locations-and-maps"></a>

The way the assistant resolves a place name to a specific area on the map can be inconsistent:

* Common or ambiguous place names may resolve to a different location than you intended, particularly where multiple places share a name or where the match is partial.
* Some protected areas or administrative units are not yet reliably found by name.
* In certain named locations a very small or unrepresentative area may be selected.
* Some regions currently see a higher rate of query failures than others.
* Information describing the satellite imagery currently shown on the map is not always accurate.

Where the exact area matters (for example, for a named protected area or concession), please confirm visually on the map before drawing conclusions.

### Messages shown in the application <a href="#messages-shown-in-the-application" id="messages-shown-in-the-application"></a>

A few rough edges in the interface itself:

* Technical error messages from underlying tools may sometimes appear in the chat even when the assistant has recovered and continued successfully. These can usually be ignored if a complete answer follows.
* Some staging or intermediate status messages may be phrased in ways that are unclear to users outside the team.

### Reporting something new <a href="#reporting-something-new" id="reporting-something-new"></a>

This list is not complete, and we would rather hear about an issue than not.

Please continue to flag anything that looks wrong, confusing, or inconsistent through the feedback options in the platform: [Report a bug](https://surveys.hotjar.com/860def81-d4f2-4f8c-abee-339ebc3129f3) or [contact us](https://landcarbonlab.org/contact/). Each report helps us prioritize what to fix next. Thanks!


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://help.globalnaturewatch.org/resources/known-issues.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
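
For example, a minimal Python sketch of such a query might look like the following. The URL and the `ask` query parameter are taken from the instructions above; the exact response format is not specified here, so the sketch simply reads the body as plain text.

```python
# Minimal sketch: query this documentation page with a natural-language question.
# Assumes the response body can be read and printed as UTF-8 text.
from urllib.parse import urlencode
from urllib.request import urlopen

BASE_URL = "https://help.globalnaturewatch.org/resources/known-issues.md"

def ask_docs(question: str) -> str:
    """Send a natural-language question via the `ask` query parameter."""
    url = f"{BASE_URL}?{urlencode({'ask': question})}"
    with urlopen(url) as response:
        return response.read().decode("utf-8")

if __name__ == "__main__":
    # Hypothetical example question; any specific, self-contained question works.
    print(ask_docs("Which datasets does the assistant use for deforestation queries?"))
```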
