# AI Feature API Getting Started Guide¶

This guide provides practical assistance to developers and data scientists getting up to speed on the AI Feature API. Formal documentation is available at https://docs.nearmap.com/display/ND/NEARMAP+AI, and the swagger spec (detailed information for developers) is available via the Knowledge Hub page.

To make use of these examples, you will need:

• Nearmap AI, the AI Feature API, and at least one relevant AI Pack available on your subscription.
• Your user account enabled by your administrator to use Nearmap AI, including the AI Feature API.

# Prepare Requested Parcel Polygons¶

Here we prepare a set of custom parcel polygons for use with the API. Each one represents a custom "Query AOI", which in this case represents one property parcel.

Note that all our AI data is in EPSG:4326 latitude/longitude coordinates. You may wish to take the epoch into account (as at the survey_date provided in a given example) if transforming into a local coordinate reference system, to get the best geospatial accuracy. As presented, these results will align perfectly, without adjustment, with the Nearmap imagery survey from which they were produced.
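As a starting point, here is a minimal sketch of serialising a parcel's outer ring into the comma-separated `lon1,lat1,lon2,lat2,...` string the API expects (the helper name is ours, not part of any library):

```python
def polygon_to_query_string(ring):
    """Serialise an outer ring of (lon, lat) pairs to lon1,lat1,lon2,lat2,...

    Coordinates must be EPSG:4326 longitude/latitude, in that order.
    """
    # Close the ring if the caller has not already done so.
    if ring[0] != ring[-1]:
        ring = list(ring) + [ring[0]]
    return ",".join(f"{lon},{lat}" for lon, lat in ring)
```

If your parcels live in a GIS, export the exterior ring coordinates and pass them through this helper before building the request.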

# Discover Available Classes: The "classes.json" endpoint¶

The "classes" endpoint is simple - it doesn't take any query parameters, and simply provides the list of IDs and Descriptions for every feature class you have access to on your account. If you add a new AI Pack, or we add new feature classes to your existing AI Packs, this will expand over time.
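A call to this endpoint can be sketched with the standard library as below. The exact path and version segment (`ai/features/v4`) is an assumption on our part; confirm it against the swagger spec before use.

```python
import json
import urllib.parse
import urllib.request

# Endpoint path is illustrative -- confirm the path/version against the swagger spec.
CLASSES_URL = "https://api.nearmap.com/ai/features/v4/classes.json"

def classes_request_url(apikey):
    """Build the classes.json request URL (apikey is the only parameter)."""
    return CLASSES_URL + "?" + urllib.parse.urlencode({"apikey": apikey})

def list_classes(apikey):
    """Fetch the id/description list of feature classes on this subscription."""
    with urllib.request.urlopen(classes_request_url(apikey)) as resp:
        return json.load(resp)["classes"]
```

Usage: `for c in list_classes(os.environ["API_KEY"]): print(c["id"], c["description"])`.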

# Bring Your Own Parcels: The "features.json" endpoint¶

This endpoint is the best one to use if you have access to your own parcel database. Usually, this will either be a local government entity who manages their own parcels, a parcel provider, or a larger organisation such as an insurance carrier with some level of GIS capability in house that has aligned with a particular provider's parcels as their internal source of truth for property boundaries. No property boundary data is used behind the scenes to produce this data, other than an "on-the-fly" query with the polygon you provide to the endpoint.

## Get a first API Response¶

These examples assume you have the API_KEY environment variable set to your Nearmap API Key. This can be the same key used to access imagery, as long as it points to the subscription on which you have Nearmap AI set up.

### Input Parameters¶

The key parameters of interest are:

• polygon: A comma separated list of lon1,lat1,lon2,lat2... in EPSG:4326, representing the "Query AOI". Multi-polygons are not supported; the polygon must be a single outer ring. Highly complex geometries with many points may take longer to query, as will larger Query AOIs.
• since: (optional) instructs the API to ignore any AI results prior to date yyyy-mm-dd. Default is unrestricted.
• until: (optional) instructs the API to ignore any AI results after date yyyy-mm-dd. Default is unrestricted. The most recent processed AI results will be returned, optionally constrained by since and until (if both are provided, the most recent processed result within that window is returned).
• packs: (optional) restricts data to certain AI Packs as described in https://docs.nearmap.com/display/ND/AI+Packs. Default is to provide data for all AI Packs enabled on your account.
• apikey: Nearmap API Key.
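The parameters above can be assembled into a request as follows. As with the classes example, the path/version segment is our assumption; verify it against the swagger spec.

```python
import json
import urllib.parse
import urllib.request

# Endpoint path is illustrative -- confirm the path/version against the swagger spec.
FEATURES_URL = "https://api.nearmap.com/ai/features/v4/features.json"

def features_request_url(polygon, apikey, since=None, until=None, packs=None):
    """Build a features.json URL from the parameters described above."""
    params = {"polygon": polygon, "apikey": apikey}
    if since:
        params["since"] = since          # yyyy-mm-dd
    if until:
        params["until"] = until          # yyyy-mm-dd
    if packs:
        params["packs"] = ",".join(packs)
    return FEATURES_URL + "?" + urllib.parse.urlencode(params)

def query_features(polygon, apikey, **kwargs):
    """Execute the query and return the parsed JSON payload."""
    with urllib.request.urlopen(features_request_url(polygon, apikey, **kwargs)) as resp:
        return json.load(resp)
```

Usage: `payload = query_features("-87.65,41.87,...", os.environ["API_KEY"], since="2021-01-01")`.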

Let's go through each of the top level sections of the response payload (i.e. the keys in the json file), and understand them as we go.

#### credits¶

The number of Nearmap AI credits used in the query. Typically, a single property will correspond to 1 credit. Larger or composite custom parcels may consume more credits. Buffering the parcel polygon to check what is present nearby will also consume more credits (a single property surrounded by a grid of 9 properties will consume 10 credits if a buffer the size of a whole property is used). Note that credits will only be charged if a valid response is returned (e.g. if no AI data matches the query, an empty result is returned, and no credits are charged).

#### systemVersion¶

The version of the data (which includes the machine learning model, post processing algorithms, and processing pipeline configuration). This is tied to the data, rather than the API (different requests from the same endpoint may retrieve different versions of data at various times/locations). A detailed changelog of AI System data versions can be found on the Knowledge Hub at https://docs.nearmap.com/display/ND/AI+Content.

New attributes and changes are introduced from time to time, so please consider carefully how your application deals with versions.

#### link¶

A link to the location in Map Browser (based on both image capture date and lat/lon of parcel centre). This is exceptionally useful for troubleshooting, understanding whether the Nearmap AI result or some other internal data source is correct, and understanding why the Nearmap AI result may be wrong in some circumstances. We recommend enabling the relevant AI Layers after clicking the link, exploring different dates using the date picker, or using the 3D or oblique views.

#### features¶

The "features" list is the meat of the payload, containing all the geospatial objects (features) and information about them (attributes) in a flat list. Each feature has:

• surveyDate is the date of the survey which captured the image used in producing the data. All Nearmap AI data is calculated against a specific point in time.
• A unique id that identifies the feature, which is very useful for deduplicating identical features returned from nearby API requests, or joining cropped "large objects" in the surfaces and vegetation packs.
• A classId and description that describe what type of feature it is. The classId should be used within code (as a persistent ID for all features of that type), and the description is the human readable reference (which is not guaranteed to be persistent).
• confidence is the probability (float range 0-1) that the feature is in fact a real object of type classId. This is useful for determining a threshold for objects such as buildings. Typical confidence distributions can be found in https://docs.nearmap.com/display/ND/AI+Packs#AIPacks-Confidencetalk-677. For example, most solar panels are well above 90% confidence - solar panels with 50-70% confidence may either be difficult to spot (high glare, shadow etc.), or be a similar type of object such as a solar hot water system or skylight.
• parentId describes the relationship between features. All features within the custom parcel polygon are provided in a flat list, and the parentId is blank for many features. However, features that are hierarchically nested (a roof on a building; a tree overhang, roof material, roof type or solar panel on a roof) carry their container's id as the parentId. For example, a solar panel on a roof will have that roof's id as its parent, while a solar panel on the ground will have an empty parentId. This is a little more nuanced in structure than simple spatial overlap queries (e.g. tree overhang parents are roofs, and roof parents are buildings, despite them all overlapping).
• geometry: The geometry is an EPSG:4326 polygon describing the extent of the feature in 2D.
• attributes: Some features have particular metadata calculated about them. Buildings, for example, have a Roof Material attribute, which has a dominantComponent (the best estimate of the material covering the majority of the roof) as well as components describing the proportion and confidence of each material present on the roof.
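Putting the id, confidence and parentId fields together, a sketch like the following pairs each confident solar panel with the roof it sits on. The class descriptions ("Solar Panel") and the 0.8 threshold are illustrative; match on classId in production code.

```python
def roofs_with_solar(payload, min_confidence=0.8):
    """Return (panel_id, roof_id_or_None) pairs from a features.json payload.

    Matching on `description` here is for readability only -- classId is the
    persistent identifier and should be used in real code.
    """
    # Index the flat feature list by id so parentId lookups are O(1).
    by_id = {f["id"]: f for f in payload["features"]}
    pairs = []
    for f in payload["features"]:
        if f["description"] == "Solar Panel" and f["confidence"] >= min_confidence:
            # Ground-mounted panels have an empty parentId.
            parent = by_id.get(f.get("parentId") or "")
            pairs.append((f["id"], parent["id"] if parent else None))
    return pairs
```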

The full payload for a single Query AOI can be loaded into a table or GeoDataFrame, which is a convenient way of visualising the results and exporting them to a geospatial file for use with a GIS system.
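One dependency-free way to get the payload into a GIS is to re-shape it into a GeoJSON FeatureCollection. This assumes each feature's geometry field is GeoJSON-like, which matches the EPSG:4326 polygon geometry described above:

```python
import json

def payload_to_geojson(payload):
    """Re-shape a features.json payload into a GeoJSON FeatureCollection.

    Assumes each feature's "geometry" value is already a GeoJSON geometry
    object; everything else becomes feature properties.
    """
    features = [
        {
            "type": "Feature",
            "geometry": f["geometry"],
            "properties": {k: v for k, v in f.items() if k != "geometry"},
        }
        for f in payload["features"]
    ]
    return {"type": "FeatureCollection", "features": features}

# Usage:
#     with open("parcel.geojson", "w") as fh:
#         json.dump(payload_to_geojson(payload), fh)
```

The resulting file opens directly in QGIS, ArcGIS, or geopandas.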

# Explore some examples¶

Now we move through each of the sample parcel boundaries, to show a variety of cases.

### Example ID 1 (Chicago Houses)¶

This example is an artificial hand drawn boundary around approximately 3 residential properties. It showcases a common issue with parcel boundaries. Where the parcel boundary is shifted relative to ground, it is reasonably common to retrieve a feature that clearly belongs to a neighbouring parcel, or to have a legitimate feature that extends beyond the queried parcel by some amount. While the on-the-ground accuracy of the imagery and AI can cause this, it is more commonly caused by parcel boundaries that have been digitised incorrectly (and there is a high degree of local variability in this!). One method to deal with this is to ignore any objects where area of the intersection of the feature and the parcel is small compared to the area of the feature, AND small compared to the size of the parcel. This allows you to retain small features that lie almost completely within the parcel, as well as large multi-parcel buildings, where enough of the building protrudes into the parcel that the parcel "substantially" contains some of the building.
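The filtering rule described above can be sketched as a pure function over the three areas involved. The 0.5 and 0.05 thresholds are illustrative defaults, not recommendations from the API:

```python
def keep_feature(feature_area, intersection_area, parcel_area,
                 min_frac_of_feature=0.5, min_frac_of_parcel=0.05):
    """Keep a feature unless its clipped part is small relative to BOTH
    the feature itself AND the parcel.

    - A small shed almost entirely inside the parcel passes the first test.
    - A large multi-parcel building protruding substantially into the
      parcel passes the second test.
    - A sliver of a neighbour's feature fails both and is discarded.
    Thresholds are illustrative; tune them against your own parcel data.
    """
    return (intersection_area / feature_area >= min_frac_of_feature
            or intersection_area / parcel_area >= min_frac_of_parcel)
```

The intersection and feature areas can be computed with any geometry library (e.g. shapely) from the returned EPSG:4326 polygons.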

### Example ID 2 (Seattle)¶

This is a waterfront property, with a large, reasonably complex residential home. Note that this was a hand drawn polygon, and it uses two credits, because it covers what are technically two smaller parcels. We then calculate the number of square metres of water body within 30 metres of the building.
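The water-near-building calculation can be sketched with shapely (a third-party dependency). Note the crude degrees-to-metres scaling: it is fine for a sanity check at a single latitude, but reproject to a local CRS for survey-grade figures. Class descriptions are illustrative; match on classId in real code.

```python
from shapely.geometry import shape
from shapely.ops import unary_union

def water_area_near_building(payload, distance_m=30.0, m_per_deg=111_320.0):
    """Approximate m^2 of water body within distance_m of the largest building.

    Quick-and-dirty: buffers in degrees using a single metres-per-degree
    factor, which ignores latitude-dependent longitude scaling. Reproject
    to a local CRS for accurate results.
    """
    feats = payload["features"]
    buildings = [shape(f["geometry"]) for f in feats if f["description"] == "Building"]
    water = unary_union([shape(f["geometry"]) for f in feats
                         if f["description"] == "Water Body"])
    # Buffer the largest building footprint by ~30 m and intersect with water.
    building = max(buildings, key=lambda g: g.area)
    near = building.buffer(distance_m / m_per_deg)
    return near.intersection(water).area * m_per_deg ** 2
```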

### Example ID 3 (Omaha Townhouse)¶

This is a single parcel for a townhouse where the building extends over three blocks. It's important not to discard buildings like this, and to make an explicit decision on whether to "crop" the building to the parcel (keeping the part belonging to this owner) or to analyse the building as a whole.

In this example, the building passes the above tests that we want it included, but an explicit decision is needed - are we interested in the whole physically connected building (which exists substantially both within and without the parcel), or just that part which falls within the parcel?
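Both options can be kept open by computing the whole and cropped footprints side by side. A minimal sketch with shapely (a third-party dependency; function names are ours):

```python
from shapely.geometry import Polygon, shape

def building_areas(building_geojson, parcel_ring):
    """Return (whole_area, cropped_area) for a building against a parcel.

    whole_area   -- the physically connected building footprint.
    cropped_area -- only the part falling within the parcel polygon.
    Areas are in the units of the input coordinates (reproject for m^2).
    """
    building = shape(building_geojson)
    parcel = Polygon(parcel_ring)
    return building.area, building.intersection(parcel).area
```

Keeping both numbers lets downstream logic choose per use case, e.g. whole-building area for reconstruction cost, cropped area for ownership attribution.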

### Example ID 4 (Omaha School)¶

This is a large hand drawn parcel around a school and its campus to show a typical payload from a large, complex set of buildings returning in one query.