I was showing my earlier PRISM/matplotlib examples to a student and their immediate response was to ask how hard it would be to add state outlines to the map without using Basemap. A quick Google/Stack Overflow search suggested that it shouldn’t be too hard (link, link, link, link), as long as the state outlines were in the same coordinate system/projection as the raster data.
The first step was to read in the PRISM array and compute the spatial extent from the HDR file, which is needed to set the imshow extent.
import numpy as np

def read_prism_hdr(hdr_path):
    """Read an ESRI BIL HDR file"""
    with open(hdr_path, 'r') as input_f:
        header_list = input_f.readlines()
    return dict(item.strip().split() for item in header_list)

prism_path = 'PRISM_tmean_30yr_normal_4kmM2_annual_bil.bil'
header_dict = read_prism_hdr(prism_path.replace('.bil', '.hdr'))
prism_nodata = float(header_dict['NODATA'])
prism_rows = int(header_dict['NROWS'])
prism_cols = int(header_dict['NCOLS'])

# Assume the cells are square and have the same cellsize in each direction
prism_cs = float(header_dict['XDIM'])
# prism_cs = float(header_dict['YDIM'])

# Build extent as xmin, xmax, ymin, ymax
# ULXMAP and ULYMAP are the upper left cell center
prism_extent = [
    float(header_dict['ULXMAP']) - 0.5 * prism_cs,
    float(header_dict['ULXMAP']) - 0.5 * prism_cs + prism_cols * prism_cs,
    float(header_dict['ULYMAP']) + 0.5 * prism_cs - prism_rows * prism_cs,
    float(header_dict['ULYMAP']) + 0.5 * prism_cs
]

# Read in the array
prism_array = np.fromfile(prism_path, dtype=np.float32)
prism_array = prism_array.reshape(prism_rows, prism_cols)
prism_array[prism_array == prism_nodata] = np.nan
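The header parsing and extent math can be checked without the full PRISM download. Here is a small self-contained sketch using a made-up two-row, three-column HDR (the values are hypothetical, not real PRISM metadata):

```python
import os
import tempfile

def read_prism_hdr(hdr_path):
    """Read an ESRI BIL HDR file into a dict of keyword/value strings"""
    with open(hdr_path, 'r') as input_f:
        header_list = input_f.readlines()
    return dict(item.strip().split() for item in header_list)

# Hypothetical 3 column x 2 row grid with 0.5 degree cells,
# upper left cell CENTER at (-120.0, 45.0)
hdr_text = (
    'NROWS 2\n'
    'NCOLS 3\n'
    'XDIM 0.5\n'
    'YDIM 0.5\n'
    'ULXMAP -120.0\n'
    'ULYMAP 45.0\n'
    'NODATA -9999\n'
)
hdr_path = os.path.join(tempfile.mkdtemp(), 'test.hdr')
with open(hdr_path, 'w') as f:
    f.write(hdr_text)

hd = read_prism_hdr(hdr_path)
cs = float(hd['XDIM'])
cols = int(hd['NCOLS'])
rows = int(hd['NROWS'])

# Shift from the upper left cell center to the cell edges
# to get the extent as xmin, xmax, ymin, ymax
extent = [
    float(hd['ULXMAP']) - 0.5 * cs,
    float(hd['ULXMAP']) - 0.5 * cs + cols * cs,
    float(hd['ULYMAP']) + 0.5 * cs - rows * cs,
    float(hd['ULYMAP']) + 0.5 * cs,
]
print(extent)  # [-120.25, -118.75, 44.25, 45.25]
```

Note that the xmax and ymin values are computed by stepping a whole number of cells from the upper left edge, so round-off in the cellsize doesn't accumulate differently than it would in the raster itself.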
My first approach was basically copied from this script. I used the 1:20,000,000 state shapefile from the US Census Cartographic Boundaries website. I installed gdal, fiona, descartes, and shapely using the wheel files built by Christoph Gohlke, since none of the modules would install with conda (on my Windows computer). I had to use GDAL 1.11.4 instead of 2.0.2, since otherwise fiona raised an error when reading the shapefile.
from descartes import PolygonPatch
import fiona
import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
from shapely.geometry import MultiPolygon, shape

# Read the polygon geometries from the shapefile
shp_path = 'cb_2014_us_state_20m.shp'
mp = MultiPolygon(
    [shape(pol['geometry']) for pol in fiona.open(shp_path, 'r')])

fig = plt.figure()
ax = fig.add_subplot(111)
ax.imshow(prism_array, cmap='viridis', extent=prism_extent)
patches = [PolygonPatch(p, fc='none', ec='0.1', zorder=1) for p in mp]
ax.add_collection(PatchCollection(patches, match_original=True))
plt.grid()
plt.show()
The problem with this combination of approach and dataset is that the state features are multi-polygons, but only the first polygon is read when a feature is a multi-polygon. As you can see, some of the states with more than one polygon are missing pieces (California, Michigan, Florida, etc.).
The following approach will check whether the feature is a polygon or multi-polygon. If it is a multi-polygon, it will read each polygon geometry from the multi-polygon.
geom_list = []
with fiona.open(shp_path, 'r') as cell_f:
    for feature in cell_f:
        ftr_geom = shape(feature['geometry'])
        if ftr_geom.is_empty:
            continue
        elif ftr_geom.geom_type == 'MultiPolygon':
            # Unpack multi-polygons to lists of polygons
            geom_list.extend([g for g in ftr_geom if not g.is_empty])
        elif ftr_geom.geom_type == 'Polygon':
            geom_list.append(ftr_geom)
        else:
            print('Invalid geometry type')
            continue
mp = MultiPolygon(geom_list)

fig = plt.figure()
ax = fig.add_subplot(111)
ax.imshow(prism_array, cmap='viridis', extent=prism_extent)
patches = [PolygonPatch(p, fc='none', ec='0.1', zorder=1) for p in mp]
ax.add_collection(PatchCollection(patches, match_original=True))
plt.grid()
plt.show()
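The same unpacking logic can be exercised without fiona or shapely installed, since fiona hands back each feature's geometry as a plain GeoJSON-style dict. This sketch (with made-up coordinates) flattens a mix of Polygon and MultiPolygon geometries into a single list of polygon coordinate arrays:

```python
# GeoJSON-style geometry dicts, as returned by fiona for each feature
# (coordinates are made up for illustration)
features = [
    {'type': 'Polygon',
     'coordinates': [[(0, 0), (1, 0), (1, 1), (0, 0)]]},
    {'type': 'MultiPolygon',
     'coordinates': [
         [[(2, 0), (3, 0), (3, 1), (2, 0)]],
         [[(4, 0), (5, 0), (5, 1), (4, 0)]],
     ]},
]

# Flatten to a list of single-polygon coordinate arrays
polygons = []
for geom in features:
    if geom['type'] == 'MultiPolygon':
        # Each element of a MultiPolygon's coordinates is itself
        # a complete polygon coordinate array
        polygons.extend(geom['coordinates'])
    elif geom['type'] == 'Polygon':
        polygons.append(geom['coordinates'])
    else:
        print('Skipping geometry type: {}'.format(geom['type']))

print(len(polygons))  # 3 polygons from 2 features
```

The one Polygon feature plus the two parts of the MultiPolygon feature yield three polygons, which is exactly what the shapefile loop above does with shapely geometries instead of raw coordinate lists.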
I will definitely try this again and see if there is a simpler approach or one that relies on fewer modules.