Data and Scripts
for Hydrological Streamline Detection Using a U-Net Model

Zewei Xu$^{1,2,4}$; Nattapon Jaroenchai$^{1,2,4}$; Arpan Man Sainju$^{3,5}$; Li Chen$^{1,2,4}$; Zhiyu Li$^{1,2,4}$; Larry Stanislawski$^{6}$; Ethan Shavers$^{6}$; Bin Su$^{1,2,4}$; Zhe Jiang$^{3,5}$; Shaowen Wang$^{1,2,4}$

$^{1}$CyberGIS Center for Advanced Digital and Spatial Studies
$^{2}$Department of Geography and Geographic Information Science
$^{3}$Department of Computer Science
$^{4}$University of Illinois at Urbana-Champaign
$^{5}$University of Alabama
$^{6}$U.S. Geological Survey

Corresponding Author: zeweixu2@illinois.edu


Notebook Structure:


Introduction

Surface water is an irreplaceable strategic resource for human survival and social development. Accurate delineation of hydrologic streamlines, which represent one of the major forms of land surface water, is critically important for various scientific disciplines, such as water resources assessment, climate modeling, agricultural suitability, river dynamics, wetland inventory, watershed analysis, surface water surveying and management, flood mapping, and environmental monitoring.

Traditional hydrological models depend largely on topographic information, which inevitably contains errors. For example, when deriving drainage lines from elevation, dry drainage channels are likely to be falsely recognized as streamlines, and roads or bridges can act as barriers that force flowlines along the roadway rather than under the bridge or through a culvert. Traditional methods also generally ignore information about the complex 3D environment of stream channels as well as surface reflectance, both of which are potentially very useful for accurately delineating streamlines. In recent years, the availability of high-accuracy LiDAR data has provided a promising way to capture both the 3D structure of the environment and land surface reflectance. Terrestrial LiDAR sensors use near-infrared (NIR) light in the form of a pulsed laser to measure range (variable distances) to a surface and reflectance intensity for multiple returns. These light pulses generate precise, three-dimensional information about the shape and characteristics of reflecting surfaces.

In this research, multiple LiDAR feature maps are generated, and we developed a deep learning model based on the U-Net architecture with an attention mechanism for streamline detection. We also compared the deep learning model with several traditional machine learning methods as baselines. Advanced cyberinfrastructure is used for computing power, and CyberGIS-Jupyter is applied for research reproducibility. An accuracy evaluation indicates that the attention U-Net model outperforms the best baseline method by 8.08% in F1-score on average, and it provides better smoothness and connectivity of the predicted streamlines.
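
The model architecture is not reproduced in this section; the following is a minimal sketch of an attention-gated U-Net, assuming TensorFlow 2.x/Keras, where the input shape (stacked feature-map bands), depth, filter counts, and gate design are illustrative assumptions rather than the exact configuration used in this study.

## A minimal sketch of an attention-gated U-Net for per-pixel streamline
## classification (illustrative assumptions only; not the exact architecture
## trained in this study)
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def attention_gate(skip, gate, inter_filters):
    # Additive attention gate: the decoder signal ("gate") re-weights the
    # encoder skip connection before it is concatenated into the decoder
    theta = layers.Conv2D(inter_filters, 1)(skip)
    phi = layers.Conv2D(inter_filters, 1)(gate)
    attn = layers.Activation("relu")(layers.add([theta, phi]))
    attn = layers.Conv2D(skip.shape[-1], 1, activation="sigmoid")(attn)
    return layers.multiply([skip, attn])

def build_attention_unet(input_shape=(224, 224, 8)):
    # input_shape assumes 8 stacked LiDAR feature-map bands (an assumption)
    inputs = layers.Input(shape=input_shape)
    e1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(e1)
    e2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(e2)
    b = conv_block(p2, 128)                                   # bottleneck
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    d2 = conv_block(layers.concatenate([u2, attention_gate(e2, u2, 32)]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(d2)
    d1 = conv_block(layers.concatenate([u1, attention_gate(e1, u1, 16)]), 32)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(d1)   # streamline probability
    return Model(inputs, outputs)

model = build_attention_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])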

Research Objectives

  • To implement a U-Net Convolutional Neural Network (CNN) model with cyberinfrastructure to identify streamlines and diffuse knowledge (reproducibility!)
  • To understand the role of parameterization in model performance
  • To compare the results and accuracy from the U-Net CNN model with those from traditional methods for streamline identification (see the evaluation sketch after this list)
  • To build the foundation for a model that can more accurately and efficiently identify streamlines and is scalable (replicability!)
  • To further the discourse of R&R in CyberGIScience, and to develop lifelong collaborative partners and friends
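
To make the comparison concrete, model accuracy can be summarized with the F1-score over pixels inside the research area, as in the sketch below; the placeholder arrays and the use of scikit-learn are assumptions for illustration, not the project's actual evaluation code.

## Illustrative F1-score evaluation (a sketch with placeholder arrays;
## not the project's actual evaluation code)
import numpy as np
from sklearn.metrics import f1_score

reference = np.random.randint(0, 2, size=(512, 512))  # 1 = streamline, 0 = non-streamline
predicted = np.random.randint(0, 2, size=(512, 512))  # binary prediction from some model
mask = np.ones((512, 512), dtype=bool)                # True = inside the research area

# Score only the pixels inside the research area
score = f1_score(reference[mask], predicted[mask])
print("F1-score:", score)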

Input Dataset

Data format: las (raw LiDAR data), tiff (feature maps), and numpy array (organized data)

  • Panther_LiDAR: Stores LiDAR point cloud data (.las) (e.g., IA_Statewide_2008_009081.las; naming format: State_projection_year_pathlines)

  • Rowan_LiDAR: Stores LiDAR point cloud data (e.g., 0_1.las; naming format: row_number+column_number from the top-left corner of the research area), along with organized LiDAR feature maps and reference data under the data folder.

  • mask.npy: The mask of the research area; 1 indicates the research area, 0 indicates the outside area.

  • reference.npy: numpy array format of the reference data; 1 indicates streamlines, 0 indicates non-streamlines.

  • total.npy: the combined data of all feature maps used for prediction.

  • TIFF: All of the feature maps are derived from the ground return of LiDAR point clouds:

    (1) DEM (digital elevation model): a 1-m resolution DEM derived from the ground return points.

    (2) Curvature map: geometric curvature determined from the DEM. Geometric curvature is determined using GeoNet software (Sangireddy, et al., 2016). The software applies the non-linear diffusion Perona-Malik filter on the DEM to remove noise and sharpen the localization of channels (Passalacqua et al., 2010).

    (3) TPI1: a topographic position index (TPI) derived from the DEM using a 3-cell by 3-cell window. The TPI value of a cell is the difference between the cell elevation and the local average elevation within a specific radius or within a surrounding window of cells (De Reu et al., 2013); a short sketch of this computation appears after this list.

    (4) TPI2: a TPI derived from the DEM using a 21-cell by 21-cell window.

    (5) Geomorphon openness: zenith angle openness derived from the DEM using a 10-cell radius with 32 directions (Doneus, 2013).

    (6) Intensity: return intensity determined from the LiDAR ground points, averaged with inverse distance weighting using the 10 nearest points.

    (7) DSM1: point density for return points between zero and 1 foot above ground.

    (8) DSM2: point density for return points between 0 and 3 feet above ground.

The feature map stack thus consists of: curvature, DSM1, DSM2, geomorphon openness, LiDAR ground reflectance (intensity), TPI1, TPI2, and the reference image.
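
As a hypothetical illustration of the TPI definition in item (3) above (the study's actual TPI rasters were produced with the tools cited there), the index can be approximated as the difference between each cell's elevation and the mean elevation within a surrounding window:

## Illustrative TPI computation (a sketch; not the actual preprocessing chain)
import numpy as np
from scipy.ndimage import uniform_filter

dem = np.random.rand(100, 100).astype(np.float32)  # placeholder DEM tile

def tpi(dem_array, window):
    # TPI = cell elevation minus the local mean elevation in a window x window neighborhood
    local_mean = uniform_filter(dem_array, size=window, mode="nearest")
    return dem_array - local_mean

tpi_fine = tpi(dem, 3)     # small-window TPI
tpi_broad = tpi(dem, 21)   # large-window TPI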

In [1]:
## Visualization of the reference data and LiDAR feature maps
import numpy as np
from osgeo import gdal
import matplotlib.pyplot as plt
from os import path

#Define function for plotting a tif file
def read_plot_data(name, label):
    ds = gdal.Open(name)
    if ds.RasterCount > 1:
        for i in range(3):
            #If the image has more than 1 band,
            #combine the Red, Green, and Blue bands
            temp = ds.GetRasterBand(i+1).ReadAsArray()[:,:,np.newaxis]
            if i == 0:
                cc = temp
            else:
                cc = np.concatenate((cc,temp),axis = 2)
        fig= plt.figure(figsize=(16,16))
        plt.title(label, fontsize=16)
        plt.imshow(cc)
        plt.show()
        
    else:
        #If the image has 1 band, plot the image 
        data = ds.GetRasterBand(1).ReadAsArray()
        fig= plt.figure(figsize=(10,10))
        plt.title(label, fontsize=16)
        plt.imshow(data,cmap='gray',vmin=0, vmax=255)
        plt.show()

root="/home/jovyan/shared_data/data/unet_streamline_detection/"

counter = 0
#Label for each image
labels = ["Reference Data",
         "Digital Elevation Model (DEM)",
         "The Reflectance Values from LiDAR Datum",
         "Digital Surface Model (DSM) 3 feet below", 
         "Digital Surface Model (DSM) 1 feet below",
         "Topographic Position Index (TPI)with moving window size 21",
         "Topographic Position Index (TPI)with moving window size 3",
         "Topological Curvature",
         "Geomorphon Openness"]

#The path to the TIF files
feats_name = [root+'data/TIFF/reference.tif',
              root+'data/TIFF/DEM.tif',
              root+'data/TIFF/LiDAR_relectance.tif',
              root+'data/TIFF/DSM_3ft_below.tif',
              root+'data/TIFF/DSM_1ft_below.tif',
              root+'data/TIFF/TPI_ws21_Clip21.tif',
              root+'data/TIFF/TPI_ws3_Clip1.tif',
              root+'data/TIFF/Curvature.tif',
              root+'data/TIFF/Geomorphon.tif']

#Plot each feature map with its corresponding label
for i in feats_name:
    if path.exists(i):
        read_plot_data(i, labels[counter])
    else:
        print("File not found:", i)
    counter = counter + 1
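
The organized numpy arrays listed above (mask.npy, reference.npy, total.npy) can be inspected in a similar way. The sketch below assumes they sit under the data folder of the same shared root used in the previous cell; the exact paths may differ.

## Quick inspection of the organized numpy arrays (a sketch; the exact file
## locations under the shared data root are assumptions)
import numpy as np
from os import path

root = "/home/jovyan/shared_data/data/unet_streamline_detection/"

for fname in ["mask.npy", "reference.npy", "total.npy"]:
    fpath = root + "data/" + fname  # assumed location under the data folder
    if path.exists(fpath):
        arr = np.load(fpath)
        print(fname, arr.shape, arr.dtype)
    else:
        print("File not found:", fpath)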