The effects of point or polygon based training data on randomForest classification accuracy of wetlands

Jennifer Corcoran, Joseph Knight, Keith Pelletier, Lian Rampi, Yan Wang

Research output: Contribution to journal › Article › peer-review

18 Scopus citations


Wetlands are dynamic in space and time, providing varying ecosystem services. Field reference data for both training and assessment of wetland inventories in the State of Minnesota are typically collected as GPS points over wide geographical areas and at infrequent intervals. This status quo makes it difficult to keep updated maps of wetlands with adequate accuracy, efficiency, and consistency to monitor change. Furthermore, point reference data may not be representative of the prevailing land cover type for an area, due to point location or heterogeneity within the ecosystem of interest. In this research, we present techniques for training a land cover classification for two study sites in different ecoregions by implementing the RandomForest classifier in three ways: (1) field and photo-interpreted points; (2) a fixed window surrounding the points; and (3) image objects that intersect the points. Additional assessments are made to identify the key input variables. We conclude that the image object area training method is the most accurate, and the most important variables include: compound topographic index, summer season green and blue bands, and grid statistics from LiDAR point cloud data, especially those that relate to the height of the return.
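The three training strategies compared in the abstract can be sketched in code. The snippet below is a minimal illustration, not the authors' implementation: it uses scikit-learn's RandomForestClassifier as a stand-in for the R randomForest package, and all data, band choices, and the 3×3 window size are invented for demonstration. A true object-based run would average over segments produced by image segmentation; here a fixed window stands in for the segment.

```python
# Hypothetical sketch of point-based vs. area-based training for a
# random forest land cover classifier. All inputs are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic 100x100 image with 3 "bands" (e.g. summer green, summer
# blue, compound topographic index) and a 2-class wetland/upland map.
H, W, B = 100, 100, 3
image = rng.normal(size=(H, W, B))
labels = (image[:, :, 0] + 0.5 * rng.normal(size=(H, W)) > 0).astype(int)

# (1) Point training: single pixels at GPS point locations.
pts = rng.integers(0, H, size=(200, 2))
X_point = image[pts[:, 0], pts[:, 1]]
y = labels[pts[:, 0], pts[:, 1]]

# (2) Fixed-window training: mean of a 3x3 window around each point.
def window_mean(img, r, c, radius=1):
    r0, r1 = max(r - radius, 0), min(r + radius + 1, img.shape[0])
    c0, c1 = max(c - radius, 0), min(c + radius + 1, img.shape[1])
    return img[r0:r1, c0:c1].reshape(-1, img.shape[2]).mean(axis=0)

X_window = np.array([window_mean(image, r, c) for r, c in pts])

# (3) Image-object training would replace the fixed window with the
# segment that intersects each point (e.g. from a segmentation step).

clf_point = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_point, y)
clf_window = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_window, y)

# Feature importances mirror the paper's key-variable assessment.
print(clf_window.feature_importances_)
```

The variable importance scores produced by the forest are what support the paper's conclusion about which inputs matter most; comparing held-out accuracy of `clf_point` and `clf_window` parallels the paper's comparison of the three training methods.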

Original language: English (US)
Pages (from-to): 4002-4025
Number of pages: 24
Journal: Remote Sensing
Issue number: 4
State: Published - 2015

Bibliographical note

Publisher Copyright:
© 2015 by the authors; licensee MDPI, Basel, Switzerland.


Keywords

  • LiDAR
  • Object based image analysis
  • Optical and infrared sensors
  • Segmentation
  • Topographic
  • Wetlands


