We present an end-to-end computer vision system for yield mapping in apple orchards. The system is platform independent and does not require specific lighting conditions. Our main technical contributions are (1) a semi-supervised clustering algorithm that uses color to identify apples and (2) an unsupervised clustering method that uses spatial properties to estimate fruit counts from apple clusters of arbitrarily complex geometry. Additionally, we use camera motion to merge counts across multiple views. We verified the performance of our algorithms in multiple field trials. Results indicate that the detection method achieves an F1-measure of 0.95–0.97 across multiple color varieties and lighting conditions, and the counting method achieves an accuracy of 89–98%. We also report merged fruit counts from both sides of the tree rows. Our yield estimation method achieves an overall accuracy of 91.98–94.81% across different datasets.
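The abstract gives no implementation details, so the following is only an illustrative sketch, not the authors' algorithm: one simple way to realize semi-supervised, color-based apple detection is nearest-centroid classification, where a few hand-labeled "apple" and "background" pixels (the semi-supervised input) define class centroids in color space and every remaining pixel is assigned to the closest centroid. All pixel values, seed labels, and function names below are hypothetical.

```python
import numpy as np

def classify_pixels(pixels, seed_pixels, seed_labels):
    """Assign each pixel the label of the nearest class centroid,
    where centroids come from a few hand-labeled seed pixels."""
    classes = np.unique(seed_labels)
    centroids = np.array([seed_pixels[seed_labels == c].mean(axis=0)
                          for c in classes])
    # Distance from every pixel to every class centroid in color space.
    dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return classes[dists.argmin(axis=1)]

# Synthetic RGB pixels: reddish "apple" pixels vs. greenish "foliage" pixels.
rng = np.random.default_rng(0)
apple = rng.normal([200, 40, 40], 10.0, size=(50, 3))
foliage = rng.normal([40, 160, 40], 10.0, size=(50, 3))
pixels = np.vstack([apple, foliage])

# A handful of labeled seeds stands in for the semi-supervised supervision.
seeds = np.array([[205, 35, 45], [195, 50, 38],
                  [35, 165, 45], [42, 155, 35]], dtype=float)
seed_labels = np.array([1, 1, 0, 0])  # 1 = apple, 0 = background

pred = classify_pixels(pixels, seeds, seed_labels)
apple_pixel_count = int(pred.sum())
```

In a real pipeline the classified apple pixels would then be grouped spatially and converted to fruit counts; this toy example only shows the color-clustering step.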
Funding information:
The authors thank Joshua Anderson and Professors Emily Hoover and Cindy Tong from the Department of Horticultural Science, University of Minnesota, for their expertise and help with the experiments. This work is supported in part by NSF grant #1317788, USDA NIFA MIN-98-G02, and the MnDrive initiative.
- Apple counting
- Apple detection
- Machine vision
- Semi-supervised image segmentation
- Yield estimation