Sunday, February 19, 2017
Exhibit Hall (Hynes Convention Center)
Tologon Eshimkanov, Wentworth Institute of Technology, Boston, MA
Background. Cornhole is an American lawn game in which teams of players take turns throwing bags of corn (or bean bags) at a raised platform with a hole in the far end. At the end of a round, each bag on the platform yields one point, each bag in the hole earns three points, and each bag off the platform receives zero points; the two teams' round totals are then compared, and the difference is awarded to the team with the higher total. The game continues until either team reaches a score of 21 or more. Our efforts here are part of an ongoing research project to develop a robot that can effectively play cornhole. One important problem in this larger vision is state estimation, i.e., using sensors to interpret the state of the game. We present here one approach to an automatic score keeper (ASK) based on computer vision: a commodity webcam observes the platform, and the system displays the score for both teams at any given moment. While the ASK can be used for research in autonomous play, we hope that, with future updates, it can also be used in a traditional game between humans, for either amateur or professional play; that is, the score keeper eliminates the need for a human referee, or the players themselves, to maintain the correct game score for each team.

Methods. Our ASK system implements a pipeline of techniques from computer vision and mathematics; the system is implemented in C++ and makes heavy use of OpenCV (an open-source computer vision library). We first apply a Gaussian blur to reduce noise. The extent of the platform (assumed to be rectangular) is then identified by detecting the largest contour in the frame. Next, the hole within the platform is located using a Hough transform. Finally, the system performs a heuristic search for the location and color of the bags in the Hue, Saturation, and Value (HSV) color space, which allows colors to be matched across varying shades and brightness.

Results. We evaluated the accuracy of the ASK on an annotated dataset of 198 static cornhole state images. On this dataset, the ASK achieved approximately 90% accuracy on bag-color detection and 85% accuracy on bag-location detection, i.e., classifying whether each bag was on the platform, in the hole, or off the platform.

Conclusions. In practice, this level of accuracy is a good starting point for both robotics and human play, but we intend to evaluate machine-learning approaches in the future to achieve human-level scoring performance.
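The cancellation-style scoring rule described in the Background (per-bag points, with only the round difference awarded to the higher-scoring team, up to 21) can be sketched as a few small functions. The names below (`BagState`, `bagPoints`, `roundScoreDifference`) are hypothetical illustrations, not identifiers from the ASK codebase:

```cpp
#include <cassert>

// Per-bag points at the end of a round: 3 in the hole, 1 on the
// platform, 0 off the platform.
enum class BagState { OffPlatform, OnPlatform, InHole };

int bagPoints(BagState s) {
    switch (s) {
        case BagState::InHole:     return 3;
        case BagState::OnPlatform: return 1;
        default:                   return 0;
    }
}

// Only the difference between the two teams' raw round totals is
// awarded, and only to the higher-scoring team. A positive result
// means team A scores that many points; negative means team B does.
int roundScoreDifference(int teamARaw, int teamBRaw) {
    return teamARaw - teamBRaw;
}

// The game continues until either team reaches a score of 21 or more.
bool gameOver(int scoreA, int scoreB) {
    return scoreA >= 21 || scoreB >= 21;
}
```

A round where team A lands one bag in the hole and one on the platform (3 + 1 = 4) against team B's single platform bag (1) would thus award 3 points to team A.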
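Once the Methods pipeline has recovered the platform rectangle (largest contour) and the hole circle (Hough transform), the bag-location decision reduces to simple geometry on each bag's detected centroid. A minimal sketch, using hypothetical stand-in types rather than OpenCV's own `cv::Rect` and circle representation:

```cpp
#include <cassert>
#include <cmath>

// Stand-ins for the shapes produced by the contour and Hough stages.
struct Rect   { double x, y, w, h; }; // top-left corner plus size
struct Circle { double cx, cy, r; };  // center plus radius

enum class BagState { OffPlatform, OnPlatform, InHole };

// Classify a bag by its centroid: test the hole circle first (it lies
// inside the platform rectangle), then the platform, else off-platform.
BagState classifyBag(double bx, double by,
                     const Rect& platform, const Circle& hole) {
    double dx = bx - hole.cx, dy = by - hole.cy;
    if (std::sqrt(dx * dx + dy * dy) <= hole.r)
        return BagState::InHole;
    if (bx >= platform.x && bx <= platform.x + platform.w &&
        by >= platform.y && by <= platform.y + platform.h)
        return BagState::OnPlatform;
    return BagState::OffPlatform;
}
```

Testing the hole before the platform matters, because every point inside the hole also lies inside the platform's bounding rectangle.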
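The HSV color-matching step can likewise be sketched as a hue-range test: hue carries the nominal color, while saturation and value floors reject washed-out surface pixels, which is what makes HSV robust to shading and brightness changes. The hue ranges and floors below are illustrative assumptions (using OpenCV's convention of hue in [0, 180)), not the tuned thresholds of the ASK:

```cpp
#include <cassert>
#include <string>

// A closed interval of hue values, OpenCV convention: H in [0, 180).
struct HueRange { int low, high; };

bool inHueRange(int h, const HueRange& r) {
    return h >= r.low && h <= r.high;
}

// Classify a bag's mean HSV pixel into a team color. The saturation
// and value floors discard pixels too dull or dark to be a bag; hue
// alone then decides the color, independent of shade and brightness.
std::string classifyColor(int h, int s, int v) {
    const HueRange red{0, 10};     // assumed red-team hue range
    const HueRange blue{100, 130}; // assumed blue-team hue range
    if (s < 80 || v < 60) return "none"; // too washed-out to be a bag
    if (inHueRange(h, red))  return "red";
    if (inHueRange(h, blue)) return "blue";
    return "none";
}
```

In a full system these ranges would be calibrated from captured frames of the actual bags under the expected lighting.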