# Locount: A large-scale retail-scenario object detection and counting task

[Rethinking Object Detection in Retail Stores](http://arxiv.org/abs/2003.08230)

## Problem definition

The conventional formulation of object detection uses a bounding box to represent each individual object instance. However, this is impractical in industry-relevant applications such as warehouses, due to severe occlusions among groups of instances of the same category. For example, as shown in [Fig. 1(g)](https://isrc.iscas.ac.cn/gitlab/research/locount-dataset/-/tree/master/Images/dataset-comparison.jpg), it is extremely difficult to annotate stacked dinner plates even for a well-trained annotator. Meanwhile, it is almost impossible for object detectors, even state-of-the-art ones, to detect all of the stacked dinner plates accurately. Thus, it is necessary to rethink the definition of object detection in such scenarios. To solve this practical industrial problem and promote academic research on it, we put forward the necessary elements of the task: *problem definition*, *Locount dataset*, *evaluation protocol* and *baseline method*. This work was accepted by the AAAI 2021 conference on artificial intelligence.

![image](https://isrc.iscas.ac.cn/gitlab/research/locount-dataset/-/tree/master/Images/dataset-comparison.jpg)

## Locount dataset

To address the above issues, we collect a large-scale object localization and counting dataset at 28 different stores and apartments, consisting of 50,394 JPEG images with a resolution of 1920x1080 pixels. More than 1.9 million object instances in 140 categories (including *Jacket*, *Shoes*, *Oven*, etc.) are annotated. To facilitate data usage, we divide the dataset into two subsets, i.e., a *training* set of 34,022 images and a *testing* set of 16,372 images.
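Concretely, instead of one box per instance, a Locount-style annotation attaches an instance count to each bounding box, so a stack of identical items is labeled as one group. A minimal sketch of such an annotation record (the field names here are illustrative, not the dataset's actual file format):

```python
from dataclasses import dataclass

@dataclass
class GroupAnnotation:
    """One annotated group: a bounding box covering a group of identical
    items, plus the number of instances it contains."""
    x1: float
    y1: float
    x2: float
    y2: float
    category: str   # one of the 140 categories, e.g. "Jacket"
    count: int      # number of instances inside the box (>= 1)

# Conventional per-instance detection is the special case count == 1:
plates = GroupAnnotation(320, 180, 560, 700, "Dinner Plate", 12)
single = GroupAnnotation(40, 60, 120, 200, "Oven", 1)
```

In this view, an algorithm must predict both where each group is and how many instances it holds, which is exactly the localization-and-counting task the dataset targets.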
The dataset covers 9 broad subclasses, i.e., Baby Stuffs (e.g., *Baby Diapers* and *Baby Slippers*), Drinks (e.g., *Juice* and *Ginger Tea*), Food Stuff (e.g., *Dried Fish* and *Cake*), Daily Chemicals (e.g., *Soap* and *Shampoo*), Clothing (e.g., *Jacket* and *Adult Hats*), Electrical Appliances (e.g., *Microwave Oven* and *Socket*), Storage Appliances (e.g., *Trash* and *Stool*), Kitchen Utensils (e.g., *Forks* and *Food Box*), and Stationery and Sporting Goods (e.g., *Skate* and *Notebook*). Various factors challenge the performance of algorithms, such as scale changes, illumination variations, occlusion, similar appearance, cluttered backgrounds, blurring and deformation, *etc*.

## Evaluation protocol

To fairly compare algorithms on the *Locount* task, we design a new evaluation protocol, which penalizes algorithms for missing object instances, duplicate detections of one instance, false positive detections, and incorrect counts. Inspired by the MS COCO protocol, we design new metrics *AP^{lc}*, *AP_{0.5}^{lc}*, *AP_{0.75}^{lc}*, and *AR^{lc}_{max=150}* to evaluate methods, taking both localization and counting accuracy into account. For more detailed definitions, please refer to the [paper](http://arxiv.org/abs/2003.08230).

## Baseline method

## Citation

If you find this dataset useful for your research, please cite

```
@inproceedings{Cai2020Locount,
  title={Rethinking Object Detection in Retail Stores},
  author={Yuanqiang Cai and Longyin Wen and Libo Zhang and Dawei Du and Weiqiang Wang},
  booktitle={arXiv preprint arXiv:2003.08230},
  year={2020}
}
```

## Dataset

The *Locount* dataset is formed by 50,394 JPEG images with a resolution of 1920 × 1080 pixels. The *training* subset includes 34,022 images, and the *testing* subset includes 16,372 images.

## Feedback

Suggestions and opinions on this dataset are welcome. Please contact the authors by sending email to libo@iscas.ac.cn.
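For intuition, the joint localization-and-counting criterion behind the evaluation protocol can be sketched as follows. This is a deliberately simplified, illustrative version (the threshold names, the count-accuracy term, and the matching logic here are assumptions for exposition); the exact *AP^{lc}* definition is in the paper.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(det, gt, iou_thr=0.5, cnt_thr=0.5):
    """Simplified joint criterion: a detection (box, count) matches a
    ground-truth group only if the boxes overlap enough AND the predicted
    count is accurate enough. Illustrative only; not the paper's exact
    AP^{lc} formulation."""
    box_d, cnt_d = det
    box_g, cnt_g = gt
    count_acc = min(cnt_d, cnt_g) / max(cnt_d, cnt_g)
    return iou(box_d, box_g) >= iou_thr and count_acc >= cnt_thr

# A well-localized box with a slightly-off count still matches ...
print(is_true_positive(((0, 0, 10, 10), 11), ((0, 0, 10, 10), 12)))
# ... but a grossly wrong count (or a non-overlapping box) does not.
print(is_true_positive(((0, 0, 10, 10), 1), ((0, 0, 10, 10), 12)))
```

The point of such a joint criterion is that neither a perfect box with a wildly wrong count nor a correct count attached to a misplaced box should score as a true positive; the protocol's AP/AR metrics then sweep these thresholds, in the spirit of MS COCO.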