Commit df626cd4 authored by 蔡院强

Update README.md

parent c686b041
To solve the above issues, we collect a large-scale object localization and counting dataset from 28 different stores and apartments, consisting of 50,394 JPEG images at a resolution of 1920×1080 pixels. 
More than 1.9 million object instances in 140 categories (including *Jacket*, *Shoes*, *Oven*, etc.) are annotated. 


<div align=center><img src="Images/dataset-summary.png" width="800" height="400"/></div>
<p align="center">Table 1: Summary of existing object detection benchmarks in retail stores. “C” indicates the image classification task, “S” indicates the single-class object detection task, and “M” indicates the multi-class object detection task.</p>

To facilitate data usage, we divide the dataset into two subsets, i.e., *training* and *testing* sets, including 34,022 images for training and 16,372 images for testing.
The dataset includes 9 high-level subclasses, i.e., Baby Stuffs (e.g., *Baby Diapers* and *Baby Slippers*), Drinks (e.g., *Juice* and *Ginger Tea*), Food Stuff (e.g., *Dried Fish* and *Cake*), Daily Chemicals (e.g., *Soap* and *Shampoo*), Clothing (e.g., *Jacket* and *Adult Hats*), 
Electrical Appliances (e.g., *Microwave Oven* and *Socket*), Storage Appliances (e.g., *Trash* and *Stool*), Kitchen Utensils (e.g., *Forks* and *Food Box*), and Stationery and Sporting Goods (e.g., *Skate* and *Notebook*). 
Various factors challenge the performance of algorithms, such as scale changes, illumination variations, occlusion, similar appearance, cluttered backgrounds, blurring, and deformation.
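As a minimal sketch of working with such annotations, the snippet below tallies object totals per category. The annotation layout here (COCO-style `categories`/`annotations` dicts with an optional per-box `count` field) is an illustrative assumption, not the dataset's documented format; a tiny in-memory stand-in replaces the real annotation file.

```python
from collections import Counter

# Tiny in-memory stand-in for an annotation file; the real release's
# schema may differ -- the field names here are assumptions.
ann = {
    "categories": [{"id": 1, "name": "Jacket"}, {"id": 2, "name": "Shoes"}],
    "annotations": [
        {"category_id": 1, "bbox": [10, 10, 50, 80], "count": 3},
        {"category_id": 2, "bbox": [70, 20, 40, 30], "count": 1},
        {"category_id": 1, "bbox": [120, 5, 55, 90], "count": 2},
    ],
}

# Map category ids to human-readable names.
id2name = {c["id"]: c["name"] for c in ann["categories"]}

# Sum the (assumed) per-box count field per category, defaulting to 1.
totals = Counter()
for a in ann["annotations"]:
    totals[id2name[a["category_id"]]] += a.get("count", 1)

print(totals)  # Counter({'Jacket': 5, 'Shoes': 1})
```

Swapping the stand-in dict for `json.load` over the released annotation file would give the real per-category statistics.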

<div align=center><img src="Images/category-tree.png" width="800" height="400" alt="sfsf" /></div>
<div align=center><img src="Images/category-tree.png" width="800" height="400"/></div>
<p align="center">Figure 2: Category hierarchy of the large-scale localization and counting dataset in
the shelf scenarios.</p>

Thus, the counting task is formulated as a multi-class classification task.
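A minimal sketch of this formulation, assuming counts are capped at a maximum class `K_MAX` with larger counts clipped into the top class (the cap value and exact label mapping are illustrative assumptions, not the authors' specification):

```python
# Counting as multi-class classification: each box's instance count
# becomes a class label in {0, ..., K_MAX - 1}.
# Assumption: counts above K_MAX are clipped into the last class.
K_MAX = 10

def count_to_class(count: int) -> int:
    """Map an instance count (>= 1) to a zero-based class index."""
    return min(count, K_MAX) - 1

def class_to_count(cls: int) -> int:
    """Invert the mapping (clipped counts are not recoverable)."""
    return cls + 1

print(count_to_class(3))   # 2
print(count_to_class(25))  # 9 (clipped into the top class)
print(class_to_count(2))   # 3
```

A classifier head with `K_MAX` outputs can then be trained with an ordinary cross-entropy loss over these labels.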

We conduct several experiments with state-of-the-art object detectors and the proposed CLCNet method on the proposed dataset to demonstrate the effectiveness of CLCNet; see Table 2 and Fig. 4.

<div align=center><img src="Images/Experiment-results.png" width="700" height="400" /></div>
<p align="center">Table 2: Detection results of all comparison methods on the proposed dataset. The mark <i>lc</i> in the upper-right corner indicates that the value is computed with the proposed metrics.</p>

<div align=center><img src="Images/show-results.jpg" width="700" height="400" /></div>
Figure 4: Qualitative results of the proposed CLCNet method on the Locount dataset.


## Citation
If you find this dataset useful for your research, please cite: