SHREC 2010 - Large Scale Benchmark
The objective of this shape retrieval contest is to evaluate the performance of 3D shape retrieval approaches on a new generic 3D shape benchmark.
In response to a given set of queries, the task is to compute similarity scores against the target models and return a ranked list for each query.
This new generic benchmark contains 10,000 3D models, ranging from insects to aircraft carriers; about 40 of these models will serve as query models. The models were created with modelling software. Their size varies from a few hundred polygons to half a million polygons, with most models containing between 1,000 and 20,000 polygons. The size of the ground-truth set differs from query to query. The 3D models are stored in the PLY file format, in little-endian binary form, and contain only vertices and vertex_indices. The headers of the PLY files will therefore look like:
ply
format binary_little_endian 1.0
element vertex 2400
property float x
property float y
property float z
element face 4500
property list uchar int vertex_indices
end_header
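As a minimal sketch of how such a file could be handled, the vertex and face counts can be read from the ASCII header before decoding the binary payload (this assumes the standard `ply` / `end_header` framing of the PLY format; the function name is illustrative, not part of the benchmark's distribution):

```python
def read_ply_header(path):
    """Parse the ASCII header of a binary PLY file and return
    (vertex_count, face_count, header_byte_length)."""
    with open(path, "rb") as f:
        header = b""
        # The header is ASCII text terminated by an "end_header" line;
        # the binary vertex/face data starts immediately after it.
        while not header.endswith(b"end_header\n"):
            line = f.readline()
            if not line:
                raise ValueError("end_header not found")
            header += line
    vertex_count = face_count = 0
    for line in header.decode("ascii").splitlines():
        parts = line.split()
        if parts[:2] == ["element", "vertex"]:
            vertex_count = int(parts[2])
        elif parts[:2] == ["element", "face"]:
            face_count = int(parts[2])
    return vertex_count, face_count, len(header)
```

The returned header length tells a loader where to `seek` before reading the little-endian vertex coordinates and face index lists.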
We will employ the following evaluation measures: the Precision-Recall curve, Mean Average Precision (MAP), First Tier (Tier1), and Second Tier (Tier2).
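The ranking measures above can be sketched in a few lines of Python. This is only an illustration of the usual definitions (per-query average precision for MAP, and the fraction of relevant models in the top |C| and top 2|C| results for the tiers, where |C| is the size of the query's relevance class); the organizers' exact conventions, such as whether the query itself is excluded from the ranked list, may differ:

```python
def average_precision(ranked, relevant):
    """Average precision of one ranked list against a set of relevant models.
    MAP is the mean of this value over all queries."""
    hits, score = 0, 0.0
    for i, model in enumerate(ranked, start=1):
        if model in relevant:
            hits += 1
            score += hits / i  # precision at each relevant rank
    return score / len(relevant) if relevant else 0.0

def tiers(ranked, relevant):
    """First Tier / Second Tier: fraction of the relevant class found
    among the top |C| and top 2*|C| retrieved models."""
    c = len(relevant)
    first = sum(1 for m in ranked[:c] if m in relevant) / c
    second = sum(1 for m in ranked[:2 * c] if m in relevant) / c
    return first, second
```

For example, with `ranked = ["m3", "m1", "m7", "m2", "m9", "m4"]` and `relevant = {"m1", "m2", "m4"}`, the average precision is 0.5 and the tiers are (1/3, 1.0).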
The following list is a step-by-step description of the activities:
- The participants must register by sending a message to email@example.com. Early registration is encouraged so that we can gauge the number of participants at an early stage.
- The database will be made available via this website. The query set, against which the evaluation will be run, will be made available in the same way.
- Participants will submit the ranked lists and similarity scores for each query by email. Multiple ranked lists, resulting from different runs, may be submitted. Each run may be a different algorithm or a different parameter setting. No more than 5 runs per participant may be submitted. A precise description of the way the data will be provided and the way the results must be submitted can be found in this zip file.
- The evaluations will be done automatically.
- The organization will release the evaluation scores of all the runs.
- The participants write a short paper describing their method and commenting on the evaluation results.
- The track results are combined into a joint paper, published in the proceedings of the Eurographics Workshop on 3D Object Retrieval.
- The descriptions of the tracks and their results are presented at the Eurographics Workshop on 3D Object Retrieval.
Schedule:
- January 18: Call for participation.
- January 18: A sample data set and query set will be available online.
- January 28: Please register before this date.
- January 28: Distribution of the final query sets; participants can start the retrieval.
- February 2: Submission of results (ranked lists) and a short paper draft describing the method(s).
- February 6: Distribution of relevance judgments and evaluation scores.
- February 14: Submission of final short papers for the contest proceedings.
- February 21: Track is finished, and results are ready for inclusion in a track report.
- March 7: Camera-ready track papers submitted for printing.
- May 2: EUROGRAPHICS Workshop on 3D Object Retrieval, including SHREC'10.