Call for Papers
We solicit contributions in two categories.
A. Performance evaluation on the ASLAN benchmark
The Action Similarity LAbeliNg (ASLAN) benchmark focuses on pair-matching (same/not-same classification) of unconstrained video pairs of human actions. It is available, along with current state-of-the-art results, related code, and further information, at the following URL: ASLAN Data.
Submissions in this category should present new methods and demonstrate results on the ASLAN benchmark, compared against the existing state of the art. By employing this standard benchmark, we can compare alternative methods for action pair-matching and clearly identify those that outperform the rest.
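To make the pair-matching protocol concrete, here is a minimal illustrative sketch (not the official ASLAN evaluation code): each trial is a pair of video descriptors with a same/not-same label, a method scores each pair's similarity, and a threshold turns scores into same/not-same predictions. All function names and the toy descriptors below are hypothetical.

```python
# Hedged sketch of same/not-same pair-matching evaluation.
# Descriptors, names, and the threshold are illustrative assumptions,
# not the ASLAN benchmark's actual data or code.
import math

def cosine_similarity(u, v):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def pair_matching_accuracy(pairs, labels, threshold=0.5):
    """pairs: list of (descriptor, descriptor); labels: True means 'same'.
    A pair is predicted 'same' when its similarity meets the threshold."""
    predictions = [cosine_similarity(u, v) >= threshold for u, v in pairs]
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Toy example with 2-D descriptors:
pairs = [([1.0, 0.0], [0.9, 0.1]),   # nearly parallel -> predicted "same"
         ([1.0, 0.0], [0.0, 1.0])]   # orthogonal -> predicted "not same"
labels = [True, False]
print(pair_matching_accuracy(pairs, labels))  # 1.0
```

In practice the threshold is chosen on held-out splits, and methods are typically compared by accuracy and ROC curves averaged over the benchmark's folds.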
In this category, we also accept short papers (2 pages or less) presenting unpublished results of (possibly) previously published methods on the benchmark, as described on its web page. Authors may give a short description of their methods or refer to existing publications detailing the algorithms used. Short papers will not appear as separate publications in the workshop proceedings; instead, their results will be summarized by the organizers in a single article describing performance on the benchmark, to be presented during the workshop.
Regular papers (of standard CVPR format and length) should provide details of the algorithms employed. The authors are strongly encouraged to provide a link to their implementation (or an executable), but this is not a requirement for submission.
Exceptional papers and top-performing methods on the ASLAN benchmark will be announced at the workshop. Awards are sponsored by Microsoft.
B. New directions and techniques in unconstrained Action Recognition in videos
This includes (but is not limited to) papers concerning the following topics:
Recognition-learning related:
- Representations of actions in videos
- (dis-)similarity techniques for action representations
- Fine-grained Action Recognition
- Use of background samples and attributes for Action Recognition
- Action pair-matching vs. multi-class Action Recognition
- Action Detection in time and in space
- Similarity ranking of actions
Human vision related:
- Human vision inspired Action Recognition
Video processing/bottom-up feature related:
- Video stabilization methods for better Action Recognition
Although we encourage papers in this category to include performance evaluations on the Action Similarity LAbeliNg (ASLAN) challenge, we also welcome papers that use ASLAN in other ways.