Paper in IJCNN (2017) “Towards Using Visual Attributes to Infer Image Sentiment Of Social Events”

May 18th, 2017 Irfan Essa Posted in Computational Journalism, Computational Photography and Video, Computer Vision, Machine Learning, Papers, Unaiza Ahsan

Paper

  • U. Ahsan, M. D. Choudhury, and I. Essa (2017), “Towards Using Visual Attributes to Infer Image Sentiment Of Social Events,” in Proceedings of The International Joint Conference on Neural Networks, Anchorage, Alaska, US, 2017. [PDF] [BIBTEX]
    @InProceedings{    2017-Ahsan-TUVAIISSE,
      address  = {Anchorage, Alaska, US},
      author  = {Unaiza Ahsan and Munmun De Choudhury and Irfan
          Essa},
      booktitle  = {Proceedings of The International Joint Conference
          on Neural Networks},
      month    = {May},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/2017-Ahsan-TUVAIISSE.pdf},
      publisher  = {International Neural Network Society},
      title    = {Towards Using Visual Attributes to Infer Image
          Sentiment Of Social Events},
      year    = {2017}
    }

Abstract

Widespread and pervasive adoption of smartphones has led to instant sharing of photographs that capture events ranging from the mundane to the life-altering. We propose to capture the sentiment of such social event images by leveraging their visual content. Our method extracts an intermediate visual representation of social event images based on the visual attributes that occur in them, going beyond sentiment-specific attributes. We map the top predicted attributes to sentiments and extract the dominant emotion associated with a picture of a social event. Unlike recent approaches, our method generalizes to a variety of social events, including unseen events that are not available at training time. We demonstrate the effectiveness of our approach on a challenging social event image dataset, where our method outperforms state-of-the-art approaches for classifying complex event images into sentiments.
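The attribute-to-sentiment mapping step described above can be illustrated with a small sketch. The attribute names, the lexicon, and the scores below are purely hypothetical placeholders, not the paper's actual attribute vocabulary or mapping; the sketch only shows the general idea of aggregating top predicted attributes into a dominant emotion.

```python
# Hypothetical sketch: map top-k predicted visual attributes to sentiments
# and pick the dominant emotion. All names and numbers are illustrative.
from collections import defaultdict

# Illustrative attribute -> sentiment lexicon (not the paper's mapping).
ATTRIBUTE_SENTIMENT = {
    "smiling_people": "joy",
    "balloons": "joy",
    "candles": "serenity",
    "rubble": "sadness",
    "crowd_protest": "anger",
}

def dominant_emotion(attribute_scores, top_k=3):
    """Aggregate sentiments of the top-k predicted attributes by score."""
    top = sorted(attribute_scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    votes = defaultdict(float)
    for attr, score in top:
        sentiment = ATTRIBUTE_SENTIMENT.get(attr)
        if sentiment is not None:
            votes[sentiment] += score  # weight each vote by classifier confidence
    return max(votes, key=votes.get) if votes else None

# Toy attribute scores for one social event image.
scores = {"smiling_people": 0.9, "balloons": 0.7, "rubble": 0.1, "candles": 0.4}
print(dominant_emotion(scores))  # -> joy
```

Weighting votes by attribute confidence (rather than counting attributes equally) is one plausible aggregation choice; the paper's actual scheme may differ.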


Paper in IEEE WACV (2017): “Complex Event Recognition from Images with Few Training Examples”

March 27th, 2017 Irfan Essa Posted in Computational Journalism, Computational Photography and Video, Computer Vision, PAMI/ICCV/CVPR/ECCV, Papers, Unaiza Ahsan

Paper

  • U. Ahsan, C. Sun, J. Hays, and I. Essa (2017), “Complex Event Recognition from Images with Few Training Examples,” in IEEE Winter Conference on Applications of Computer Vision (WACV), 2017. [PDF] [arXiv] [BIBTEX]
    @InProceedings{    2017-Ahsan-CERFIWTE,
      arxiv    = {https://arxiv.org/abs/1701.04769},
      author  = {Unaiza Ahsan and Chen Sun and James Hays and Irfan
          Essa},
      booktitle  = {IEEE Winter Conference on Applications of Computer
          Vision (WACV)},
      month    = {March},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/2017-Ahsan-CERFIWTE.pdf},
      title    = {Complex Event Recognition from Images with Few
          Training Examples},
      year    = {2017}
    }

Abstract

We propose to leverage concept-level representations for complex event recognition in photographs given limited training examples. We introduce a novel framework that discovers event concept attributes from the web and uses them to extract semantic features from images and classify them into social event categories with few training examples. Discovered concepts include a variety of objects, scenes, actions, and event subtypes, leading to a discriminative and compact representation for event images. Web images are obtained for each discovered event concept, and we use (pre-trained) CNN features to train concept classifiers. Extensive experiments on challenging event datasets demonstrate that our proposed method outperforms several baselines that use deep CNN features directly for classifying images into events with limited training examples. We also demonstrate that our method achieves the best overall accuracy on a dataset with unseen event categories using a single training example.
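The concept-score pipeline described above can be sketched in miniature. Here, toy 3-dimensional vectors stand in for CNN features, and each "concept classifier" is reduced to cosine similarity against a concept centroid; the concept names, prototypes, and numbers are all hypothetical, and the real system trains proper classifiers on web images. The sketch only illustrates how a compact concept-score representation supports classification from a single example per event.

```python
# Hypothetical sketch: represent images by per-concept scores, then classify
# events by nearest prototype in concept-score space (few-shot friendly).
# Stand-in vectors replace CNN features; names and numbers are illustrative.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy "concept classifiers": centroids of (stand-in) CNN features of web
# images retrieved for each discovered concept.
CONCEPT_CENTROIDS = {
    "cake": [1.0, 0.1, 0.0],
    "bride": [0.9, 0.8, 0.1],
    "stadium": [0.0, 0.2, 1.0],
}

def concept_scores(feature):
    """Represent an image by its similarity to each concept."""
    return {c: cosine(feature, cen) for c, cen in CONCEPT_CENTROIDS.items()}

def classify_event(feature, event_prototypes):
    """Nearest prototype in concept-score space; one example per event suffices."""
    vec = [s for _, s in sorted(concept_scores(feature).items())]
    best_event, _ = max(
        event_prototypes.items(),
        key=lambda kv: cosine(vec, [s for _, s in sorted(kv[1].items())]),
    )
    return best_event

# One training example per event category (single-shot setting).
protos = {
    "wedding": concept_scores([0.95, 0.7, 0.05]),
    "sports_game": concept_scores([0.05, 0.1, 0.95]),
}
print(classify_event([0.9, 0.75, 0.1], protos))  # -> wedding
```

The key point the sketch mirrors is that the concept-score vector is far lower-dimensional than raw CNN features, which is what makes matching against a single labeled example per event plausible.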
