
Paper in AISTATS 2013 “Beyond Sentiment: The Manifold of Human Emotions”

April 29th, 2013 Irfan Essa Posted in AAAI/IJCAI/UAI, Behavioral Imaging, Computational Journalism, Numerical Machine Learning, Papers, WWW

  • S. Kim, F. Li, G. Lebanon, and I. A. Essa (2013), “Beyond Sentiment: The Manifold of Human Emotions,” in Proceedings of AISTATS, 2013. [PDF] [BIBTEX]
    @inproceedings{2012-Kim-BSMHE,
      Author = {Seungyeon Kim and Fuxin Li and Guy Lebanon and Irfan A. Essa},
      Booktitle = {Proceedings of AISTATS},
      Date-Added = {2013-06-25 12:01:11 +0000},
      Date-Modified = {2013-06-25 12:02:53 +0000},
      Pdf = {http://arxiv.org/pdf/1202.1568v1},
      Title = {Beyond Sentiment: The Manifold of Human Emotions},
      Year = {2013}}

Abstract

Sentiment analysis predicts the presence of positive or negative emotions in a text document. In this paper we consider higher dimensional extensions of the sentiment concept, which represent a richer set of human emotions. Our approach goes beyond previous work in that our model contains a continuous manifold rather than a finite set of human emotions. We investigate the resulting model, compare it to psychological observations, and explore its predictive capabilities. Besides obtaining significant improvements over a baseline without manifold, we are also able to visualize different notions of positive sentiment in different domains.

via [arXiv.org 1202.1568] Beyond Sentiment: The Manifold of Human Emotions.
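The shift from a finite set of emotion labels to a continuous space can be illustrated with a toy sketch. Everything below (the term counts, the PCA stand-in) is an illustrative assumption, not the paper's actual model, which is considerably richer:

```python
import numpy as np

# Toy illustration of the core idea: instead of a discrete label set
# {positive, negative}, each document is mapped to a point in a
# low-dimensional continuous space learned from the data. This sketch
# just uses PCA on hypothetical term counts to produce a continuous
# 2-D "emotion" coordinate per document.

docs_terms = np.array([      # rows: documents, cols: made-up term counts
    [3, 0, 1, 0],            # e.g. a "joy"-heavy document
    [0, 4, 0, 1],            # an "anger"-heavy document
    [1, 1, 2, 2],            # a mixed document
    [2, 1, 0, 3],
], dtype=float)

X = docs_terms - docs_terms.mean(axis=0)   # center the data
U, S, Vt = np.linalg.svd(X, full_matrices=False)
embedding = X @ Vt[:2].T                   # 2-D continuous coordinates

print(embedding.shape)   # each document now lives on a continuum: (4, 2)
```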


Paper in ECCV Workshop 2012: “Weakly Supervised Learning of Object Segmentations from Web-Scale Videos”

October 7th, 2012 Irfan Essa Posted in Activity Recognition, Awards, Google, Matthias Grundmann, Multimedia, PAMI/ICCV/CVPR/ECCV, Papers, Vivek Kwatra, WWW

Weakly Supervised Learning of Object Segmentations from Web-Scale Videos

  • G. Hartmann, M. Grundmann, J. Hoffman, D. Tsai, V. Kwatra, O. Madani, S. Vijayanarasimhan, I. Essa, J. Rehg, and R. Sukthankar (2012), “Weakly Supervised Learning of Object Segmentations from Web-Scale Videos,” in Proceedings of ECCV 2012 Workshop on Web-scale Vision and Social Media, 2012. [PDF] [DOI] [BIBTEX]
    @inproceedings{2012-Hartmann-WSLOSFWV,
      Author = {Glenn Hartmann and Matthias Grundmann and Judy Hoffman and David Tsai and Vivek Kwatra and Omid Madani and Sudheendra Vijayanarasimhan and Irfan Essa and James Rehg and Rahul Sukthankar},
      Booktitle = {Proceedings of ECCV 2012 Workshop on Web-scale Vision and Social Media},
      Date-Added = {2012-10-23 15:03:18 +0000},
      Date-Modified = {2013-10-22 18:57:10 +0000},
      Doi = {10.1007/978-3-642-33863-2_20},
      Pdf = {http://www.cs.cmu.edu/~rahuls/pub/eccv2012wk-cp-rahuls.pdf},
      Title = {Weakly Supervised Learning of Object Segmentations from Web-Scale Videos},
      Year = {2012},
      Bdsk-Url-1 = {http://dx.doi.org/10.1007/978-3-642-33863-2_20}}

Abstract

We propose to learn pixel-level segmentations of objects from weakly labeled (tagged) internet videos. Specifically, given a large collection of raw YouTube content, along with potentially noisy tags, our goal is to automatically generate spatiotemporal masks for each object, such as “dog”, without employing any pre-trained object detectors. We formulate this problem as learning weakly supervised classifiers for a set of independent spatio-temporal segments. The object seeds obtained using segment-level classifiers are further refined using graphcuts to generate high-precision object masks. Our results, obtained by training on a dataset of 20,000 YouTube videos weakly tagged into 15 classes, demonstrate automatic extraction of pixel-level object masks. Evaluated against a ground-truthed subset of 50,000 frames with pixel-level annotations, we confirm that our proposed methods can learn good object masks just by watching YouTube.
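A minimal sketch of the weak-supervision setup: every spatio-temporal segment inherits its video's noisy tag, and a classifier trained on those noisy labels scores segments so the most confident ones become object seeds. The features, tags, and ridge-regression learner below are made-up stand-ins for the paper's segment classifiers, and the graph-cut refinement step is omitted:

```python
import numpy as np

# Hypothetical setup: 200 spatio-temporal segments, each with a
# 16-dimensional feature vector, labeled only by a noisy video-level tag.
rng = np.random.default_rng(0)
n_segments, n_features = 200, 16
X = rng.normal(size=(n_segments, n_features))     # segment features
video_tag = rng.integers(0, 2, size=n_segments)   # noisy video-level tag
y = 2.0 * video_tag - 1.0                         # map {0,1} -> {-1,+1}

# One-shot ridge-regression classifier as a stand-in for the paper's
# weakly supervised segment-level learner.
w = np.linalg.solve(X.T @ X + 1e-1 * np.eye(n_features), X.T @ y)
scores = X @ w

# Keep the top 10% of segments as high-confidence object seeds;
# the paper then refines such seeds with graph cuts into object masks.
seeds = np.argsort(scores)[-n_segments // 10:]
print(len(seeds))  # 20
```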

Presented at: ECCV 2012 Workshop on Web-scale Vision and Social Media, October 7-12, 2012, in Florence, Italy.

Awarded the BEST PAPER AWARD!

 


Kihwan Kim’s Thesis Defense (2011): “Spatio-temporal Data Interpolation for Dynamic Scene Analysis”

December 6th, 2011 Irfan Essa Posted in Computational Photography and Video, Kihwan Kim, Modeling and Animation, Multimedia, PhD, Security, Visual Surveillance, WWW

Spatio-temporal Data Interpolation for Dynamic Scene Analysis

Kihwan Kim, PhD Candidate

School of Interactive Computing, College of Computing, Georgia Institute of Technology

Date: Tuesday, December 6, 2011

Time: 1:00 pm – 3:00 pm EST

Location: Technology Square Research Building (TSRB) Room 223

Abstract

Analysis and visualization of dynamic scenes is often constrained by the amount of spatio-temporal information available from the environment. In most scenarios, we have to account for incomplete information and sparse motion data, requiring us to employ interpolation and approximation methods to fill in the missing information. Scattered data interpolation and approximation techniques have been widely used for solving the problem of completing surfaces and images with incomplete input data. We introduce approaches for such data interpolation and approximation from limited sensors into the domain of analyzing and visualizing dynamic scenes. Data from dynamic scenes is subject to constraints due to the spatial layout of the scene and/or the configurations of video cameras in use. Such constraints include: (1) sparsely available cameras observing the scene, (2) limited field of view provided by the cameras in use, (3) incomplete motion at a specific moment, and (4) varying frame rates due to different exposures and resolutions.

In this thesis, we establish these forms of incompleteness in the scene as spatio-temporal uncertainties, and propose solutions for resolving the uncertainties by applying scattered data approximation into a spatio-temporal domain.

The main contributions of this research are as follows: First, we provide an efficient framework to visualize large-scale dynamic scenes from distributed static videos. Second, we adopt Radial Basis Function (RBF) interpolation to the spatio-temporal domain to generate global motion tendency. The tendency, represented by a dense flow field, is used to optimally pan and tilt a video camera. Third, we propose a method to represent motion trajectories using stochastic vector fields. Gaussian Process Regression (GPR) is used to generate a dense vector field and the certainty of each vector in the field. The generated stochastic fields are used for recognizing motion patterns under varying frame rates and incompleteness of the input videos. Fourth, we show that the stochastic representation of the vector field can also be used for modeling global tendency to detect regions of interest in dynamic scenes with camera motion. We evaluate and demonstrate our approaches in several applications for visualizing virtual cities, automating sports broadcasting, and recognizing traffic patterns in surveillance videos.
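The second contribution, turning sparse motion samples into a dense flow field via RBF interpolation, can be sketched as follows. The sample points, motion vectors, and Gaussian kernel below are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

# Sketch of Radial Basis Function (RBF) scattered-data interpolation:
# a handful of sparse 2-D motion samples is interpolated into a dense
# flow field over a grid, in the spirit of the thesis's use of RBFs
# to estimate global motion tendency.

def rbf_interpolate(centers, values, queries, eps=1.0):
    """Gaussian-RBF interpolation of vector-valued scattered data."""
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2)
    K = kernel(centers, centers)
    # Tiny ridge term keeps the solve well-conditioned.
    weights = np.linalg.solve(K + 1e-9 * np.eye(len(centers)), values)
    return kernel(queries, centers) @ weights

# Made-up sparse samples: positions and their (u, v) motion vectors.
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
flows = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]])

# Dense grid of query points covering the unit square.
gx, gy = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
queries = np.stack([gx.ravel(), gy.ravel()], axis=1)
dense_flow = rbf_interpolate(centers, flows, queries)
print(dense_flow.shape)   # (100, 2): a motion vector at every grid point
```

By construction the interpolant reproduces the sampled vectors at the sample locations, which is the defining property that distinguishes interpolation from approximation here.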

Committee:

  • Prof. Irfan Essa (Advisor, School of Interactive Computing, Georgia Institute of Technology)
  • Prof. James M. Rehg (School of Interactive Computing, Georgia Institute of Technology)
  • Prof. Thad Starner (School of Interactive Computing, Georgia Institute of Technology)
  • Prof. Greg Turk (School of Interactive Computing, Georgia Institute of Technology)
  • Prof. Jessica K. Hodgins (Robotics Institute, Carnegie Mellon University, and Disney Research Pittsburgh)

In the News (2011): “Shake it like an Instagram picture — Online Video News”

September 15th, 2011 Irfan Essa Posted in Collaborators, Computational Photography and Video, Google, In The News, Matthias Grundmann, Vivek Kwatra, WWW

Our work, as described in the following paper, is now showcased on YouTube.

  • M. Grundmann, V. Kwatra, and I. Essa (2011), “Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011. [PDF] [WEBSITE] [VIDEO] [DEMO] [DOI] [BLOG] [BIBTEX]
    @inproceedings{2011-Grundmann-AVSWROCP,
      Author = {M. Grundmann and V. Kwatra and I. Essa},
      Blog = {http://prof.irfanessa.com/2011/06/19/videostabilization/},
      Booktitle = {Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      Date-Modified = {2013-10-22 13:55:15 +0000},
      Demo = {http://www.youtube.com/watch?v=0MiY-PNy-GU},
      Doi = {10.1109/CVPR.2011.5995525},
      Month = {June},
      Pdf = {http://www.cc.gatech.edu/~irfan/p/2011-Grundmann-AVSWROCP.pdf},
      Publisher = {IEEE Computer Society},
      Title = {Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths},
      Url = {http://www.cc.gatech.edu/cpl/projects/videostabilization/},
      Video = {http://www.youtube.com/watch?v=i5keG1Y810U},
      Year = {2011},
      Bdsk-Url-1 = {http://www.cc.gatech.edu/cpl/projects/videostabilization/},
      Bdsk-Url-2 = {http://dx.doi.org/10.1109/CVPR.2011.5995525}}

YouTube effects: Shake it like an Instagram picture

via YouTube effects: Shake it like an Instagram picture — Online Video News.

YouTube users can now apply a number of Instagram-like effects to their videos, giving them a cartoonish or Lomo-like look with the click of a button. The effects are part of a new editing feature that also includes cropping and advanced image stabilization.

Taking the shaking out of video uploads should go a long way towards making some of the amateur footage captured on mobile phones more watchable, but it can also be resource-intensive — which is why Google’s engineers invented an entirely new approach toward image stabilization.

The new editing functionality will be part of YouTube’s video page, where a new “Edit video” button will offer access to filters and other editing functionality. This type of post-processing is separate from YouTube’s video editor, which allows users to produce new videos based on existing clips.


Going Live on YouTube (2011): Lights, Camera… EDIT! New Features for the YouTube Video Editor

March 21st, 2011 Irfan Essa Posted in Computational Photography and Video, Google, In The News, Matthias Grundmann, Multimedia, Vivek Kwatra, WWW

via YouTube Blog: Lights, Camera… EDIT! New Features for the YouTube Video Editor.

  • M. Grundmann, V. Kwatra, and I. Essa (2011), “Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011. [PDF] [WEBSITE] [VIDEO] [DEMO] [DOI] [BLOG] [BIBTEX]
    @inproceedings{2011-Grundmann-AVSWROCP,
      Author = {M. Grundmann and V. Kwatra and I. Essa},
      Blog = {http://prof.irfanessa.com/2011/06/19/videostabilization/},
      Booktitle = {Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      Date-Modified = {2013-10-22 13:55:15 +0000},
      Demo = {http://www.youtube.com/watch?v=0MiY-PNy-GU},
      Doi = {10.1109/CVPR.2011.5995525},
      Month = {June},
      Pdf = {http://www.cc.gatech.edu/~irfan/p/2011-Grundmann-AVSWROCP.pdf},
      Publisher = {IEEE Computer Society},
      Title = {Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths},
      Url = {http://www.cc.gatech.edu/cpl/projects/videostabilization/},
      Video = {http://www.youtube.com/watch?v=i5keG1Y810U},
      Year = {2011},
      Bdsk-Url-1 = {http://www.cc.gatech.edu/cpl/projects/videostabilization/},
      Bdsk-Url-2 = {http://dx.doi.org/10.1109/CVPR.2011.5995525}}

Lights, Camera… EDIT! New Features for the YouTube Video Editor

Nine months ago we launched our cloud-based video editor. It was a simple product built to provide our users with simple editing tools. Although it didn’t have all the features available on paid desktop editing software, the idea was that the vast majority of people’s video editing needs are pretty basic and straightforward, and we could provide these features with a free editor available on the Web. Since launch, hundreds of thousands of videos have been published using the YouTube Video Editor and we’ve regularly pushed out new feature enhancements to the product, including:

  • Video transitions (crossfade, wipe, slide)
  • The ability to save projects across sessions
  • Increased clips allowed in the editor from 6 to 17
  • Video rotation (from portrait to landscape and vice versa – great for videos shot on mobile)
  • Shape transitions (heart, star, diamond, and Jack-O-Lantern for Halloween)
  • Audio mixing (AudioSwap track mixed with original audio)
  • Effects (brightness/contrast, black & white)

  • A new user interface and project menu for multiple saved projects

While many of these are familiar features also available on desktop software, today, we’re excited to unveil two new features that the team has been working on over the last couple of months that take unique advantage of the cloud:

Stabilizer

Ever shoot a shaky video that’s so jittery, it’s actually hard to watch? Professional cinematographers use stabilization equipment such as tripods or camera dollies to keep their shots smooth and steady. Our team mimicked these cinematographic principles by automatically determining the best camera path for you through a unified optimization technique. In plain English, you can smooth some of those unsteady videos with the click of a button. We also wanted you to be able to preview these results in real-time, before publishing the finished product to the Web. We do this by harnessing the power of the cloud: the computation required for stabilizing the video is split into chunks and distributed across different servers. This allows us to use the power of many machines in parallel, computing and streaming the stabilized results quickly into the preview. You can check out the paper we’re publishing, entitled “Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths.” Want to see the stabilizer in action? You can test it out for yourself, or check out these two videos. The first is without the stabilizer.

And now, with the stabilizer:
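The path-optimization idea behind the stabilizer can be sketched with a toy stand-in. The actual paper solves an L1 problem whose solutions are piecewise constant, linear, or parabolic camera paths; the least-squares (L2) version below only illustrates the underlying goal of finding a smooth path that stays close to the shaky one:

```python
import numpy as np

# Toy stand-in for the stabilizer's path optimization: given a shaky
# 1-D camera path c, find a smooth path p close to c by penalizing
# its first and second differences (the paper's formulation instead
# minimizes L1 norms of the path's derivatives, subject to constraints).

def smooth_path(c, lam1=10.0, lam2=100.0):
    n = len(c)
    D1 = np.diff(np.eye(n), 1, axis=0)   # first-difference operator
    D2 = np.diff(np.eye(n), 2, axis=0)   # second-difference operator
    # Normal equations of: min ||p - c||^2 + lam1 ||D1 p||^2 + lam2 ||D2 p||^2
    A = np.eye(n) + lam1 * D1.T @ D1 + lam2 * D2.T @ D2
    return np.linalg.solve(A, c)

rng = np.random.default_rng(1)
shaky = np.cumsum(rng.normal(size=120))  # simulated jittery camera path
smooth = smooth_path(shaky)

# The smoothed path should jitter far less than the input.
print(np.abs(np.diff(smooth)).mean() < np.abs(np.diff(shaky)).mean())  # True
```

The design choice that makes the real method distinctive is the L1 norm: unlike the L2 penalty above, it drives whole stretches of the derivative exactly to zero, mimicking the static, panning, and dolly shots of professional cinematography.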
