Paper in M2CAI (MICCAI workshop) on “Fine-tuning Deep Architectures for Surgical Tool Detection” and results of the Tool Detection Challenge

October 21st, 2016 Irfan Essa Posted in Aneeq Zia, Awards, Computer Vision, Daniel Castro, Medical, MICCAI

Paper

  • A. Zia, D. Castro, and I. Essa (2016), “Fine-tuning Deep Architectures for Surgical Tool Detection,” in Workshop and Challenges on Modeling and Monitoring of Computer Assisted Interventions (M2CAI), Held in Conjunction with International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Athens, Greece, 2016. [PDF] [WEBSITE] [BIBTEX]
    @InProceedings{    2016-Zia-FDASTD,
      address  = {Athens, Greece},
      author  = {Aneeq Zia and Daniel Castro and Irfan Essa},
      booktitle  = {Workshop and Challenges on Modeling and Monitoring
          of Computer Assisted Interventions (M2CAI), Held in
          Conjunction with International Conference on Medical
          Image Computing and Computer Assisted Intervention
          (MICCAI)},
      month    = {October},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/2016-Zia-FDASTD.pdf},
      title    = {Fine-tuning Deep Architectures for Surgical Tool
          Detection},
      url    = {http://www.cc.gatech.edu/cpl/projects/deepm2cai/},
      year    = {2016}
    }

Abstract

Visualization of some of the training videos.

Understanding surgical workflow has been a key concern of the medical research community. One of the main advantages of surgical workflow detection is real-time operating room (OR) scheduling. For hospitals, each minute of OR time is important in order to reduce cost and increase patient throughput. Traditional approaches in this field generally tackle video analysis using hand-crafted features to facilitate tool detection. Recently, Twinanda et al. presented a CNN architecture, ‘EndoNet’, which outperformed previous methods for both surgical tool detection and surgical phase detection. Given the recent success of these networks, we present a study of various architectures coupled with a submission to the M2CAI Surgical Tool Detection challenge. We achieved a top-3 result in the M2CAI competition with a mean average precision (mAP) of 37.6.
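As a rough illustration of the fine-tuning recipe the paper studies, the sketch below adapts an ImageNet-pretrained backbone to multi-label tool-presence prediction with a per-tool sigmoid loss. The ResNet-50 backbone, the frozen layers, and the seven-way output are assumptions made for this example, not the exact configuration reported in the paper.

    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_TOOLS = 7  # the m2cai16-tool challenge annotates 7 tool classes per frame

    # Start from an ImageNet-pretrained backbone (ResNet-50 here is an assumption,
    # not necessarily one of the architectures compared in the paper).
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, NUM_TOOLS)  # replace the classifier head

    # Optionally freeze the early layers and fine-tune only the last block and the head.
    for name, param in model.named_parameters():
        if not (name.startswith("layer4") or name.startswith("fc")):
            param.requires_grad = False

    criterion = nn.BCEWithLogitsLoss()  # multi-label: several tools can be visible at once
    optimizer = torch.optim.SGD(
        (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9)

    def train_step(frames, tool_labels):
        """frames: (B, 3, 224, 224) float tensor; tool_labels: (B, NUM_TOOLS) in {0, 1}."""
        optimizer.zero_grad()
        loss = criterion(model(frames), tool_labels.float())
        loss.backward()
        optimizer.step()
        return loss.item()

Per-frame tool presence is then scored by applying a sigmoid to the logits, and mAP is computed from the per-tool precision-recall curves.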

 


Paper (ACM MM 2016) “Leveraging Contextual Cues for Generating Basketball Highlights”

October 18th, 2016 Irfan Essa Posted in ACM MM, Caroline Pantofaru, Computational Photography and Video, Computer Vision, Papers, Sports Visualization, Vinay Bettadapura

Paper

  • V. Bettadapura, C. Pantofaru, and I. Essa (2016), “Leveraging Contextual Cues for Generating Basketball Highlights,” in Proceedings of ACM International Conference on Multimedia (ACM-MM), 2016. [PDF] [WEBSITE] [arXiv] [BIBTEX]
    @InProceedings{    2016-Bettadapura-LCCGBH,
      arxiv    = {http://arxiv.org/abs/1606.08955},
      author  = {Vinay Bettadapura and Caroline Pantofaru and Irfan
          Essa},
      booktitle  = {Proceedings of ACM International Conference on
          Multimedia (ACM-MM)},
      month    = {October},
      organization  = {ACM},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/2016-Bettadapura-LCCGBH.pdf},
      title    = {Leveraging Contextual Cues for Generating
          Basketball Highlights},
      url    = {http://www.vbettadapura.com/highlights/basketball/index.htm},
      year    = {2016}
    }

Abstract


Leveraging Contextual Cues for Generating Basketball Highlights

The massive growth of sports videos has resulted in a need for automatic generation of sports highlights that are comparable in quality to the hand-edited highlights produced by broadcasters such as ESPN. Unlike previous works that mostly use audio-visual cues derived from the video, we propose an approach that additionally leverages contextual cues derived from the environment that the game is being played in. The contextual cues provide information about the excitement levels in the game, which can be ranked and selected to automatically produce high-quality basketball highlights. We introduce a new dataset of 25 NCAA games along with their play-by-play stats and the ground-truth excitement data for each basket. We explore the informativeness of five different cues derived from the video and from the environment through user studies. Our experiments show that for our study participants, the highlights produced by our system are comparable to the ones produced by ESPN for the same games.
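As a toy illustration of the ranking step described above, the sketch below combines a few contextual and audio-visual cue scores for each basket into a single excitement value and keeps the top clips. The specific cues, the linear weighting, and the field names are hypothetical; the paper determines cue informativeness through user studies rather than with hand-picked weights.

    from dataclasses import dataclass

    @dataclass
    class Basket:
        clip_id: str
        audio_energy: float      # crowd/commentator loudness, normalized to [0, 1]
        score_differential: int  # point margin when the basket was made
        time_remaining_s: float  # seconds left in the period
        player_rank: float       # importance of the scoring player, in [0, 1]

    def excitement(b: Basket) -> float:
        # Hypothetical linear combination: close games and late baskets are more exciting.
        closeness = 1.0 / (1.0 + abs(b.score_differential))
        urgency = 1.0 / (1.0 + b.time_remaining_s / 60.0)
        return 0.4 * b.audio_energy + 0.3 * closeness + 0.2 * urgency + 0.1 * b.player_rank

    def top_highlights(baskets: list, k: int = 10) -> list:
        """Rank all baskets by excitement and keep the k most exciting clips."""
        return sorted(baskets, key=excitement, reverse=True)[:k]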


Announcing the new Interdisciplinary Research Center for Machine Learning at Georgia Tech (ML@GT)

October 6th, 2016 Irfan Essa Posted in In The News, Interesting, Machine Learning

Announcement from Georgia Tech’s College of Computing about a new Interdisciplinary Research Center for Machine Learning (ML@GT), for which I will be serving as the inaugural director.

Machine Learning @ Georgia Tech

Based in the College of Computing, ML@GT represents all of Georgia Tech. It is tasked with pushing forward the ability for computers to learn from observations and data. As one of the fastest growing research areas in computing, machine learning spans many disciplines that use data to discover scientific principles, infer patterns, and extract meaningful knowledge.

According to School of Interactive Computing Professor Irfan Essa, inaugural director of ML@GT, machine learning (ML) has reached a new level of maturity and is now impacting all aspects of computing, engineering, science, and business. “We are in the era of aggregation, of collecting data,” said Essa. “However, machine learning is now propelling data analysis, and the whole concept of interpreting that data, toward a new era of making sense of the data, using it to make meaningful connections between information, and acting upon it in innovative ways that bring the most benefit to the most people.”

The new center begins with more than 100 affiliated faculty members from five Georgia Tech colleges and the Georgia Tech Research Institute, as well as some jointly affiliated with Emory University.

Source: Two New Interdisciplinary Research Centers Shaping Future of Computing | Georgia Tech – College of Computing


Paper in IJCARS (2016) on “Automated video-based assessment of surgical skills for training and evaluation in medical schools”

September 2nd, 2016 Irfan Essa Posted in Activity Recognition, Aneeq Zia, Computer Vision, Eric Sarin, Mark Clements, Medical, MICCAI, Thomas Ploetz, Vinay Bettadapura, Yachna Sharma

Paper

  • A. Zia, Y. Sharma, V. Bettadapura, E. L. Sarin, T. Ploetz, M. A. Clements, and I. Essa (2016), “Automated video-based assessment of surgical skills for training and evaluation in medical schools,” International Journal of Computer Assisted Radiology and Surgery, vol. 11, iss. 9, pp. 1623-1636, 2016. [WEBSITE] [DOI] [BIBTEX]
    @Article{    2016-Zia-AVASSTEMS,
      author  = {Zia, Aneeq and Sharma, Yachna and Bettadapura,
          Vinay and Sarin, Eric L and Ploetz, Thomas and
          Clements, Mark A and Essa, Irfan},
      doi    = {10.1007/s11548-016-1468-2},
      journal  = {International Journal of Computer Assisted
          Radiology and Surgery},
      month    = {September},
      number  = {9},
      pages    = {1623--1636},
      publisher  = {Springer Berlin Heidelberg},
      title    = {Automated video-based assessment of surgical skills
          for training and evaluation in medical schools},
      url    = {http://link.springer.com/article/10.1007/s11548-016-1468-2},
      volume  = {11},
      year    = {2016}
    }

Abstract


Sample frames from our video dataset

Purpose: Routine evaluation of basic surgical skills in medical schools requires considerable time and effort from supervising faculty. For each surgical trainee, a supervisor has to observe the trainee in person. Alternatively, supervisors may use training videos, which reduces some of the logistical overhead. All of these approaches, however, are still very time-consuming and involve human bias. In this paper, we present an automated system for surgical skills assessment by analyzing video data of surgical activities.

Method: We compare different techniques for video-based surgical skill evaluation. We use techniques that capture the motion information at a coarser granularity using symbols or words, extract motion dynamics using textural patterns in a frame kernel matrix, and analyze fine-grained motion information using frequency analysis.

Results: We were able to classify surgeons into different skill levels with high accuracy. Our results indicate that fine-grained analysis of motion dynamics via frequency analysis is most effective in capturing the skill-relevant information in surgical videos.

Conclusion: Our evaluations show that frequency features perform better than motion texture features, which in turn perform better than symbol/word-based features. Put succinctly, skill classification accuracy is positively correlated with motion granularity as demonstrated by our results on two challenging video datasets.
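A minimal sketch of the frequency-analysis idea, assuming the motion information is available as a per-frame time series (for example, tracked tool or hand coordinates): take the low-order DCT coefficients of each motion dimension as features and train a standard classifier on them. The coefficient cutoff, the linear SVM, and the variable names are illustrative assumptions, not the exact pipeline evaluated in the paper.

    import numpy as np
    from scipy.fft import dct
    from sklearn.svm import SVC

    def frequency_features(motion: np.ndarray, k: int = 20) -> np.ndarray:
        """motion: (T, D) time series of motion measurements (T frames, D dimensions).
        Returns the magnitudes of the first k DCT coefficients of each dimension."""
        coeffs = dct(motion, axis=0, norm="ortho")  # frequency content along the time axis
        return np.abs(coeffs[:k]).ravel()           # low frequencies capture smoothness of motion

    # Hypothetical usage: motion_series is a list of per-video (T, D) arrays and
    # skill_labels holds the corresponding expert/intermediate/novice labels.
    # X = np.stack([frequency_features(m) for m in motion_series])
    # clf = SVC(kernel="linear").fit(X, skill_labels)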


Research Blog: Motion Stills – Create beautiful GIFs from Live Photos

June 7th, 2016 Irfan Essa Posted in Computational Photography and Video, Computer Vision, In The News, Interesting, Matthias Grundmann, Projects

Kudos to the team from Machine Perception at Google Research that just launched the Motion Stills app, which generates novel photos on an iOS device. This work is in part aimed at combining efforts like Video Textures and Video Stabilization, and a lot more.

Today we are releasing Motion Stills, an iOS app from Google Research that acts as a virtual camera operator for your Apple Live Photos. We use our video stabilization technology to freeze the background into a still photo or create sweeping cinematic pans. The resulting looping GIFs and movies come alive, and can easily be shared via messaging or on social media.

Source: Research Blog: Motion Stills – Create beautiful GIFs from Live Photos


Paper (WACV 2016) “Discovering Picturesque Highlights from Egocentric Vacation Videos”

March 7th, 2016 Irfan Essa Posted in Computational Photography and Video, Computer Vision, Daniel Castro, PAMI/ICCV/CVPR/ECCV, Vinay Bettadapura

Paper

  • D. Castro, V. Bettadapura, and I. Essa (2016), “Discovering Picturesque Highlights from Egocentric Vacation Video,” in Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV), 2016. [PDF] [WEBSITE] [arXiv] [BIBTEX]
    @InProceedings{    2016-Castro-DPHFEVV,
      arxiv    = {http://arxiv.org/abs/1601.04406},
      author  = {Daniel Castro and Vinay Bettadapura and Irfan
          Essa},
      booktitle  = {Proceedings of IEEE Winter Conference on
          Applications of Computer Vision (WACV)},
      month    = {March},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/2016-Castro-DPHFEVV.pdf},
      title    = {Discovering Picturesque Highlights from Egocentric
          Vacation Video},
      url    = {http://www.cc.gatech.edu/cpl/projects/egocentrichighlights/},
      year    = {2016}
    }

Abstract

We present an approach for identifying picturesque highlights from large amounts of egocentric video data. Given a set of egocentric videos captured over the course of a vacation, our method analyzes the videos and looks for images that have good picturesque and artistic properties. We introduce novel techniques to automatically determine aesthetic features such as composition, symmetry, and color vibrancy in egocentric videos and rank the video frames based on their photographic qualities to generate highlights. Our approach also uses contextual information such as GPS, when available, to assess the relative importance of each geographic location where the vacation videos were shot. Furthermore, we specifically leverage the properties of egocentric videos to improve our highlight detection. We demonstrate results on a new egocentric vacation dataset which includes 26.5 hours of videos taken over a 14-day vacation that spans many famous tourist destinations, and also provide results from a user study to assess our results.
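As an example of one such aesthetic cue, the sketch below scores frames with the Hasler-Süsstrunk colorfulness measure, a common stand-in for color vibrancy, and ranks candidate frames by it. This is only one cue under assumed names; the paper combines several aesthetic and contextual features rather than relying on vibrancy alone.

    import numpy as np

    def colorfulness(rgb: np.ndarray) -> float:
        """Hasler-Susstrunk colorfulness of an RGB frame (H, W, 3) with values in [0, 255]."""
        r = rgb[..., 0].astype(float)
        g = rgb[..., 1].astype(float)
        b = rgb[..., 2].astype(float)
        rg = r - g                      # red-green opponent channel
        yb = 0.5 * (r + g) - b          # yellow-blue opponent channel
        std_root = np.sqrt(rg.std() ** 2 + yb.std() ** 2)
        mean_root = np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)
        return std_root + 0.3 * mean_root

    def rank_by_vibrancy(frames: list) -> list:
        """Order candidate frames from most to least vibrant."""
        return sorted(frames, key=colorfulness, reverse=True)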

 


Spring 2016 Teaching

January 10th, 2016 Irfan Essa Posted in Computational Photography, Computational Photography and Video, Computer Vision

My teaching activities for Spring 2016 are:


Paper in MICCAI (2015): “Automated Assessment of Surgical Skills Using Frequency Analysis”

October 6th, 2015 Irfan Essa Posted in Activity Recognition, Aneeq Zia, Eric Sarin, Mark Clements, Medical, MICCAI, Papers, Vinay Bettadapura, Yachna Sharma

Paper

  • A. Zia, Y. Sharma, V. Bettadapura, E. Sarin, M. Clements, and I. Essa (2015), “Automated Assessment of Surgical Skills Using Frequency Analysis,” in International Conference on Medical Image Computing and Computer Assisted Interventions (MICCAI), 2015. [PDF] [BIBTEX]
    @InProceedings{    2015-Zia-AASSUFA,
      author  = {A. Zia and Y. Sharma and V. Bettadapura and E.
          Sarin and M. Clements and I. Essa},
      booktitle  = {International Conference on Medical Image Computing
          and Computer Assisted Interventions (MICCAI)},
      month    = {October},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/2015-Zia-AASSUFA.pdf},
      title    = {Automated Assessment of Surgical Skills Using
          Frequency Analysis},
      year    = {2015}
    }

Abstract

We present an automated framework for visual assessment of the expertise level of surgeons using the OSATS (Objective Structured Assessment of Technical Skills) criteria. A video analysis technique for extracting motion quality via frequency coefficients is introduced. The framework is tested in a case study that involved analysis of videos of medical students with different expertise levels performing basic surgical tasks in a surgical training lab setting. We demonstrate that transforming the sequential time data into frequency components effectively extracts the useful information differentiating between different skill levels of the surgeons. The results show significant performance improvements using DFT and DCT coefficients over known state-of-the-art techniques.
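For reference, the frequency coefficients referred to above are the standard DFT and DCT (type II) transforms of a motion time series x_0, ..., x_{T-1} (one such series per tracked motion dimension); the paper uses such coefficients as features, and the exact normalization and truncation are implementation details not restated here:

    X^{\mathrm{DFT}}_k = \sum_{t=0}^{T-1} x_t \, e^{-2\pi i k t / T},
    \qquad
    X^{\mathrm{DCT}}_k = \sum_{t=0}^{T-1} x_t \cos\!\left[\frac{\pi}{T}\left(t + \tfrac{1}{2}\right) k\right],
    \qquad k = 0, \ldots, T-1.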


2015 C+J Symposium

October 2nd, 2015 Irfan Essa Posted in Computational Journalism, Nick Diakopoulos

Data and computation drive our world, often without sufficient critical assessment or accountability. Journalism is adapting responsibly, finding and creating new kinds of stories that respond directly to our new societal condition. Join us for a two-day conference exploring the interface between journalism and computing. October 2-3, New York, NY. #CJ2015

Source: 2015 C+J Symposium

Participated in the 4th Computation+Journalism Symposium, October 2-3, in New York, NY, at The Brown Institute for Media Innovation, Pulitzer Hall, Columbia University. The keynote speakers were Lada Adamic (Facebook) and Chris Wiggins (Columbia, NYT), with two curated panels and five sessions of peer-reviewed papers.

Past symposia were held in

  • Atlanta, GA (CJ 2008, hosted by Georgia Tech),
  • Atlanta, GA (CJ 2013, hosted by Georgia Tech), and
  • NYC, NY (CJ 2014, hosted by Columbia U).
  • The next one will be hosted by Stanford and held in Palo Alto, CA.

Presentation at Max-Planck-Institut für Informatik in Saarbrücken (2015): “Video Analysis and Enhancement”

September 14th, 2015 Irfan Essa Posted in Computational Journalism, Computational Photography and Video, Computer Vision, Presentations, Ubiquitous Computing

Video Analysis and Enhancement: Spatio-Temporal Methods for Extracting Content from Videos and Enhancing Video Output

Irfan Essa (prof.irfanessa.com)

Georgia Institute of Technology
School of Interactive Computing

Hosted by the Max-Planck-Institut für Informatik in Saarbrücken (Bernt Schiele, Director of Computer Vision and Multimodal Computing).

Abstract 

In this talk, I will start by describing the pervasiveness of image and video content, and how such content is growing with the ubiquity of cameras. I will use this to motivate the need for better tools for analysis and enhancement of video content. I will begin with some of our earlier work on temporal modeling of video, then lead up to some of our current work and describe two main projects: (1) our approach for a video stabilizer, currently implemented and running on YouTube, and its extensions, and (2) a robust and scalable method for video segmentation.

I will describe, in some detail, our video stabilization method, which generates stabilized videos and is in wide use on YouTube, with millions of users. Then I will describe an efficient and scalable technique for spatiotemporal segmentation of long video sequences using a hierarchical graph-based algorithm, and the videosegmentation.com site that we have developed to make this system available for wide use.
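As a rough illustration of the stabilization idea (not the actual YouTube stabilizer, which fits L1-optimal, cinematography-style camera paths with cropping constraints), the sketch below estimates per-frame motion with OpenCV, smooths the accumulated camera path with a simple moving average, and warps each frame toward the smoothed path.

    import cv2
    import numpy as np

    def smooth_path_stabilize(frames, radius=15):
        """Toy stabilizer: track features, accumulate a (dx, dy, angle) camera path,
        smooth it with a moving average, and warp frames to the smoothed path."""
        transforms = []
        prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
        for frame in frames[1:]:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                          qualityLevel=0.01, minDistance=30)
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
            good = status.ravel() == 1
            m, _ = cv2.estimateAffinePartial2D(pts[good], nxt[good])
            transforms.append((m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])))
            prev_gray = gray
        path = np.cumsum(transforms, axis=0)                 # raw camera trajectory
        kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
        smoothed = np.column_stack(
            [np.convolve(path[:, i], kernel, mode="same") for i in range(3)])
        correction = smoothed - path                         # how far to nudge each frame
        out = [frames[0]]
        for i, frame in enumerate(frames[1:]):
            dx, dy, da = np.asarray(transforms[i]) + correction[i]
            warp = np.array([[np.cos(da), -np.sin(da), dx],
                             [np.sin(da),  np.cos(da), dy]])
            h, w = frame.shape[:2]
            out.append(cv2.warpAffine(frame, warp, (w, h)))
        return out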

Finally, I will follow up with some recent work on image and video analysis in the mobile domain. I will also make some observations about the ubiquity of imaging and video in general and the need for better tools for video analysis.
