TEDx Talk (2017) on “Bridging Human and Artificial Intelligence” at TEDxCentennialParkWomen

November 1st, 2017 Irfan Essa Posted in In The News, Interesting, Machine Learning, Presentations, Videos

A TEDx talk that I gave recently.

In this talk, the speaker takes you on a journey of how AI systems have evolved over time.

Dr. Irfan Essa is a professor in the School of Interactive Computing and the inaugural Director of Machine Learning at Georgia Tech. One of the fastest-growing research areas in computing, machine learning spans many disciplines that use data to discover scientific principles, infer patterns, and extract meaningful knowledge. Essa directs an interdisciplinary team studying ways machine learning connects information and actions to bring the most benefit to the most people.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx

Presentation at the Machine Learning Center at GA Tech on “The New Machine Learning Center at GA Tech: Plans and Aspirations”

March 1st, 2017 Irfan Essa Posted in Machine Learning, Presentations

Machine Learning at Georgia Tech Seminar Series

Speaker: Irfan Essa
Date/Time: March 1, 2017, 12 noon

Abstract

The Interdisciplinary Research Center (IRC) for Machine Learning at Georgia Tech (ML@GT) was established in Summer 2016 to foster research and academic activities in and around the discipline of Machine Learning. This center aims to create a community that leverages true cross-disciplinarity across all units on campus, establishes a home for the thought leaders in the area of Machine Learning, and creates programs to train the next generation of pioneers. In this talk, I will introduce the center, describe how we got here, outline the goals of this center, and lay out its foundational, application, and educational thrusts. The primary purpose of this talk is to solicit feedback on these technical thrusts, which are the areas we hope to focus on in the upcoming years. I will also describe, in brief, the new Ph.D. program that has been proposed and is pending approval. We will discuss upcoming events and plans for the future.

https://mediaspace.gatech.edu/media/essa/1_gfu6t21y


Presentation at Max-Planck-Institut für Informatik in Saarbrücken (2015): “Video Analysis and Enhancement”

September 14th, 2015 Irfan Essa Posted in Computational Journalism, Computational Photography and Video, Computer Vision, Presentations, Ubiquitous Computing

Video Analysis and Enhancement: Spatio-Temporal Methods for Extracting Content from Videos and Enhancing Video Output

Irfan Essa (prof.irfanessa.com)

Georgia Institute of Technology
School of Interactive Computing

Hosted by Max-Planck-Institut für Informatik in Saarbrücken (Bernt Schiele, Director of Computer Vision and Multimodal Computing)

Abstract 

In this talk, I will start by describing the pervasiveness of image and video content and how such content is growing with the ubiquity of cameras. I will use this to motivate the need for better tools for the analysis and enhancement of video content. I will start with some of our earlier work on temporal modeling of video, then lead up to some of our current work and describe two main projects: (1) our approach for a video stabilizer, currently implemented and running on YouTube, and its extensions, and (2) a robust and scalable method for video segmentation.

I will describe, in some detail, our video stabilization method, which generates stabilized videos and is in wide use on YouTube, with millions of users. Then I will describe an efficient and scalable technique for spatiotemporal segmentation of long video sequences using a hierarchical graph-based algorithm. I will also describe the videosegmentation.com site that we have developed to make this system available for wide use.

Finally, I will follow up with some recent work on image and video analysis in the mobile domain. I will also make some observations about the ubiquity of imaging and video in general and the need for better tools for video analysis.


Dagstuhl Workshop 2015: “Modeling and Simulation of Sport Games, Sport Movements, and Adaptations to Training”

September 13th, 2015 Irfan Essa Posted in Activity Recognition, Behavioral Imaging, Computer Vision, Human Factors, Modeling and Animation, Presentations

Participated in the Dagstuhl Workshop on “Modeling and Simulation of Sport Games, Sport Movements, and Adaptations to Training” at the Dagstuhl Castle, September 13–16, 2015.

Motivation

Computational modeling and simulation are essential to analyzing human motion and interaction in sports science. Applications range from game analysis and issues in training science, such as the training load-adaptation relationship and motor control and learning, to biomechanical analysis. The motivation of this seminar is to enable an interdisciplinary exchange between sports and computer scientists to advance modeling and simulation technologies in selected fields of application: sport games, sport movements, and adaptations to training. In addition, contributions to the epistemic basics of modeling and simulation are welcome.

Source: Schloss Dagstuhl : Seminar Homepage

Past Seminars on this topic include


Presentation at Max-Planck-Institute for Intelligent Systems in Tübingen (2015): “Data-Driven Methods for Video Analysis and Enhancement”

September 10th, 2015 Irfan Essa Posted in Computational Photography and Video, Computer Vision, Machine Learning, Presentations

Data-Driven Methods for Video Analysis and Enhancement

Irfan Essa (prof.irfanessa.com)
Georgia Institute of Technology

Thursday, September 10, 2 pm,
Max Planck House Lecture Hall (Spemannstr. 36)
Hosted by Max-Planck-Institute for Intelligent Systems (Michael Black, Director of Perceiving Systems)

Abstract

In this talk, I will start by describing the pervasiveness of image and video content and how such content is growing with the ubiquity of cameras. I will use this to motivate the need for better tools for the analysis and enhancement of video content. I will start with some of our earlier work on temporal modeling of video, then lead up to some of our current work and describe two main projects: (1) our approach for a video stabilizer, currently implemented and running on YouTube, and its extensions, and (2) a robust and scalable method for video segmentation.

I will describe, in some detail, our video stabilization method, which generates stabilized videos and is in wide use. Our method allows for video stabilization beyond the conventional filtering that only suppresses high-frequency jitter. It also supports the removal of rolling shutter distortions common in modern CMOS cameras, which capture the frame one scan-line at a time, resulting in non-rigid image distortions such as shear and wobble. Our method does not rely on a priori knowledge and works on video from any camera or on legacy footage. I will showcase examples of this approach and also discuss how this method was launched and is running on YouTube, with millions of users.
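To make the stabilization idea concrete, here is a toy sketch of camera-path smoothing, the core notion behind steadying a trajectory. This is not the method deployed on YouTube (which optimizes the camera path rather than filtering it); it is a minimal moving-average smoother over an assumed per-frame 2D translation path, and the function name, window size, and synthetic path are all illustrative choices:

```python
import numpy as np

def smooth_camera_path(path, window=15):
    """Smooth a per-frame 2D camera trajectory with a moving average.

    path: (N, 2) array of estimated per-frame camera translations (x, y).
    Returns per-frame corrective offsets; applying each offset as an image
    warp would move that frame onto the smoothed path.
    """
    kernel = np.ones(window) / window
    # Pad with edge values so the smoothed path keeps the input length.
    pad = (window // 2, window - window // 2 - 1)
    padded = np.pad(path, (pad, (0, 0)), mode="edge")
    smoothed = np.stack(
        [np.convolve(padded[:, d], kernel, mode="valid") for d in range(2)],
        axis=1,
    )
    return smoothed - path

# Toy example: a shaky diagonal pan (steady drift plus per-frame jitter).
rng = np.random.default_rng(0)
raw = np.cumsum(rng.normal(0.0, 1.0, size=(100, 2)), axis=0) + np.arange(100)[:, None]
offsets = smooth_camera_path(raw)
```

A real stabilizer would first estimate the inter-frame motion (e.g., via feature tracking) and then apply the offsets as image warps; the moving average here merely stands in for more principled path optimization.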

Then I will describe an efficient and scalable technique for spatiotemporal segmentation of long video sequences using a hierarchical graph-based algorithm. This hierarchical approach generates high-quality segmentations, and we demonstrate the use of this segmentation as users interact with the video, enabling efficient annotation of objects within the video. I will also show some recent work on how this segmentation and annotation can be used for dynamic scene understanding.
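To illustrate the graph-based core of such segmentation, here is a minimal single-frame sketch (not the hierarchical spatiotemporal method itself): pixels are graph nodes, neighboring pixels are joined by edges weighted by intensity difference, and regions are merged greedily from the weakest edges upward using union-find. The function name and the fixed threshold `tau` are illustrative assumptions, not part of the actual system:

```python
import numpy as np

def segment_frame(frame, tau=10.0):
    """Greedy graph-based segmentation of one grayscale frame (illustrative).

    Builds a 4-connected pixel graph with edge weights equal to intensity
    difference, then merges regions connected by edges below tau, weakest
    edges first, using union-find with path halving.
    """
    h, w = frame.shape
    parent = np.arange(h * w)

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # Collect 4-connected edges: (weight, node_a, node_b).
    edges = []
    for y in range(h):
        for x in range(w):
            i = y * w + x
            if x + 1 < w:
                edges.append((abs(float(frame[y, x]) - float(frame[y, x + 1])), i, i + 1))
            if y + 1 < h:
                edges.append((abs(float(frame[y, x]) - float(frame[y + 1, x])), i, i + w))

    # Merge weakest edges first; strong edges (>= tau) keep regions apart.
    for wgt, a, b in sorted(edges):
        if wgt < tau:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[rb] = ra

    return np.array([find(i) for i in range(h * w)]).reshape(h, w)

# Two flat regions separated by a strong intensity edge stay distinct.
frame = np.zeros((4, 6))
frame[:, 3:] = 100.0
labels = segment_frame(frame)
```

The hierarchical method extends this idea by re-running the merging on a graph of regions (rather than pixels) across space and time, producing segmentations at multiple granularities.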

I will then follow up with some recent work on image and video analysis in the mobile domain. I will also make some observations about the ubiquity of imaging and video in general and the need for better tools for video analysis.
