EECS 331: Introduction to Computational Photography

Fall 2017 Tu-Thu 3:30-4:50pm - Professor Oliver (Ollie) Cossairt

Location: Loder 023

 

The Lytro Camera captures a 4D light field of a scene, enabling photographs to be digitally refocused after images are captured.
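
As a rough illustration of the refocusing idea (a minimal sketch, not Lytro's actual processing pipeline): treat the 4D light field as a collection of sub-aperture views, shift each view in proportion to its position within the aperture, and average; the amount of shift selects which depth plane appears in focus. In the hedged MATLAB sketch below, the array sizes, the random placeholder light field, and the refocus parameter alpha are illustrative assumptions.

% Shift-and-add refocusing sketch (illustrative): L(u,v,:,:) holds one
% sub-aperture view for each aperture position (u,v).
nu = 5; nv = 5; H = 64; W = 64;
L = rand(nu, nv, H, W);                      % placeholder 4D light field
alpha = 1.5;                                 % refocus parameter (chooses the focal plane)
refocused = zeros(H, W);
for u = 1:nu
    for v = 1:nv
        % Shift each view in proportion to its offset from the aperture center.
        du = (u - (nu+1)/2) * (1 - 1/alpha);
        dv = (v - (nv+1)/2) * (1 - 1/alpha);
        view = squeeze(L(u, v, :, :));
        refocused = refocused + imtranslate(view, [dv, du]);
    end
end
refocused = refocused / (nu * nv);           % average of the shifted views
imshow(refocused, []);

Varying alpha sweeps the synthetic focal plane through the scene, which is what "refocusing after capture" amounts to in this simple model.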

 

Computational illumination is used within the movie industry to render the performances of live actors into digital environments.

 

The Nvidia Tegra Shield is an Android-based tablet that features a 5-megapixel camera with an easy-to-use camera API.

 

 

Course Goals

To teach the fundamentals of modern camera architectures and give students hands-on experience acquiring, characterizing, and manipulating data captured using a modern camera platform. For example, students will learn how to estimate scene depth from a sequence of captured images.
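
As a preview of that kind of processing, here is a minimal MATLAB sketch (illustrative only, not an assignment solution) of depth from focus: given a stack of images taken at different focus settings, compute a per-pixel sharpness measure for each frame and take the index of the sharpest frame as a coarse depth estimate. The array sizes, the random placeholder focal stack, and the averaging window size are assumptions made for this example.

% Depth-from-focus sketch (illustrative): per pixel, pick the frame in a
% focal stack with the highest local sharpness.
H = 64; W = 64; N = 10;
stack = rand(H, W, N);                    % placeholder focal stack (grayscale, N focus settings)
sharpness = zeros(H, W, N);
for k = 1:N
    % Laplacian response highlights fine detail; its local energy is a simple focus measure.
    lap = imfilter(stack(:,:,k), fspecial('laplacian'), 'replicate');
    sharpness(:,:,k) = imfilter(lap.^2, fspecial('average', 9), 'replicate');
end
[~, depthIndex] = max(sharpness, [], 3);  % index of the sharpest frame per pixel
imagesc(depthIndex); axis image; colorbar;

In practice the focal stack would come from images captured on the tablet, and the focus measure and window size would need tuning for real data.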

 

Course Description

This course is the first in a two-part series that explores the emerging field of Computational Photography. Computational photography combines ideas from computer vision, computer graphics, and image processing to overcome limitations in image quality such as resolution, dynamic range, and defocus/motion blur. The course first covers the fundamentals of image sensing and modern cameras, then moves on to more advanced topics in computer vision, and uses these as a basis to explore recent topics in computational photography such as motion/defocus deblurring cameras, light field cameras, and computational illumination.
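
For instance, the dynamic-range limitation mentioned above is commonly addressed by merging an exposure bracket into a single high-dynamic-range image. The following MATLAB sketch shows the basic idea under simplifying assumptions (a linear sensor response, known exposure times, and random placeholder frames); it is only an illustration, not the method prescribed in the assignments.

% HDR merging sketch (illustrative): scale each linear exposure by its
% exposure time and average with weights that favor well-exposed pixels.
H = 64; W = 64;
exposures = [1/100, 1/25, 1/6];                    % assumed exposure times (seconds)
frames = cat(3, rand(H,W), rand(H,W), rand(H,W));  % placeholder linear images in [0,1]
numer = zeros(H, W); denom = zeros(H, W);
for k = 1:numel(exposures)
    I = frames(:,:,k);
    w = 1 - abs(2*I - 1);                   % hat weight: trust mid-tones, distrust clipped pixels
    numer = numer + w .* I / exposures(k);  % divide by exposure time to get relative radiance
    denom = denom + w;
end
hdr = numer ./ max(denom, eps);             % weighted-average radiance estimate
imshow(hdr / max(hdr(:)));                  % crude normalization for display

With real camera images the sensor response is generally nonlinear, so it must first be estimated (or RAW data used) before the exposures can be merged this way.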

 

This course will consist of six homework assignments and no midterm or final exam. We will provide an Nvidia Tegra tablet for each student in the course. Students will write programs that run on the tablet to capture photos. Enrollment is limited to 30 students.

 

Prerequisites

EECS 211 and/or EECS 230, or permission from the instructor. Students should have experience with C/C++ and MATLAB programming. If you are interested, please contact the instructor to discuss!

 

Coursework and Grading

The course will consist of 6 homework assignments. Each assignment will consist of some camera programming and some image processing. The camera programming will be done in C/C++ and the image processing will be done using MATLAB. 

 

Grading will be based on a 100 point system. The homeworks will constitute the bulk of the course grade (90 points in total). Class attendance will constitute the other 10 points. Instructions for completing each assignment can be found at the following links:

 

HW1: Hello World Application (15 points)

HW2: Measuring Sensor Noise (15 points)

HW3: Flash/No Flash Photography (15 points)

HW4: HDR Imaging (15 points)

HW5: Depth From Focus (15 points)

HW6: Synthetic Aperture Imaging (15 points)

 

A discussion for each homework assignment has been created on Blackboard. Please post all of your questions on the discussion board so that others may learn from your questions as well. Do not email the professor or TA directly with homework questions.

 

All homeworks are to be submitted via Canvas by 11:59pm on the due date. Each student will be permitted ONE late submission for partial credit. Two points will be docked for each 24-hour period past the deadline. For instance, if the homework is due Tuesday at 11:59pm and it is submitted Wednesday between 12:00am and 11:59pm, 2 points will be docked. If the assignment is submitted on Thursday between 12:00am and 11:59pm, 4 points will be docked, and so on. Only ONE late assignment per student will be awarded partial credit; any additional late assignments will receive no credit.

 

Course Syllabus

 

Tuesday 9/19/17

Introduction

 

Thursday 9/21/17

Image Formation

 

Tuesday 9/26/17

Image Sensing

Thursday 9/28/17

Image Processing I

 

Tuesday 10/3/17

Image Processing II

HW1 Due

Thursday 10/5/17

Flash and Lighting

 

Tuesday 10/10/17

Radiometry

 

Thursday 10/12/17

No Class

HW2 Due

Tuesday 10/17/17

HDR Imaging

 

Thursday 10/19/17

No Class

Tuesday 10/24/17

Photometric Stereo

HW3 Due

Thursday 10/26/17

Shape from Shading

 

Tuesday 10/31/17

Structured Light

Thursday 11/2/17

Depth from Focus

Tuesday 11/7/17

SIFT

HW4 Due

Thursday 11/9/17

Camera Calibration

 

Tuesday 11/14/17

Stereo

 

Thursday 11/16/17

No Class

 

Tuesday 11/21/17

Light Fields

HW5 Due

Thursday 11/23/17

Thanksgiving

 

Tuesday 11/28/17

Light Transport

 

Thursday 11/30/17

Selected Topics

HW6 Due

 

Texts

Computational photography is a new and exciting field. No standard texts on this topic are available yet. Reading material and class slides will be available before each class. Optional texts include:

_       Forsyth and Ponce. Computer Vision: A Modern Approach. Pearson. 2002.

_       Richard Szeliski. Computer Vision: Algorithms and Applications. Springer. 2010.

_       Berthold K. P. Horn. Robot Vision. The MIT Press. 1986.

_       R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press. 2000.

 

Course Instructor

Oliver (Ollie) Cossairt

Office: Rm 3-211 Ford Design Center

Email: Ollie@eecs.northwestern.edu

Office Phone: (847) 491-0895.

Office Hours: Tuesday 2:30-3:30pm

 

Teaching Assistants

Sushobhan Ghosh

Email: sushobhan04gosh@gmail.com

Office: Ford 3-230

Office Hours: Thursday 1:00-3:00pm

 

Useful Links

Similar Courses in Other Universities

_       Computational Photography (Gu, RIT)

_       Computational Photography SIGGRAPH Course (Raskar & Tumblin)

_       Computational Camera and Photography (Raskar, MIT)

_       Digital and Computational Photography (Durand & Freeman, MIT)

_       Computational Photography (Levoy & Wilburn, Stanford)

_       Computational Photography (Belhumeur, Columbia)

_       Computational Photography (Efros, CMU)

_       Computational Photography (Essa, Georgia Tech)

_       Computational Photography (Fergus, NYU)

_       Computer Vision (Seitz, U of Washington)

_       Computer Vision (Zhang, U of Wisconsin)

_       Computer Vision (Snavely, Cornell)

_       Introduction to Visual Computing (Kutulakos, U of Toronto)

More Links

_       What is Computational Camera, Shree Nayar, Columbia

_       Columbia Projects, Shree Nayar, Peter Belhumeur

_       MIT Projects, Fredo Durand, William Freeman, Edward Adelson, Antonio Torralba, Ramesh Raskar

_       Stanford Projects, Marc Levoy and collaborators

_       USC Projects, Paul Debevec and collaborators

_       CMU Projects, Narasimhan, Efros

_       Jack Tumblin's 'Questions' for the field

_       Conferences: ICCP 2011, ICCP 2010, ICCP 2009, SIGGRAPH, SIGGRAPH Asia, CVPR, ICCV, ECCV, ...

Acknowledgement

Many of the course materials are modified from the excellent class notes of similar courses offered at other schools by Shree Nayar, Marc Levoy, Jinwei Gu, Fredo Durand, and others. The instructor is extremely thankful to these researchers for making their notes available online.