Digital image

Rays of light -> Picture
Illumination, optics, sensor, processing, display

What makes an image a “computable” object?
A digital image

digital image – pixel and image resolution
discrete (matrix) and continuous (function) representations
grayscale and color images
digital image formats

A digital image (W × H)
columns and rows
width and height
square image -> 512 × 512 = 262,144 pixels ≈ 0.26 MP
MP: megapixels

Numeric representation in 2-D (x and y)
Written I(x, y) as a continuous function, I(i, j) in discrete (matrix) form
Image resolution expressed in terms of the width and height of the image
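The discrete (matrix) view can be made concrete with a minimal NumPy sketch; the synthetic gradient image here is made up for illustration, and it also checks the megapixel arithmetic above:

```python
import numpy as np

# A grayscale digital image is just a 2-D matrix I(i, j) of intensities.
# Here we build a synthetic 512 x 512 image (a horizontal gradient).
H, W = 512, 512
image = np.tile(np.linspace(0, 255, W, dtype=np.uint8), (H, 1))

# Image resolution is width x height; the pixel count gives the MP rating.
pixels = W * H
megapixels = pixels / 1e6
print(image.shape)           # (512, 512)
print(pixels)                # 262144
print(round(megapixels, 2))  # 0.26
```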

Extend FP / DP (film / digital photography)

Compared to FP/DP, CP (computational photography) offers better specification of and support for
Dynamic range
Vary focus point-by-point
Field of view vs. resolution
Exposure time and frame rate
Bursts
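One reason bursts matter computationally: merging N frames suppresses sensor noise by roughly √N. A minimal sketch with a made-up flat scene and synthetic Gaussian noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene: a flat gray patch (value 100) imaged N times
# with additive Gaussian read noise (sigma = 10).
scene = np.full((64, 64), 100.0)
N, sigma = 16, 10.0
burst = scene + rng.normal(0.0, sigma, size=(N, 64, 64))

# Merging the burst by averaging cuts noise by roughly sqrt(N).
merged = burst.mean(axis=0)
print(burst[0].std())  # close to 10
print(merged.std())    # close to 10 / sqrt(16) = 2.5
```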

Images in News
Kennedy Assassination
Rodney King Beatings in LA
9/11 images
7/7 London bombing
Virginia Tech
Michael Richards
Russian meteor
The Beast with a Billion Eyes (literally)

Participatory data
handheld, citizen journalism, etc.
Institutional imagery
satellite, airborne, reconnaissance, UAV, etc.
Incidental imagery
security cameras, ATMs, etc.

Computer vision and computer graphics
vision: images (2D) -> geometry (3D) + photometry (appearance); graphics runs the arrow the other way

Ultimate camera
object, light rays, lens, film (retina)

Emerging Field of computational photography
what will a camera look like in 10 years? 20 years?
what novel images can we get? what are their uses?

SLR (film)

- Smartphone cameras come onto the market: several billion devices
- Number of photos taken each year keeps growing

DSLR advantages
– more light
– depth of field
– shutter lag
– control field of view
– better glass
– other (flash, manual modes, …)

phone advantages
– computation
– data
– programmers

Film and digital cameras have roughly the same features and controls
– zoom and focus
– aperture and exposure
– shutter release and advance
– one shutter press = one snapshot

For FP/DP these are fixed at capture time, but CP allows us to change
optics, illumination, sensor, movement
and exploit wavelength, speed, depth, polarization, etc.

Probes, actuators, network

Why study computational photography?

1. Pervasiveness of photography
2. Computational photography as it relates to other disciplines
3. Computational photography vs. photography

Traditional Film/Digital Camera processes
Novel Camera
Sensor/Detector, lens
generalized optics (ray bender)

Almost everyone has a camera
e.g. smaller, ubiquitous
Significant improvements in optics
The field of applied optics has studied
every aspect of the lens

Camera phones
the widest-selling electronics platform
Google Earth, YouTube, Flickr
text, speech, music, images, video, 3D…
a key element for art, research, products, social computing…

Panorama

3D Scene -> illumination -> Optics -> Sensor -> Processing -> Display -> User

7 pictures, each 3012 × 2304 (7 MP)
merged panorama: 11262 × 2691 (~30 MP); full FOV: 15172 × 2446

Taking the pictures
these days, robotic cameras can capture them automatically
Match and merge the pictures:
images with some overlap can be merged to create a panorama

Detection and matching
features detected in one picture are matched against the other
Warping
aligning the matched images into a common frame

Fade, blend, or cut across the seams

Five steps to make a panorama
1. Capture images
2. Detection and matching
3. Warping
4. Blending, fading, cutting
5. Cropping
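The matching and blending steps can be sketched in miniature. Here, assuming two synthetic overlapping crops of one random scene (all data made up), matching is reduced to a brute-force column-offset search and blending to a linear fade across the overlap:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical "photos": horizontal crops of one wider scene,
# overlapping by 40 columns.
scene = rng.random((50, 160))
left, right = scene[:, :100], scene[:, 60:]

# Step 2 (detection and matching), reduced to its simplest form:
# slide `right` along the tail of `left` and pick the offset where
# the overlapping columns agree best (sum of squared differences).
best_offset, best_err = None, np.inf
for off in range(1, 100):
    overlap = 100 - off
    err = np.sum((left[:, off:] - right[:, :overlap]) ** 2)
    if err < best_err:
        best_offset, best_err = off, err
print(best_offset)  # 60

# Step 4 (blending): linear fade across the overlap region.
overlap = 100 - best_offset
ramp = np.linspace(0, 1, overlap)
pano = np.zeros((50, best_offset + right.shape[1]))
pano[:, :100] = left
pano[:, 100:] = right[:, overlap:]
pano[:, best_offset:100] = (1 - ramp) * left[:, best_offset:] + ramp * right[:, :overlap]
print(pano.shape)  # (50, 160)
```

Real pipelines replace the offset search with feature detection/matching (e.g. in OpenCV) and the translation with a full homography warp, but the shape of the computation is the same.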

Dual Photography

The concept of “dual photography”
Computational photography (rays to pixels)

Novel illumination
a photocell captures the light

Projector (controllable light source), modulator (controllable aperture)

Novel camera
modulator (controllable aperture)

Reflective properties of a ray of light
light source, light sensor
Reflection depends on the kind of surface: specular (mirror) vs. diffuse (matte)

projector pattern – camera image
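The projector-pattern/camera-image relationship can be written as a light-transport matrix T, and the “dual” photograph, the scene as seen from the projector's viewpoint, follows from the transpose of T (Helmholtz reciprocity). A toy numerical sketch with a made-up T:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy light-transport matrix T: entry T[c, p] says how much light from
# projector pixel p reaches camera pixel c (values invented for illustration).
n_proj, n_cam = 6, 4
T = rng.random((n_cam, n_proj)) * 0.1

# Primal photograph: camera image c = T @ p for a projector pattern p.
pattern = np.zeros(n_proj)
pattern[2] = 1.0              # light a single projector pixel
camera_image = T @ pattern    # equals column 2 of T

# Dual photograph: by Helmholtz reciprocity, swapping the roles of
# projector and camera corresponds to transposing T, so the image
# "seen from the projector" under camera-side illumination l is T.T @ l.
light = np.ones(n_cam)
dual_image = T.T @ light
print(camera_image.shape, dual_image.shape)  # (4,) (6,)
```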

Enables Imaging

Unbounded Dynamic Range
Variable
-Focus
-Depth of Field
-Resolution
-Lighting
-Reflectance

Supports and enhances the medium of photography

3D scene

(1) Illumination
(2) Optics
(3) Sensor
(4) Processing
(5) Display
(6) User
Computation can be embedded in all aspects of these elements to support photography

Rays to Pixels

Aperture
Generalized optics

Novel Camera
Processing, Sensor, Aperture, Generalized Optics

pixel, display

Photography

-Mathematics (linear algebra, calculus, probability)
-Computing
within browser, OpenCV/Python/C++, OR MATLAB/Octave
-Camera
could be useful (nothing advanced)
images will be provided

Technology-related content
What is computational photography?
Fundamental elements of computational photography

https://en.wikipedia.org/wiki/Photography
-computing
-digital sensors
-modern optics
-actuators
-smart lights
-to escape the limitations of traditional film cameras

Debatable, but…
Chemicals / Darkroom
12-24-36 pictures / roll
No instant gratification
Sensitivity of film

Material

easy to apply color, shading, and texture as well
assigning materials

What is a Material inside of Unity?
A shader and its associated settings

Textures
famous figures in computer graphics (texture mapping):
Ivan Sutherland, Edwin Catmull, Bui Tuong Phong
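Phong's name survives in the Phong reflection model, which sums ambient, diffuse, and specular terms. A minimal sketch, with material constants made up for illustration:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong(normal, light_dir, view_dir, ka=0.1, kd=0.7, ks=0.5, shininess=32):
    """Phong reflection model: ambient + diffuse + specular intensity for a
    single white light of unit intensity (constants here are illustrative)."""
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    diffuse = max(np.dot(n, l), 0.0)
    r = 2 * np.dot(n, l) * n - l          # mirror reflection of the light
    specular = max(np.dot(r, v), 0.0) ** shininess if diffuse > 0 else 0.0
    return ka + kd * diffuse + ks * specular

# Surface facing up, light and viewer both overhead: strongest response,
# ka + kd + ks = 0.1 + 0.7 + 0.5.
print(round(phong(np.array([0, 1, 0]), np.array([0, 1, 0]), np.array([0, 1, 0])), 2))  # 1.3
```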

Project pane -> Assets -> Create -> Shader -> Standard Surface Shader
written in HLSL (Cg syntax)

Shader "Custom/NewSurfaceShader" {
	Properties {
		_Color ("Color", Color) = (1,1,1,1)
		_MainTex ("Albedo (RGB)", 2D) = "white" {}
		_Glossiness ("Smoothness", Range(0,1)) = 0.5
		_Metallic ("Metallic", Range(0,1)) = 0.0
	}
	SubShader {
		Tags { "RenderType"="Opaque" }
		LOD 200
		
		CGPROGRAM
		// Physically based Standard lighting model, and enable shadows on all light types
		#pragma surface surf Standard fullforwardshadows

		// Use shader model 3.0 target, to get nicer looking lighting
		#pragma target 3.0

		sampler2D _MainTex;

		struct Input {
			float2 uv_MainTex;
		};

		half _Glossiness;
		half _Metallic;
		fixed4 _Color;

		void surf (Input IN, inout SurfaceOutputStandard o) {
			// Albedo comes from a texture tinted by color
			fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
			o.Albedo = c.rgb;
			// Metallic and smoothness come from slider variables
			o.Metallic = _Metallic;
			o.Smoothness = _Glossiness;
			o.Alpha = c.a;
		}
		ENDCG
	}
	FallBack "Diffuse"
}

Unity’s Standard Shader attempts to light objects in a “physically accurate” way. This technique is called Physically-Based Rendering, or PBR for short. Instead of defining how an object looks in one lighting environment, you specify the properties of the object (e.g. whether it is metal or plastic), and its appearance under any lighting follows from those properties.

Implement head rotation

using UnityEngine;
using System.Collections;

public class HeadRotation : MonoBehaviour {
	
	void Start(){
		// Enable the gyroscope so attitude readings are available each frame.
		Input.gyro.enabled = true;
	}

	void Update(){
		// The gyro reports attitude in a right-handed device frame; negate z/w
		// to remap into Unity's left-handed coordinates, then rotate 90° about
		// x so that holding the phone upright faces the camera forward.
		Quaternion att = Input.gyro.attitude;
		att = Quaternion.Euler(90f, 0f, 0f) * new Quaternion(att.x, att.y, -att.z, -att.w);
		transform.rotation = att;
	}
}