Reconstructing a Signal

Target Signal f^T(t)
f^T(t) = Σ_{n=1}^{N} A cos(nωt)
e.g., for N = 4: f^T = f1 + f2 + f3 + f4

Periodic function => a weighted sum of sines and cosines of different frequencies
Transform f(t) into F(ω)
F(ω) is the frequency spectrum of the function f
a reversible operation
For every ω from −∞ to ∞, F(ω) holds the amplitude A and the phase φ of the corresponding sine function

Frequency Domain of a signal
g(t) = sin(2πωt) + (1/3) sin(2π(3ω)t)

Convolution Theorem and the Fourier Transform
The Fourier transform of a convolution of two functions equals the product of their Fourier transforms:
F[g * h] = F[g] F[h]
F^{-1}[g h] = F^{-1}[g] * F^{-1}[h]
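A quick numerical check of the theorem, as a sketch in the same MATLAB style used later in these notes (the signals g and h are arbitrary examples):

% Verify F[g * h] = F[g] F[h] on example signals
g = [1 2 3 4];
h = [1 -1 2 0];
lhs = fft(conv(g, h));         % transform of the convolution (length 7)
rhs = fft(g, 7) .* fft(h, 7);  % product of zero-padded transforms
max(abs(lhs - rhs))            % ~0 up to floating-point error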

Using the Frequency spectra
low-pass, high-pass, band-pass filtering

Fourier Transform

1. Using sines and cosines to reconstruct a signal
2. The Fourier transform
3. Frequency domain for a signal
4. Three properties of convolution relating to the Fourier transform

Reconstructing a signal
Repeating impulse function
Target Signal f^T(t)

Basic building block
f(t) = A cos(nωt)
A: amplitude; ω: frequency

% Cosines of different frequencies
n = 4; % no. of periods (say, seconds)
t = linspace(0, n, n * 90); % time samples
A = 2; % amplitude

f1 = A * cos(1 * (2 * pi) * t);
plot(t, f1);

f2 = A * cos(2 * (2 * pi) * t);
plot(t, f2);

f3 = A * cos(3 * (2 * pi) * t);
plot(t, f3);

f4 = A * cos(4 * (2 * pi) * t);
plot(t, f4);
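Summing the four components then reconstructs the target signal f^T(t) (continuing the sketch above):

% Reconstruct the target signal as the sum of the components
fT = f1 + f2 + f3 + f4;
plot(t, fT);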

Sensor

1. photographic processes for digital and film capture
2. Eight layers of color film
3. Five layers of a CCD
4. Differences between a CCD and CMOS Sensor
5. Two benefits of using the Camera Raw Format

Film vs. Digital
There have been significant improvements in actuators and lenses
The key difference is how light is trapped and preserved

Film: a reaction between light and chemicals
protective layer
UV filter
blue light layer
yellow light layer
green light layer
red light layer
adhesive layer
film base

The film base is a sheet of plastic
(polyester, PET, cellulose acetate)

Light -> protective coating, emulsion, adhesive, base, adhesive

Digital: Converting light to data
CCD: charge-coupled device, a device for converting electrical charge into a digital value
Pixels are represented by capacitors, which convert and store incoming photons as electron charges
Willard Boyle and George E. Smith, 1969

Digital: converting light to data
Micro lens: captures the light and directs it toward the light-sensitive areas
Hot mirror: lets visible light pass, but reflects light in the invisible part of the spectrum
Color filter: photodiodes are color-blind; a color filter matrix separates the light into red, green, and blue, usually referred to as a Bayer array
Photodiodes: where light energy is converted to electrons, creating a negative charge
Well (depletion layer): where the electrons are collected

Bayer filter on a sensor
one blue, two green, one red per repeating 2×2 block (green is doubled to match the eye's greater sensitivity to green)
incoming light -> filter layer -> sensor array
RGB color plane

A 2×2 repeating block within a 4×4 subset:
R  G0
G1 B

Actual sensor Information with Bayer Filter
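As a minimal sketch, the four Bayer sample planes can be pulled out of the raw mosaic by sampling every other pixel; the RGGB layout and the stand-in variable raw are assumptions for illustration, since layouts vary by camera:

% Split a Bayer mosaic into its sample planes (assumes an RGGB layout)
raw = magic(8);                % stand-in for raw sensor data
R  = raw(1:2:end, 1:2:end);    % red samples
G0 = raw(1:2:end, 2:2:end);    % green samples on red rows
G1 = raw(2:2:end, 1:2:end);    % green samples on blue rows
B  = raw(2:2:end, 2:2:end);    % blue samples

Demosaicing then interpolates the two missing color values at every pixel.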

CCD vs CMOS Sensors
CMOS: Complementary metal oxide semiconductor

Camera Raw File Format
- Contains minimally processed data from the sensor
- Image is encoded in a device-dependent color space
- Captures the radiometric characteristics of the scene

Exposure

Exposure Triangle
- Aperture
- Shutter speed
- ISO

Focal Length vs. Viewpoint
Example: f = 18 mm vs. f = 180 mm, both with a 35 mm sensor;
the subject is 0.5 m away in the first case and 30 m away in the second

Changing focal length allows us to move back and still capture the scene
Changing viewpoint causes perspective changes

Exposure = irradiance * time
H = E x T

Irradiance(E)
– amount of light falling on a unit area of sensor per second
– Controlled by lens aperture
Exposure Time(T)
– How long the shutter is kept open
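A small worked check of H = E × T (the numbers are arbitrary): halving the irradiance while doubling the exposure time leaves H unchanged.

% Equivalent exposures: trade irradiance against time
E = 1.0;  T = 1/125;        % arbitrary units
H1 = E * T;
H2 = (E / 2) * (T * 2);     % equals H1: half the light for twice as long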

Shutter speed: the amount of time the sensor is exposed to light
usually denoted in fractions of a second (1/2000, 1/1000, 1/250, 1/60, 1/15) or whole seconds (15, 30, Bulb)

Irradiance on sensor -> the amount of light captured is proportional to the area of the aperture (opening)
Area = π(f/2N)^2

Given focal length f and f-number N, the aperture diameter is f/N
The f-number gives the same irradiance irrespective of the lens in use
f/2.0 on a 50mm lens -> aperture = 25mm
f/2.0 on a 200mm lens -> aperture = 100mm

f-stops: f/2.8, f/4, f/5.6, f/8, f/11, f/16
(more light on the left, less light on the right)
Area = π(f/2N)^2
Doubling N halves the aperture diameter, and therefore reduces light by 4X
e.g., going from f/2.8 to f/5.6 cuts light by 4X
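A quick check of the area formula in MATLAB (the 50 mm focal length is an arbitrary example; the ratio is independent of f):

% Relative light gathered at two f-stops
f = 50;                           % focal length in mm (example)
A = @(N) pi * (f ./ (2 * N)).^2;  % aperture area for f-number N
A(2.8) / A(5.6)                   % = 4: f/2.8 admits 4X the light of f/5.6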

ISO (sensitivity)
e.g., ISO 100 vs. ISO 1600
The third variable in getting the right exposure
Trade-off: film sensitivity vs. grain (of film)

[Example photo: shutter speed 1/10 s]

Pinhole size and image quality

pinhole size = aperture!
pinhole “blur” simulated

Light diffracts
-wave nature of light
-smaller aperture means more diffraction

Larger pinhole = Geometric Blur

For d (pinhole diameter), f (distance from pinhole to sensor), and λ (wavelength of light):
d = 2√(½ f λ)
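Plugging in example numbers (f = 100 mm and green light with λ ≈ 550 nm are illustrative assumptions):

% Optimal pinhole diameter: d = 2 * sqrt(0.5 * f * lambda)
f = 0.1;                          % pinhole-to-sensor distance in meters
lambda = 550e-9;                  % wavelength of green light in meters
d = 2 * sqrt(0.5 * f * lambda)    % ~3.3e-4 m, i.e. about 0.33 mm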

Geometrical optics
– parallel rays converge to a point located at focal length f from the lens
– rays going through the center of the lens do not deviate (functioning like a pinhole)

Ray tracing with lenses
1/o + 1/i = 1/f  (o: object distance, i: image distance, f: focal length)
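A worked example with the thin lens equation (the 50 mm lens and the 1 m object distance are illustrative values):

% Solve 1/o + 1/i = 1/f for the image distance i
f = 0.050;               % focal length: 50 mm lens
o = 1.0;                 % object distance: 1 m
i = 1 / (1/f - 1/o)      % ~0.0526 m: image forms ~52.6 mm behind the lens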

Traditionally, 85-135mm focal lengths are used for portraits

Camera

Objectives
1. Rays to pixels
2. A camera without optics
3. Lens in the camera system
4. The lens equation

Context of computational photography
3D scene, illumination, optics, sensor, processing, display, user

Rays -> pixels
optics, sensor, image processing, computer graphics, computer vision

Scene via a 2D array of pixels
rays are fundamental primitives
illumination follows a path from the source to the scene
computation can control the parameters of the optics, sensor and illumination

Viewfinder
Focus or Zoom Ring
Shutter Release
Frontal Glass Lens
Diaphragm
Photographic film
Focal Plane Shutters

Pinhole photograph
Theoretically:
no distortion: straight lines remain straight
infinite depth of field: everything in focus (but there is optical blurring)

Gradient to Edges

1. Smoothing to suppress noise
2. Compute gradient
3. Apply edge "enhancement"
4. Edge localization (edges vs. noise)
5. Threshold, thin

Canny Edge Detector
1. Filter image with derivative of Gaussian
2. Find magnitude and orientation of gradient
(dx, dy -> magnitude of gradient and angle θ)
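In MATLAB, the Image Processing Toolbox's edge function runs the full Canny pipeline (including the non-maximum suppression and hysteresis thresholding that follow the two steps above); a minimal sketch using a built-in test image:

% Canny edge detection on a test image
img  = imread('peppers.png');
gray = rgb2gray(img);
edges = edge(gray, 'canny');   % gradient magnitude/orientation + thresholds
imshow(edges);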

Edges

1. Compute edges
2. Derivatives using kernels and neighborhood operations
3. Three methods for computing edges using kernels
4. How image noise can complicate the computation of gradients
5. The Canny edge detector

Derivatives as a local product
∂F(x,y)/∂x ≈ F(x+1,y) − F(x,y)
= [−1 1] · [F(x,y), F(x+1,y)]^T

Desired: an "operator" that effectively computes discrete derivative values with cross-correlation (i.e., using finite differences)

Hx, Hy

Average of “left” and “right” derivatives
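A sketch of such operators in MATLAB, using imfilter from the Image Processing Toolbox; the kernel [-1 0 1]/2 (one common averaged left/right difference) and the test image are assumptions for illustration:

% Discrete derivatives via cross-correlation with finite differences
F  = im2double(rgb2gray(imread('peppers.png')));
Hx = [-1 0 1] / 2;                          % average of left and right differences
Hy = Hx';                                   % same operator for y
Fx = imfilter(F, Hx, 'corr', 'replicate');  % dF/dx
Fy = imfilter(F, Hy, 'corr', 'replicate');  % dF/dy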

Impact of Noise on Gradients
It gets harder to detect an edge when there is significant noise in the signal.
[Plots: original signal with no noise, with 0.1 × noise, and with 0.25 × noise; the edge becomes progressively harder to localize as noise increases]

Convolution and Gradients
Recall, convolution is G = h * F
Derivative of a convolution: ∂G/∂x = ∂/∂x (h * F)
If D is a kernel to compute derivatives and H is the kernel for smoothing, then D * (H * F) = (D * H) * F: smoothing and differentiation can be combined into a single kernel
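A sketch demonstrating that identity numerically (fspecial builds the Gaussian; the kernel size, σ, and test image are arbitrary choices):

% Smoothing then differentiating equals one derivative-of-Gaussian pass
F = im2double(rgb2gray(imread('peppers.png')));
H = fspecial('gaussian', 13, 2);     % smoothing kernel
D = [-1 0 1] / 2;                    % derivative kernel
two_pass = conv2(conv2(F, H), D);    % D * (H * F)
one_pass = conv2(F, conv2(H, D));    % (D * H) * F
max(abs(two_pass(:) - one_pass(:)))  % ~0 up to floating-point error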

Match between images

Features
– parts of an image that encode it in a compact form

Edges
Edges in an image arise from discontinuities in
surface normal, depth, surface color, and illumination
Information theory view: edges encode change, therefore edges efficiently encode an image

Edges appear as ridges in the 3D height map of an image

Edge Detection
Basic Idea
Look for a neighborhood with strong signs of change
Derivatives of F(x,y) to get Edges

Need an operation that, when applied to an image, returns its derivatives
Model these "operators" as masks/kernels
Then "threshold" this gradient function to select edge pixels

Gradient of an image = measure of change in the image function (F) in x and y
∇F = [∂F/∂x, ∂F/∂y]
Gradient magnitude: ‖∇F‖ = √((∂F/∂x)² + (∂F/∂y)²)
Gradient direction: Θ = tan⁻¹((∂F/∂y) / (∂F/∂x))
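Continuing the finite-difference sketch from earlier (the same assumed kernel and test image), magnitude and direction follow directly from the two partial derivatives:

% Gradient magnitude and direction from the partial derivatives
F  = im2double(rgb2gray(imread('peppers.png')));
Fx = imfilter(F, [-1 0 1] / 2, 'corr', 'replicate');
Fy = imfilter(F, ([-1 0 1] / 2)', 'corr', 'replicate');
mag   = sqrt(Fx.^2 + Fy.^2);   % strength of the change at each pixel
theta = atan2(Fy, Fx);         % direction of steepest change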

Convolution and cross-correlation

1. Cross-correlation
2. Convolution
3. Difference between cross-correlation and convolution
4. Properties of these methods

A mathematical representation for smoothing
Represent the image as a function F[i,j]; e.g., F[3,3] is the pixel at row 3, column 3

In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them.
Also known as a sliding dot product or sliding inner-product

Cross-Correlation method
G[i,j] = Σ_{u=−k}^{k} Σ_{v=−k}^{k} h[u,v] F[i+u, j+v]
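A direct loop implementation of this formula as a sketch (border pixels are skipped for simplicity; the box filter h and the test image are example choices):

% Cross-correlation: G[i,j] = sum over u,v of h[u,v] * F[i+u, j+v]
F = im2double(rgb2gray(imread('peppers.png')));
h = ones(3) / 9;               % 3x3 box filter, so k = 1
k = 1;
G = zeros(size(F));
for i = 1+k : size(F,1)-k
    for j = 1+k : size(F,2)-k
        patch = F(i-k:i+k, j-k:j+k);        % neighborhood around (i,j)
        G(i,j) = sum(h(:) .* patch(:));     % weighted sum
    end
end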

Box Filter
Gaussian Filter
size: e.g., 21×21
values: from a Gaussian (normal) distribution
σ = 1, 3, 6, 9 (larger σ gives more smoothing)

Convolution: image F[i,j], kernel h[i,j], output G[i,j]
G[i,j] = Σ_{u=−k}^{k} Σ_{v=−k}^{k} h[u,v] F[i−u, j−v]
Denoted by G = h * F
Flip the filter in both dimensions
(bottom to top, right to left)
Then apply cross-correlation
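In MATLAB the flip-then-correlate recipe can be checked directly; an asymmetric example kernel is used, since for symmetric kernels the two operations coincide:

% Convolution = cross-correlation with a doubly flipped kernel
F = im2double(rgb2gray(imread('peppers.png')));
h = [1 2; 3 4];                            % asymmetric example kernel
G_conv = imfilter(F, h, 'conv');           % convolution
G_corr = imfilter(F, rot90(h, 2), 'corr'); % flip both dims, then correlate
max(abs(G_conv(:) - G_corr(:)))            % 0: identical results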

Linear and shift-invariant
behaves the same everywhere
the value of the output depends on the pattern in the image neighborhood, not the position of the neighborhood

identity: unit impulse
E = […0,0,1,0,0…]
F * E = F
true of cross-correlation
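A one-line check of the identity property (the signal F is an arbitrary example):

% Convolving with the unit impulse returns the signal unchanged
F = [3 1 4 1 5];
E = [0 0 1 0 0];
conv(F, E, 'same')   % returns F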

Separable
If the filter is separable, convolve all rows with the 1-D filter, then convolve all columns with it; two 1-D passes replace one 2-D pass (see the sketch below)
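A sketch with the 3×3 box filter, which separates into a column and a row of ones (the test image is a placeholder):

% Separable filtering: two 1-D passes match one 2-D pass
F = im2double(rgb2gray(imread('peppers.png')));
col = ones(3, 1) / 3;
row = ones(1, 3) / 3;
G_2d  = conv2(F, col * row);        % one pass with the full 3x3 box filter
G_sep = conv2(conv2(F, col), row);  % rows pass, then columns pass
max(abs(G_2d(:) - G_sep(:)))        % ~0 up to floating-point error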

Example: doubling the unit impulse and subtracting the box filter gives a sharpening filter

2 ×  0, 0, 0    −    1/9, 1/9, 1/9    =  sharpening filter
     0, 1, 0         1/9, 1/9, 1/9
     0, 0, 0         1/9, 1/9, 1/9