Why study computational photography?

1. Pervasiveness of photography
2. Computational photography as it relates to other disciplines
3. Computational photography vs. photography

Traditional film/digital camera process: lens, sensor/detector
Novel camera: generalized optics (a "ray bender") in place of the lens, plus a sensor/detector

Almost everyone has a camera
e.g. cameras are smaller and ubiquitous
Significant improvements in optics:
the field of applied optics has studied
every aspect of the lens

Camera phones
the widest-selling electronics platform
Google Earth, YouTube, Flickr
text, speech, music, images, video, 3D…
a key element for art, research, products, social computing…

Panorama

3D Scene -> illumination -> Optics -> Sensor -> Processing -> Display -> User

7 pictures, each 3012 × 2304 (≈7 MP)
stitched panorama: 11262 × 2691 (≈30 MP); wider FOV: 15172 × 2446
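The arithmetic behind those pixel counts is easy to check (a quick sketch using the dimensions as transcribed; the megapixel figures are rounded):

```python
# Pixel counts for the panorama example: one source frame vs. the stitched result.
frame_w, frame_h = 3012, 2304        # one source picture, as noted above
pano_w, pano_h = 11262, 2691         # stitched panorama

frame_mp = frame_w * frame_h / 1e6   # megapixels per frame
pano_mp = pano_w * pano_h / 1e6      # megapixels in the panorama

print(f"one frame: {frame_mp:.1f} MP")   # ~7 MP
print(f"panorama: {pano_mp:.1f} MP")     # ~30 MP
```

Seven 7 MP frames total about 49 MP of raw pixels; the stitched result is smaller because the frames overlap.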

Taking pictures
These days, robotic cameras can capture many pictures automatically
and match and merge them together;
pictures with some overlap can thus be merged to create a panorama.

Detection and matching:
features are found in each picture and matched between the two.
Warping: the pictures are transformed into a common frame.

Fade, blend, or cut

Five steps to make a panorama
1. Capture images
2. Detection and matching
3. Warping
4. Blending, fading, cutting
5. Cropping
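The five steps above can be illustrated with a toy pipeline on 1-D "images" (a minimal pure-Python sketch; the sample data is made up, and real panoramas use 2-D images with feature detectors such as SIFT or ORB):

```python
# Toy panorama pipeline on 1-D "images" (lists of intensities), walking through
# capture, detection/matching, warping, blending, and cropping.

def find_offset(left, right, min_overlap=3):
    """Step 2 (matching): find the shift of `right` that best aligns it with
    the tail of `left`, by minimum sum of squared differences in the overlap."""
    best_offset, best_cost = None, float("inf")
    for offset in range(1, len(left) - min_overlap + 1):
        overlap = len(left) - offset
        cost = sum((l - r) ** 2 for l, r in zip(left[offset:], right[:overlap]))
        if cost < best_cost:
            best_offset, best_cost = offset, cost
    return best_offset

def stitch(left, right):
    offset = find_offset(left, right)      # step 2: detection and matching
    overlap = len(left) - offset           # step 3: "warping" = shifting by offset
    blended = [(l + r) / 2                 # step 4: blend the overlap region
               for l, r in zip(left[offset:], right[:overlap])]
    return left[:offset] + blended + right[overlap:]  # step 5: assemble/crop

left  = [10, 20, 30, 40, 50, 60]           # step 1: two captured "images"
right = [40, 50, 60, 70, 80]               # overlaps the last 3 samples of `left`
print(stitch(left, right))  # → [10, 20, 30, 40.0, 50.0, 60.0, 70, 80]
```

The same structure carries over to 2-D: matching yields a homography instead of a scalar offset, and warping/blending happen per pixel.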

Dual Photography

The concept of "dual photography"
Computational photography (rays to pixels)

Novel illumination
A photocell captures the light.
Projector (controllable light source), modulator (controllable aperture)

Novel camera
modulator (controllable aperture)

Reflective properties of a ray of light:
light source, light sensor.
Reflection of light depends on the kind of surface: specular (mirror) or diffuse (matte).
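The two surface types behave very differently, which the standard shading formulas make concrete (a minimal sketch with hypothetical unit vectors; Lambertian diffuse shading and the mirror-reflection direction):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def diffuse(normal, to_light):
    """Lambertian (matte) reflection: brightness depends only on the angle
    between the surface normal and the light direction, not on the viewer."""
    return max(0.0, dot(normal, to_light))

def mirror_direction(normal, to_light):
    """Specular (mirror) reflection: the ray bounces to r = 2(n.l)n - l,
    so what you see depends on where you stand."""
    k = 2 * dot(normal, to_light)
    return [k * n - l for n, l in zip(normal, to_light)]

n = [0.0, 1.0, 0.0]                        # surface normal pointing up
l = [math.sqrt(0.5), math.sqrt(0.5), 0.0]  # light arriving at 45 degrees
print(diffuse(n, l))           # ~0.707: matte brightness, same from any view
print(mirror_direction(n, l))  # the ray leaves at the opposite 45-degree angle
```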

projector pattern – camera image
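Dual photography rests on this reciprocity: light transport from projector pixels to camera pixels can be written as a matrix T, and the "dual" image, the scene as seen from the projector's viewpoint, comes from transposing T. A toy sketch with made-up numbers:

```python
# Light transport as a matrix: camera_image = T @ projector_pattern.
# By reciprocity, swapping projector and camera transposes T, so the dual
# image is T^T applied to a pattern defined on the camera's pixels.
# Toy 2x3 transport matrix (2 camera pixels, 3 projector pixels), made-up values.

T = [[0.5, 0.2, 0.0],
     [0.1, 0.3, 0.4]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

pattern = [1.0, 0.0, 0.0]            # light only projector pixel 0
camera_image = matvec(T, pattern)    # what the camera records
print(camera_image)                  # → [0.5, 0.1]: the first column of T

dual_pattern = [1.0, 0.0]            # now "light" camera pixel 0 instead
dual_image = matvec(transpose(T), dual_pattern)
print(dual_image)                    # → [0.5, 0.2, 0.0]: the first row of T
```

Measuring T one projector pixel at a time (or with clever coded patterns) is exactly what lets dual photography render the scene from the projector's point of view.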

Enables Imaging

Unbounded Dynamic Range
Variable
-Focus
-Depth of Field
-Resolution
-Lighting
-Reflectance

Supports and enhances the medium of photography

3D scene

(1) Illumination
(2) Optics
(3) Sensor
(4) Processing
(5) Display
(6) User
Computation can be embedded in all aspects of these elements to support photography

Rays to Pixels

Aperture, generalized optics

Novel camera:
generalized optics → aperture → sensor → processing

→ pixels on a display

Photography

-Mathematics (linear algebra, calculus, probability)
-Computing
within a browser, OpenCV with Python/C++, or MATLAB/Octave
-Camera
could be useful (nothing advanced);
images will be provided

Technology-related content
What is computational photography?
Fundamental elements of computational photography

https://en.wikipedia.org/wiki/Photography
-computing
-digital sensors
-modern optics
-actuators
-smart lights
-to escape the limitations of traditional film cameras

Debatable, but…
Chemicals / Darkroom
12, 24, or 36 pictures per roll
No instant gratification
Sensitivity of film

Material

Materials make it easy to add color, shading, and texture as well;
you assign materials to objects.

What is a Material inside of Unity?
A shader and its associated settings

Textures
Famous people in computer graphics:
Ivan Sutherland, Edwin Catmull, Bui Tuong Phong

Project pane -> Assets -> Create -> Shader -> Standard Surface Shader
HLSL

Shader "Custom/NewSurfaceShader" {
	Properties {
		_Color ("Color", Color) = (1,1,1,1)
		_MainTex ("Albedo (RGB)", 2D) = "white" {}
		_Glossiness ("Smoothness", Range(0,1)) = 0.5
		_Metallic ("Metallic", Range(0,1)) = 0.0
	}
	SubShader {
		Tags { "RenderType"="Opaque" }
		LOD 200
		
		CGPROGRAM
		// Physically based Standard lighting model, and enable shadows on all light types
		#pragma surface surf Standard fullforwardshadows

		// Use shader model 3.0 target, to get nicer looking lighting
		#pragma target 3.0

		sampler2D _MainTex;

		struct Input {
			float2 uv_MainTex;
		};

		half _Glossiness;
		half _Metallic;
		fixed4 _Color;

		void surf (Input IN, inout SurfaceOutputStandard o) {
			// Albedo comes from a texture tinted by color
			fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
			o.Albedo = c.rgb;
			// Metallic and smoothness come from slider variables
			o.Metallic = _Metallic;
			o.Smoothness = _Glossiness;
			o.Alpha = c.a;
		}
		ENDCG
	}
	FallBack "Diffuse"
}

Unity’s Standard Shader attempts to light objects in a “physically-accurate” way. This technique is called Physically-Based Rendering, or PBR for short. Instead of defining how an object looks in one lighting environment, you specify the properties of the object itself (e.g. how metallic or how smooth it is).

Implement head rotation

using UnityEngine;
using System.Collections;

public class HeadRotation : MonoBehaviour {
	
	void Start(){
		Input.gyro.enabled = true;
	}

	void Update(){
		Quaternion att = Input.gyro.attitude;
		att = Quaternion.Euler(90f, 0f, 0f) * new Quaternion(att.x, att.y, -att.z, -att.w);
		transform.rotation = att;
	}
}

Complex Models

To create complex models:
– use a program like Blender or Maya
– download from the Internet or the Unity Asset Store

Position changes

You see rotation and scale changing as well.

Every Unity GameObject has a Transform.
The Transform's scale is a multiplier of the object's original size.

Transforms form a child-parent hierarchy.

Why do we mostly use triangles to represent 3D surfaces?

speed, simplicity, convention
We use triangles because they provide a fast way for a computer to represent surfaces, they’re pretty simple structures, and we’ve been using them for quite a while, so computer hardware is optimized for them.

Make a cube:
go to GameObject and choose Cube;
use the Move and Scale tools to change its position and dimensions.

Now go to GameObject and select Sphere;
now how about a Cylinder?

GameObject -> 3D Object -> Cube

How information flows

Context (notebook)
-> notification
FetchedResultsController
-> delegate
CoreDataTableViewController
-> update
TableView

Saving images (BLOBs) and migrating the data model

Importing large sets of objects into the database without blocking the user interface.
Downloading new objects from a REST service and inserting them into the database.
Saving datasets in the background, since saving can take a user-perceivable amount of time.