Natural Language Processing

Language Model
-probabilistic
-word-based
-learned

P(word1, word2, …)
L = {s1, s2, …}

Logical parse trees, hand-coded

P(w1, w2, w3, …, wn) = P(w1:n) = Πi P(wi | w1:i-1)
Markov assumption
P(wi|w1:i-1) = P(wi|wi-k:i-1)

stationarity assumption: P(wi | wi-1) = P(wj | wj-1)
smoothing
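
A minimal sketch of these pieces together: chain rule, Markov assumption, and add-one (Laplace) smoothing; the toy corpus and names are illustrative:

import math
from collections import Counter

corpus = "the cat sat on the mat the cat ate".split()   # illustrative corpus
V = len(set(corpus))                                     # vocabulary size

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def p_bigram(w_prev, w):
    # Laplace (add-one) smoothing: unseen bigrams keep nonzero probability
    return (bigrams[(w_prev, w)] + 1) / (unigrams[w_prev] + V)

def log_p(sentence):
    words = sentence.split()
    # Markov assumption: P(wi | w1:i-1) ≈ P(wi | wi-1)
    return sum(math.log(p_bigram(a, b)) for a, b in zip(words, words[1:]))

print(log_p("the cat sat"), log_p("the mat ate"))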

classification, clustering, input correction, sentiment analysis, information retrieval, question answering, machine translation, speech recognition, driving a car autonomously

P(the), P(der), P(rba)

Naive Bayes
k-Nearest Neighbor
Support vector machines
logistic regression
sort command
gzip command

# concatenate the new text with each language corpus and compress;
# the smallest compressed size names the language
(echo $(cat new EN | gzip | wc -c) EN; \
 echo $(cat new DE | gzip | wc -c) DE; \
 echo $(cat new AZ | gzip | wc -c) AZ) \
| sort -n | head -1

s* = argmax P(w1:n) = argmax Πi P(wi | w1:i-1)
s* = argmax Πi P(wi)   (unigram / independence approximation)
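
The unigram form lends itself to word segmentation: choose the split of the text that maximizes Πi P(wi). A minimal sketch; the probability table is illustrative:

from functools import lru_cache

P = {"a": 0.05, "man": 0.01, "plan": 0.005, "am": 0.004, "an": 0.006}  # illustrative

def p(word):
    return P.get(word, 1e-9)          # tiny floor for unknown words

@lru_cache(maxsize=None)
def segment(text):
    # s* = argmax over segmentations of Πi P(wi)
    if not text:
        return (1.0, [])
    best = (0.0, [])
    for i in range(1, len(text) + 1):
        head, tail = text[:i], text[i:]
        prob_tail, words = segment(tail)
        if p(head) * prob_tail > best[0]:
            best = (p(head) * prob_tail, [head] + words)
    return best

print(segment("aplanaman"))           # -> ['a', 'plan', 'a', 'man']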

Robotics

IMU, 6 computers, GPS compass, GPS, E-stop, 5 lasers, camera, radar, control screen, steering monitor

probabilistic localization

Robotics
environment -> (sensor data) -> robot agent -> (actions) -> environment

Perception
sensor data, internal state, filter

x’ = x + v Δt cos Θ
y’ = y + v Δt sin Θ
Θ’ = Θ + ω Δt
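
A sketch of this dead-reckoning update, assuming v is forward speed and ω the turn rate:

import math

def motion_update(x, y, theta, v, omega, dt):
    # x' = x + v·Δt·cos Θ,  y' = y + v·Δt·sin Θ,  Θ' = Θ + ω·Δt
    return (x + v * dt * math.cos(theta),
            y + v * dt * math.sin(theta),
            theta + omega * dt)

pose = (0.0, 0.0, 0.0)
for _ in range(10):                       # drive a gentle arc
    pose = motion_update(*pose, v=1.0, omega=0.1, dt=0.1)
print(pose)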

3D vision

3D range, depth, distance

two stereo images
(x2 - x1)/f = B/Z  ->  Z = fB/(x2 - x1)

SSD minimization
left patch  -> normalize
right patch -> normalize => (left − right)² => Σ over pixels => SSD value
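
A sketch combining both steps: pick the disparity with the smallest SSD, then convert it to depth with Z = fB/(x2 − x1); the synthetic scanline and camera numbers are illustrative:

import numpy as np

rng = np.random.default_rng(0)
row = rng.random(100)                 # synthetic scanline
left, right = row, np.roll(row, 5)    # true disparity of 5 pixels

def ssd(a, b):
    # normalize both patches, then sum the squared differences
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return np.sum((a - b) ** 2)

def best_disparity(x, w=7, max_d=10):
    patch = left[x : x + w]
    costs = [(ssd(patch, right[x + d : x + d + w]), d) for d in range(max_d + 1)]
    return min(costs)[1]              # disparity with the smallest SSD

d = best_disparity(40)
f, B = 500.0, 0.1                     # focal length (px), baseline (m): illustrative
print("disparity:", d, "depth:", f * B / d if d else float("inf"))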

correspondence
cost of match, cost of occlusion

Dynamic programming, O(n²)
V(i, j) = max( match(i, j) + V(i-1, j-1),  occl + V(i-1, j),  occl + V(i, j-1) )
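
A sketch of that O(n²) recurrence as cost minimization (the max over match scores above becomes a min over match costs; the occlusion penalty and test rows are illustrative):

import numpy as np

def align(left, right, occl=1.0):
    # V[i, j] = cheapest alignment of left[:i] with right[:j]
    n, m = len(left), len(right)
    V = np.full((n + 1, m + 1), np.inf)
    V[0, :] = occl * np.arange(m + 1)
    V[:, 0] = occl * np.arange(n + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = (left[i - 1] - right[j - 1]) ** 2
            V[i, j] = min(match + V[i - 1, j - 1],   # pixels correspond
                          occl + V[i - 1, j],        # occluded in right image
                          occl + V[i, j - 1])        # occluded in left image
    return V[n, m]

print(align([1, 2, 3, 9, 4], [1, 2, 3, 4]))   # 1.0: the 9 is best left occluded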

structure from motion
3D world, location of camera

Image Formation

Similar triangles
X/Z = x/f

vanishing points
x = X*f/Z, y = Y*f/Z
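
A tiny check of x = X*f/Z: points farther along a line parallel to the optical axis project ever closer to a single image point, the vanishing point:

f = 1.0
X, Y = 2.0, 1.0                     # a line parallel to the optical axis
for Z in [1, 2, 4, 8, 16, 100]:
    x, y = X * f / Z, Y * f / Z
    print(f"Z={Z:>4}: image point ({x:.3f}, {y:.3f})")   # -> (0, 0) as Z grows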

lens
1/f = 1/z + 1/Z

Computer vision
-classify objects
-3D recognition

Invariance
pixel values 0…255
[+1, −1] gradient mask

reasons to blur
-down-sampling
-noise reduction
(I * f) * g = I * (f * g)
Gaussian kernel
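
A sketch of both points: a normalized Gaussian kernel, and associativity letting two blur passes collapse into one combined kernel (signal and σ values illustrative):

import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()                       # normalize to preserve brightness

signal = np.zeros(21)
signal[10] = 1.0                             # impulse input

f = gaussian_kernel(1.0, 2)
g = gaussian_kernel(1.5, 4)

# (I * f) * g == I * (f * g): blur twice, or once with the combined kernel
out1 = np.convolve(np.convolve(signal, f, mode="same"), g, mode="same")
out2 = np.convolve(signal, np.convolve(f, g), mode="same")
print(np.allclose(out1, out2))               # True (away from the boundaries)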

Harris Corner Detector
Σ(Ix)² -> large
Σ(Iy)² -> large
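
A minimal sketch of the Harris response on a toy image: window sums of squared gradients form the structure tensor M, and R = det(M) − k·trace(M)² is large only where both Σ(Ix)² and Σ(Iy)² are large (window size and k illustrative):

import numpy as np

def box_sum3(A):
    # 3x3 window sum via padded slicing
    P = np.pad(A, 1)
    h, w = A.shape
    return sum(P[i:i + h, j:j + w] for i in range(3) for j in range(3))

def harris(I, k=0.04):
    Iy, Ix = np.gradient(I.astype(float))    # image gradients
    Sxx, Syy, Sxy = box_sum3(Ix * Ix), box_sum3(Iy * Iy), box_sum3(Ix * Iy)
    return Sxx * Syy - Sxy**2 - k * (Sxx + Syy)**2

img = np.zeros((10, 10))
img[3:7, 3:7] = 1.0                          # a bright square: 4 corners
R = harris(img)
print(np.unravel_index(R.argmax(), R.shape)) # peak lands near a square corner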

Modern feature detector
-localize
-unique signatures

Technology

MDPs
POMDPs, Belief Space
Reinforcement Learning
A*; h function; Monte Carlo

chess, go, robot soccer, poker, hide-and-go-seek, card solitaire, minesweeper

states s, players p, actions(s, p), result(s, a), terminal(s), utility(s, p)

deterministic, two-player, zero-sum

def maxValue(s):
    m = -∞
    for (a, s’) in successors(s):
        v = value(s’)
        m = max(m, v)
    return m
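
maxValue needs a dual minValue for the opponent's turns; a runnable sketch, with terminal/utility/successors stubbed by a toy take-1-or-2-stones game (the game itself is illustrative, not from the notes):

class Nim:
    # toy game: remove 1 or 2 stones; whoever takes the last stone wins
    def successors(self, s):
        return [(k, s - k) for k in (1, 2) if k <= s]
    def terminal(self, s):
        return s == 0
    def utility(self, s, max_to_move):
        # game over on your own turn -> the opponent took the last stone
        return -1 if max_to_move else +1

def max_value(s, g):
    if g.terminal(s):
        return g.utility(s, max_to_move=True)
    return max(min_value(s2, g) for a, s2 in g.successors(s))

def min_value(s, g):
    if g.terminal(s):
        return g.utility(s, max_to_move=False)
    return min(max_value(s2, g) for a, s2 in g.successors(s))

print(max_value(4, Nim()))   # +1: the mover can force a win from 4 stones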

complexity
O(b^m)

HMMs and Filters

Hidden Markov Models (HMMs)
– analyze time sequences
– predict time sequences

Applications
– robotics
– medical
– finance
– speech
– language technology

HMMs form a Bayes network:
s1 -> s2 -> s3 -> … -> sn   (hidden Markov chain)
z1    z2    z3         zn   (one measurement per hidden state)

Kalman filter, particle filter
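
A 1-D Kalman filter sketch of the filtering cycle (measurement update = product of two Gaussians; prediction shifts the mean and grows the variance); all numbers illustrative:

def measurement_update(mu, var, z, var_z):
    # product of two Gaussians: precision-weighted mean, smaller variance
    new_mu = (var_z * mu + var * z) / (var + var_z)
    new_var = 1.0 / (1.0 / var + 1.0 / var_z)
    return new_mu, new_var

def prediction(mu, var, u, var_u):
    # motion adds the means and adds the variances (uncertainty grows)
    return mu + u, var + var_u

mu, var = 0.0, 1000.0                       # very uncertain prior
for z, u in [(5.0, 1.0), (6.0, 1.0), (7.0, 2.0)]:
    mu, var = measurement_update(mu, var, z, var_z=4.0)
    mu, var = prediction(mu, var, u, var_u=2.0)
    print(round(mu, 2), round(var, 2))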

localization problem
laser range finder

speech recognition -> Markov model
transition “I” to “a”

Hidden Markov chain (rain example)
P(R0) = 1, P(S0) = 0
P(R|R) = 0.6, P(R|S) = 0.2
P(R1) = 0.6
P(R2) = 0.6·0.6 + 0.2·0.4 = 0.44
P(R3) = 0.6·0.44 + 0.2·0.56 = 0.376

P(A1000)?
P(A∞) = lim t→∞ P(At)
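
A quick numeric check, using the rain-chain parameters above (P(R|R) = 0.6, P(R|S) = 0.2):

p = 1.0                                # P(R0) = 1
for t in range(1, 1001):
    p = 0.6 * p + 0.2 * (1 - p)
    if t in (1, 2, 3, 1000):
        print(t, round(p, 4))          # 0.6, 0.44, 0.376, …, ≈ 0.3333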

stationary distribution
P(At) = P(At-1)
P(At) = P(At|At-1) P(At-1) + P(At|Bt-1) P(Bt-1)
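
Worked for the chain above: let x = P(A∞), with P(A|A) = 0.6 and P(A|B) = 0.2:
x = 0.6x + 0.2(1 − x)  =>  0.6x = 0.2  =>  x = 1/3
which is exactly the value the iteration converges to.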

Planning under uncertainty

Planning under uncertainty and learning
MDP, POMDPs

deterministic, stochastic

fully observable, deterministic: A*, depth-first search
fully observable, stochastic: MDP
partially observable: POMDP

Markov decision process (MDP)
states, actions, state transitions
T(s, a, s')
reward function R(s)

MDP gridworld
policy π(s) -> A

problems with plain search trees here:
- tree too deep
- branching factor large (stochastic outcomes)
- many states visited more than once
(value iteration, sketched below, sidesteps these)
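
A value-iteration sketch for a tiny 1-D gridworld, V(s) = R(s) + γ·max_a V(result(s, a)) (the sum over T(s, a, s') collapses because moves here are deterministic); layout, rewards, and γ are illustrative:

GAMMA = 0.9
R = [-0.04, -0.04, -0.04, -0.04, 1.0]   # living reward; +1 at the terminal goal
V = [0.0] * 5

def step(s, a):
    # deterministic T(s, a, s'): move left (-1) or right (+1), clipped to the grid
    return min(max(s + a, 0), 4)

for _ in range(100):
    V = [R[s] if s == 4 else
         R[s] + GAMMA * max(V[step(s, a)] for a in (-1, +1))
         for s in range(5)]

policy = [max((-1, +1), key=lambda a: V[step(s, a)]) for s in range(4)]
print([round(v, 3) for v in V], policy)  # policy π(s): always +1, toward the goal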

Planning and Execution

Stochastic
Multi-agent
Partial observability; example plan [A, S, F, B] (Arad -> Sibiu -> Fagaras -> Bucharest)
– unknown
– hierarchical

[S, R, S]   [S, while A: R, S]   (a loop in the plan handles stochastic actions)

plan [A, S, F]: result(result(A, A->S), S->F) ∈ goals
s' = result(s, a)
b' = update(predict(b, a), o)

classical planning
state space: k boolean variables -> 2^k world states
world state: complete assignment
belief state: complete assignment, partial assignment, or arbitrary formula

Action(Fly(p, x, y),
  precond: Plane(p) ^ Airport(x) ^ Airport(y) ^ At(p, x),
  effect: ¬At(p, x) ^ At(p, y))

At(D1, SFO), At(C1, SFO)
Load(C1, D1, SFO)

Regression vs Progression
Action(Buy(b), precond: ISBN(b), effect: Own(b))
goal: Own(0136042597)

situation calculus
actions are objects: Fly(p, x, y)
situations are objects
successor-state axioms: At(p, x, s)
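
A runnable sketch of progression over this kind of representation: a state is a set of ground fluents, an action applies when its preconditions hold, and result(s, a) deletes and adds fluents (the fluents and action details are illustrative, patterned on the Fly/Load schema above):

def applicable(state, precond):
    return precond <= state                      # all preconditions hold

def progress(state, precond, add, delete):
    # progression: s' = result(s, a) = (s - delete-list) | add-list
    assert applicable(state, precond)
    return (state - delete) | add

s0 = {"At(D1, SFO)", "At(C1, SFO)", "Plane(D1)", "Airport(SFO)", "Airport(JFK)"}

s1 = progress(s0,                                # Load(C1, D1, SFO)
              precond={"At(C1, SFO)", "At(D1, SFO)"},
              add={"In(C1, D1)"},
              delete={"At(C1, SFO)"})

s2 = progress(s1,                                # Fly(D1, SFO, JFK)
              precond={"Plane(D1)", "Airport(SFO)", "Airport(JFK)", "At(D1, SFO)"},
              add={"At(D1, JFK)"},
              delete={"At(D1, SFO)"})

print("At(D1, JFK)" in s2, "In(C1, D1)" in s2)   # True True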