P(fed raises interest rates)
s -> np vp
np -> n|dn|nn|nnn
vp -> v np|v|v npnp
Adapt to circumstances. ABCD: Always Be Coding and … : good
code (word segmentation into the most probable word sequence; Pw is a unigram word-probability model assumed defined elsewhere):

from functools import lru_cache
memo = lru_cache(maxsize=None)

def splits(characters, longest=12):
    "All ways to split characters into a first word and remainder."
    return [(characters[:i], characters[i:])
            for i in range(1, 1 + min(longest, len(characters)))]

def product(nums):
    "Product of a sequence of numbers."
    result = 1
    for x in nums:
        result *= x
    return result

def Pwords(words):
    "Probability of a sequence of words (product of unigram probabilities Pw)."
    return product(Pw(w) for w in words)

@memo
def segment(text):
    "Best segmentation of text into words, by probability."
    if text == "":
        return []
    candidates = [[first] + segment(rest) for first, rest in splits(text)]
    return max(candidates, key=Pwords)
spelling correction
c* = argmax_c P(c|w)
   = argmax_c P(w|c) P(c)
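The argmax above can be sketched as a tiny noisy-channel corrector. A minimal sketch, assuming a toy COUNTS table as the language model P(c) and a crude error model (only candidates within one edit are considered); every name and number here is illustrative:

```python
# Noisy-channel spelling: c* = argmax_c P(w|c) P(c).
# COUNTS is a toy stand-in for real corpus counts.
COUNTS = {"rates": 100, "raises": 80, "rises": 40, "ratez": 0}
TOTAL = sum(COUNTS.values())

def P(word):
    "Unigram language model P(c) from the toy counts."
    return COUNTS.get(word, 0) / TOTAL

def edits1(word):
    "All strings one edit away: deletes, transposes, replaces, inserts."
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    "Most probable known word within one edit (a crude P(w|c))."
    candidates = ({word} if P(word) > 0 else set()) \
                 or {w for w in edits1(word) if P(w) > 0} \
                 or {word}
    return max(candidates, key=P)
```

For example, "ratez" has exactly one known word within one edit, so the model returns "rates".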
Language Model
-probabilistic
-word-based
-Learned
P(word1, word2…)
L = {s1, s2, …}
vs. logical, tree-based, hand-coded
P(w1, w2, w3 … wn) = P(w1:n) = Πi P(wi | w1:i-1)
Markov assumption
P(wi | w1:i-1) = P(wi | wi-k:i-1)
stationarity assumption: P(wi | wi-1) = P(wj | wj-1)
smoothing
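The Markov assumption plus smoothing can be sketched with a toy bigram model; the corpus, the choice of add-one (Laplace) smoothing, and the function names are all illustrative assumptions:

```python
from collections import Counter

# Toy bigram model under the Markov assumption, with add-one smoothing.
corpus = "the fed raises interest rates the fed raises rates".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
V = len(unigrams)  # vocabulary size

def P_bigram(w, prev):
    "P(w | prev) with Laplace smoothing: (count + 1) / (count(prev) + V)."
    return (bigrams[(prev, w)] + 1) / (unigrams[prev] + V)

def P_sentence(words):
    "P(w1:n) ~= product of P(wi | wi-1), ignoring the sentence-start term."
    p = 1.0
    for prev, w in zip(words, words[1:]):
        p *= P_bigram(w, prev)
    return p
```

Smoothing means unseen bigrams like (rates, raises) still get nonzero probability, just a much smaller one than the observed (fed, raises).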
classification, clustering, input correction, sentiment analysis, information retrieval, question answering, machine translation, speech recognition, driving a car autonomously
P(the), P(der), P(rba)
Naive Bayes
k-Nearest Neighbor
Support vector machines
logistic regression
sort command
gzip command
(echo `cat new EN|gzip|wc -c` EN; \
 echo `cat new DE|gzip|wc -c` DE; \
 echo `cat new AZ|gzip|wc -c` AZ) \
|sort -n|head -1
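The same compression trick can be sketched in Python, with tiny in-memory strings standing in for the EN/DE corpus files (the AZ corpus is omitted here; all corpora below are toy stand-ins):

```python
import gzip

# Compression-based language ID: the corpus whose statistics best "fit" the
# new text compresses (corpus + text) with the fewest extra bytes.
corpora = {
    "EN": b"the quick brown fox jumps over the lazy dog " * 20,
    "DE": b"der schnelle braune fuchs springt ueber den faulen hund " * 20,
}

def gzip_size(data):
    return len(gzip.compress(data))

def identify(new_text):
    "Pick the language whose corpus compresses the new text most cheaply."
    return min(corpora,
               key=lambda lang: gzip_size(corpora[lang] + new_text)
                                - gzip_size(corpora[lang]))
```

This works because gzip's dictionary, built on the corpus, already contains the substrings of same-language text.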
s* = argmax P(w1:n) = argmax Πi P(wi | w1:i-1)
s* = argmax Πi P(wi)   (unigram approximation)
IMU, 6 computers, GPS compass, GPS, E-stop, 5 lasers, camera, radar, control screen, steering monitor
probabilistic localization
Robotics
robot = agent: environment -> sensor data -> agent -> actions -> environment
Perception
sensor data, internal state, filter
x' = x + vΔt cosΘ
y' = y + vΔt sinΘ
Θ' = Θ + ωΔt
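The motion model above, as a small runnable sketch (the function name is illustrative):

```python
from math import cos, sin

def motion_update(x, y, theta, v, omega, dt):
    """One step of the velocity motion model from the notes:
    x' = x + v*dt*cos(theta), y' = y + v*dt*sin(theta), theta' = theta + omega*dt."""
    return (x + v * dt * cos(theta),
            y + v * dt * sin(theta),
            theta + omega * dt)
```

Driving straight ahead (heading 0, no turn rate) at 1 m/s for 1 s moves the robot one meter along x.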
3D range, depth, distance
two stereo images
(x2 - x1)/f = B/Z  ->  Z = fB/(x2 - x1)
SSD minimization
left image -> normalize
right image -> normalize => (difference)² => Σ over pixels -> SSD value
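SSD window matching along a scanline can be sketched as follows (window size, disparity range, and the toy rows are assumptions):

```python
def ssd(a, b):
    "Sum of squared differences between two equal-length pixel windows."
    return sum((x - y) ** 2 for x, y in zip(a, b))

def best_disparity(left_row, right_row, x, window=3, max_d=5):
    "Slide the left patch along the right scanline; the min-SSD shift wins."
    patch = left_row[x:x + window]
    return min(range(min(max_d, x) + 1),
               key=lambda d: ssd(patch, right_row[x - d:x - d + window]))
```

A bright feature that appears two pixels further left in the right image is recovered as disparity 2, which the triangulation formula then turns into depth.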
correspondence
cost of match, cost of occlusion
Dynamic programming, O(n²)
V(i, j) = max( match + V(i-1, j-1), occl + V(i-1, j), occl + V(i, j-1) )
structure from motion
3D world, location of camera
Equal triangles
X/Z = x/f
vanishing points
x = X·f/Z, y = Y·f/Z
lens
1/f = 1/Z + 1/z   (thin lens equation)
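The projection equations, as a worked sketch: a point 1 m off-axis at depth Z = 10 m, seen through a lens with f = 5 cm, lands 5 mm off-center on the image plane (the numbers are illustrative):

```python
def project(X, Y, Z, f):
    "Pinhole projection: x = X*f/Z, y = Y*f/Z (consistent length units)."
    return (X * f / Z, Y * f / Z)

# 1.0 m off-axis, 10 m away, f = 0.05 m -> 0.005 m = 5 mm on the image plane.
x, y = project(1.0, 0.0, 10.0, 0.05)
```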
Computer vision
-classify object
-3D recognition
Invariance
0…255
[-1, +1] mask (gradient)
reasons to blur
-downsampling
-noise reduction
(I * f) * g = I * (f * g)   (convolution is associative)
Gaussian kernel
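The associativity of convolution is why blurring with f and then g equals blurring once with the combined kernel f * g. A sketch with full 1-D convolution, using [1, 2, 1] as a crude (unnormalized) Gaussian:

```python
def conv(a, b):
    "Full 1-D convolution (like polynomial multiplication); len = len(a)+len(b)-1."
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

f = [1, 2, 1]
# Combining two small blurs gives one wider blur kernel:
combined = conv(f, f)   # [1, 4, 6, 4, 1], the next binomial row
```

Blurring a signal twice with f gives exactly the same result as blurring once with `combined`, so repeated small blurs can be fused for efficiency.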
Harris Corner Detector
Σ(Ix)2 -> large
Σ(Iy)2 -> large
Modern feature detector
-localize
-unique signatures
MDPs
POMDPs, Belief Space
Reinforcement Learning
A*; h function; Monte Carlo
chess, go, robot soccer, poker, hide-and-go-seek, card solitaire, minesweeper
s, p, Actions(s), Result(s, a), Terminal(s), U(s, p)
deterministic, two-player, zero-sum
def maxValue(s):
    m = -∞
    for (a, s') in successors(s):
        v = value(s')
        m = max(m, v)
    return m
complexity
O(b^m)
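The minimax recursion above can be made runnable on a toy game tree; representing states as nested lists whose leaves are utilities for the maximizing player is an assumption for illustration:

```python
# Minimax for a deterministic, two-player, zero-sum game.
# A state is either a number (terminal utility for MAX) or a list of successors.
def value(state, maximizing=True):
    "Return the minimax value of state, alternating max and min levels."
    if isinstance(state, (int, float)):   # Terminal(s) -> Utility(s)
        return state
    children = [value(s, not maximizing) for s in state]
    return max(children) if maximizing else min(children)
```

On the tree [[3, 12], [2, 8]], MIN reduces the two subtrees to 3 and 2, and MAX picks 3 at the root.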
Hidden Markov Models (HMMs)
– analyze
– predict time sequences
Applications
– robotics
– medical
– finance
– speech
– language technology
HMMs are Bayes networks with a chain structure:
S1 -> S2 -> S3 -> … -> Sn   (hidden states: a Markov chain)
Z1    Z2    Z3    …    Zn   (one measurement per hidden state)
kalman filter, particle filter
localization problem
laser range finder
speech recognition -> Markov model
transition "I" to "a"
hidden Markov chain
P(R0) = 1
P(S0) = 0
P(R1) = 0.6
P(R2) = 0.44
P(R3) = 0.376
P(A1000)?
P(A∞) = lim t→∞ P(At)
stationary distribution
P(At) = P(At-1)
      = P(At|At-1) P(At-1) + P(At|Bt-1) P(Bt-1)
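The sequence 0.6, 0.44, 0.376 above follows from iterating the update with transition probabilities P(A|A) = 0.6 and P(A|B) = 0.2 (the values these numbers imply); a sketch:

```python
# Rain chain from the notes: P(A|A) = 0.6, P(A|B) = 0.2, P(A0) = 1.
def step(p):
    "P(At) = P(At|At-1) P(At-1) + P(At|Bt-1) P(Bt-1)."
    return 0.6 * p + 0.2 * (1 - p)

p = 1.0
history = []
for _ in range(3):
    p = step(p)
    history.append(round(p, 3))
# history -> [0.6, 0.44, 0.376]
```

Setting p = step(p) and solving gives the stationary distribution: p = 0.6p + 0.2(1 − p), so p = 1/3, which the iterates approach.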
supervised (x1, y1)(x2, y2) … y = f(x)
unsupervised x1, x2, … P(X = x)
Reinforcement s,a,s,a..
examples:
– speech recognition (supervised)
– star data (unsupervised)
– lever pressing (reinforcement)
MDP Review – Markov Decision Processes
s ∈ S
a ∈ Actions(s)
R(s) -> +100, -100, -3
E[Σt=0..∞ γ^t Rt] -> max
value iteration
V(a3, E) = 0.8×100 -3 = 77
V(s) <- max_a [γ Σs' P(s'|s,a) V(s')] + R(s)
back-up theorem: value iteration converges
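Value iteration on a toy problem echoing the 0.8×100 − 3 = 77 backup above; the 1-D grid, reward R = −3, the +100 terminal, and the 0.8/0.2 transition model are all illustrative assumptions:

```python
# States 0..3 on a line; state 3 is a +100 terminal; each step costs R = -3.
# Actions move left (-1) or right (+1); they succeed with prob 0.8, else stay.
GAMMA = 1.0
R = -3.0

def transitions(s, a):
    "[(prob, s')] for action a: 0.8 move (clamped to the grid), 0.2 stay."
    s2 = min(max(s + a, 0), 3)
    return [(0.8, s2), (0.2, s)]

def backup(V):
    "One sweep of V(s) <- max_a gamma * sum_s' P(s'|s,a) V(s') + R(s)."
    for s in range(3):          # state 3 is terminal; its value stays fixed
        V[s] = max(sum(p * GAMMA * V[sp] for p, sp in transitions(s, a))
                   for a in (-1, +1)) + R
    return V

V = [0.0, 0.0, 0.0, 100.0]
backup(V)   # after one sweep, V[2] = 0.8*100 + 0.2*0 - 3 = 77
```

Repeating the backup converges: the state next to the terminal settles at V = 77 + 0.2·V, i.e. 96.25, illustrating the convergence theorem.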