Limit Distribution

p(x4) = 0.8·p(x2) + 0.1·p(x1) + 0.1·p(x3)
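As a quick check, if the belief is uniform over the cells (e.g. p(xi) = 0.2 for every cell in a 5-cell cyclic world like the one used below), the equation is satisfied: 0.8·0.2 + 0.1·0.2 + 0.1·0.2 = 0.2. The uniform distribution is a fixed point of the motion update, i.e. the limit distribution.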

Sensing gains information; moving loses information.
Entropy: H(p) = -Σi p(xi) log p(xi)
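A minimal sketch of this formula in code (base-2 logarithm assumed, so the result is in bits): a spread-out belief has high entropy, a peaked belief (e.g. after sensing) has lower entropy.

[python]
from math import log

def entropy(p):
    # H = -sum over i of p(xi) * log2 p(xi), skipping zero-probability cells
    return -sum(pi * log(pi, 2) for pi in p if pi > 0)

print(entropy([0.2, 0.2, 0.2, 0.2, 0.2]))  # uniform belief: ~2.32 bits (maximal)
print(entropy([0.8, 0.1, 0.1, 0.0, 0.0]))  # peaked belief: ~0.92 bits (lower)
[/python]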

Belief = probability
Sense = product followed by normalization
Move = convolution (= addition)
formal definition
0 <= p(x) <= 1
p(x1) = 0.2, p(x2) = 0.8

Bayes rule (x = grid cell, z = measurement):
p(x|z) = p(z|x) p(x) / p(z)

Motion (total probability):
p(xi^t) = Σj p(xj^(t-1)) · p(xi|xj)

p(x) = 0.2, p(¬x) = 0.8
p(x) = 0.2, p(y) = 0.2, p(x,y) = 0.04
p(x) = 0.2, p(y|x) = 0.6, p(y|¬x) = 0.6, p(y) = 0.6

[python]
colors = [['green', 'green', 'green'],
          ['green', 'red', 'green'],
          ['green', 'green', 'green']]
measurements = ['red', 'red']
motions = [[0, 0], [0, 1]]
sensor_right = 1.0
p_move = 1.0
[/python]
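With this grid and a perfect sensor (sensor_right = 1.0), measuring 'red' makes p(z|x) = 1 for the single red cell and 0 for every green cell, so after normalization Bayes rule puts all of the belief on the center cell.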

def localize(colors, measurements, motions, sensor_right, p_move):
	# start with a uniform prior over all grid cells
	pinit = 1.0 / float(len(colors)) / float(len(colors[0]))
	p = [[pinit for row in range(len(colors[0]))] for col in range(len(colors))]

	return p

sensor_wrong = 1.0 - sensor_right
p_stay = 1.0 - p_move

def show(p):
	rows = ['[' + ','.join(map(lambda x: '{0:.5f}'.format(x), r)) + ']' for r in p]
	print('[' + ',\n '.join(rows) + ']')

colors = [['R','G','G','R','R'],
          ['R','R','G','R','R'],
          ['R','R','G','G','R'],
          ['R','R','R','R','R']]
measurements = ['G','G','G','G','G']
motions = [[0,0],[0,1],[1,0],[1,0],[0,1]]
p = localize(colors,measurements,motions,sensor_right = 0.7, p_move = 0.8)
show(p)
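The localize above only builds the uniform prior. A minimal sketch of a completed version (a possible fill-in, not the notes' own solution), assuming motions are [row offset, column offset] on a cyclic grid and that each motion is applied before its measurement:

[python]
def localize(colors, measurements, motions, sensor_right, p_move):
    # uniform prior over all cells
    pinit = 1.0 / float(len(colors)) / float(len(colors[0]))
    p = [[pinit for _ in range(len(colors[0]))] for _ in range(len(colors))]
    sensor_wrong = 1.0 - sensor_right
    p_stay = 1.0 - p_move

    def sense(p, measurement):
        # multiply by the measurement likelihood, then normalize
        aux = [[0.0 for _ in range(len(p[0]))] for _ in range(len(p))]
        s = 0.0
        for i in range(len(p)):
            for j in range(len(p[0])):
                hit = (measurement == colors[i][j])
                aux[i][j] = p[i][j] * (hit * sensor_right + (1 - hit) * sensor_wrong)
                s += aux[i][j]
        for i in range(len(aux)):
            for j in range(len(aux[0])):
                aux[i][j] /= s
        return aux

    def move(p, motion):
        # mixture: with probability p_move the robot moves by 'motion', else it stays
        aux = [[0.0 for _ in range(len(p[0]))] for _ in range(len(p))]
        for i in range(len(p)):
            for j in range(len(p[0])):
                aux[i][j] = (p_move * p[(i - motion[0]) % len(p)][(j - motion[1]) % len(p[0])]
                             + p_stay * p[i][j])
        return aux

    for k in range(len(measurements)):
        p = move(p, motions[k])
        p = sense(p, measurements[k])
    return p
[/python]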

Sense Function

p=[0.2,0.2,0.2,0.2]
world=['green','red','red','green','green']
pHit = 0.6
pMiss = 0.2

def sense(p, Z):
	q=[]
	for i in range(len(p)):
		hit = (Z == world[i])
		q.append(p[i] * (hit * pHit + (1-hit) * pMiss))
	return q

Z = 'red'  # example measurement (assumed; not recorded in the notes)
print(sense(p, Z))
p=[0.2,0.2,0.2,0.2]
world=['green','red','red','green','green']
pHit = 0.6
pMiss = 0.2

def sense(p, Z):
	q=[]
	for i in range(len(p)):
		hit = (Z == world[i])
		q.append(p[i] * (hit * pHit + (1-hit) * pMiss))
	s = sum(q)
	for i in range(len(p)):
		q[i] = q[i] / s
	return q

Z = 'red'  # example measurement (assumed; not recorded in the notes)
print(sense(p, Z))
p=[0.2,0.2,0.2,0.2]
world=['green','red','red','green','green']
measurements = ['red', 'green']
pHit = 0.6
pMiss = 0.2

def sense(p, Z):
	q=[]
	for i in range(len(p)):
		hit = (Z == world[i])
		q.append(p[i] * (hit * pHit + (1-hit) * pMiss))
	s = sum(q)
	for i in range(len(p)):
		q[i] = q[i] / s
	return q

for k in range(len(measurements)):
	p = sense(p, measurements[k])

print(p)  # belief after applying all measurements
pExact = 0.8       # motion model values matching the 0.8 / 0.1 / 0.1 split used above
pOvershoot = 0.1
pUndershoot = 0.1

def move(p, U):
	# shift the belief by U cells in a cyclic world, spreading probability over
	# exact, overshoot, and undershoot outcomes
	q = []
	for i in range(len(p)):
		s = pExact * p[(i-U) % len(p)]
		s = s + pOvershoot * p[(i-U-1)%len(p)]
		s = s + pUndershoot * p[(i-U+1)%len(p)]
		q.append(s)
	return q
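A small usage check (driver assumed, not in the notes): repeatedly moving without sensing spreads the belief out, and it approaches the uniform limit distribution from the top of these notes.

[python]
p = [0, 1, 0, 0, 0]        # robot known to be in cell 1
for _ in range(1000):
    p = move(p, 1)         # move one cell to the right, 1000 times
print(p)                   # approximately [0.2, 0.2, 0.2, 0.2, 0.2]
[/python]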

PCA vs ICA

- Blind source separation (BSS): ICA succeeds, PCA does not
- Directional: ICA is directional, PCA is not
- Faces: PCA finds global features (e.g. average face, overall brightness); ICA finds local parts (eyes, noses, mouths)
- Natural scenes → ICA → edges; documents → topics
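A minimal sketch of the blind source separation point, using scikit-learn's FastICA and PCA (the library choice is an assumption; the notes don't name one): two independent sources are mixed linearly, ICA recovers signals close to the originals, while PCA only returns orthogonal directions of maximal variance.

[python]
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.RandomState(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                       # source 1: sinusoid
s2 = np.sign(np.sin(3 * t))              # source 2: square wave
S = np.c_[s1, s2] + 0.02 * rng.normal(size=(2000, 2))

A = np.array([[1.0, 0.5], [0.5, 1.0]])   # mixing matrix
X = S.dot(A.T)                           # observed mixed signals

S_ica = FastICA(n_components=2, random_state=0).fit_transform(X)  # ~ recovers the sources
S_pca = PCA(n_components=2).fit_transform(X)                      # orthogonal, max-variance directions
[/python]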

information theory
x1, x2, x3 -> learner -> y

010101
A 0, B 110, C 111, D 10
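A short sketch of why this variable-length code is useful, assuming symbol probabilities P(A) = 1/2, P(D) = 1/4, P(B) = P(C) = 1/8 (not recorded in the notes): the expected code length then equals the entropy, and decoding is unambiguous because no codeword is a prefix of another.

[python]
from math import log

code = {'A': '0', 'B': '110', 'C': '111', 'D': '10'}
prob = {'A': 0.5, 'B': 0.125, 'C': 0.125, 'D': 0.25}   # assumed distribution

expected_len = sum(prob[s] * len(code[s]) for s in code)
entropy = -sum(p * log(p, 2) for p in prob.values())
print(expected_len, entropy)   # both 1.75 bits

# decoding is unambiguous because no codeword is a prefix of another
def decode(bits):
    inverse = {v: k for k, v in code.items()}
    out, buf = [], ''
    for b in bits:
        buf += b
        if buf in inverse:
            out.append(inverse[buf])
            buf = ''
    return ''.join(out)

print(decode('110010'))   # 'BAD'
[/python]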

Joint Entropy
H(X,Y) = -Σ p(x,y) log p(x,y)

Conditional Entropy
H(Y|X) = -Σ p(x,y) log p(y|x)

Mutual Information
I(X,Y) = H(Y) - H(Y|X)

Kullback-Leibler Divergence
D(p||q) = Σ p(x) log(p(x) / q(x))
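A minimal sketch tying these formulas together on a small, made-up joint distribution (the numbers are illustrative only):

[python]
from math import log

# joint distribution p(x, y) over two binary variables (illustrative values)
pxy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
px = {x: sum(p for (xx, _), p in pxy.items() if xx == x) for x in (0, 1)}
py = {y: sum(p for (_, yy), p in pxy.items() if yy == y) for y in (0, 1)}

H_xy = -sum(p * log(p, 2) for p in pxy.values())                        # joint entropy H(X,Y)
H_y_given_x = -sum(p * log(p / px[x], 2) for (x, _), p in pxy.items())  # conditional entropy H(Y|X)
H_y = -sum(p * log(p, 2) for p in py.values())
I_xy = H_y - H_y_given_x                                                # mutual information I(X,Y)

# KL divergence between the two marginals, D(px || py)
D = sum(px[v] * log(px[v] / py[v], 2) for v in (0, 1))
print(H_xy, H_y_given_x, I_xy, D)
[/python]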

k-means clustering

- pick k centers (at random)
- each center "claims" its closest points
- recompute the centers by averaging the clustered points
- repeat until convergence (see the sketch below)
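A minimal sketch of these steps in plain Python (illustrative, not course code), clustering a handful of 2D points:

[python]
import random

def dist2(a, b):
    # squared Euclidean distance
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def kmeans(points, k, iters=100):
    centers = random.sample(points, k)       # pick k centers at random
    assignment = None
    for _ in range(iters):
        # each center "claims" its closest points
        new_assignment = [min(range(k), key=lambda c: dist2(p, centers[c])) for p in points]
        if new_assignment == assignment:     # converged: partition unchanged
            break
        assignment = new_assignment
        # recompute each center by averaging its clustered points
        for c in range(k):
            cluster = [p for p, a in zip(points, assignment) if a == c]
            if cluster:                      # an empty cluster keeps its old center
                centers[c] = [sum(dim) / len(cluster) for dim in zip(*cluster)]
    return centers, assignment

points = [[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]]
centers, assignment = kmeans(points, 2)
print(centers, assignment)
[/python]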

P^t(x): partition / cluster that object x belongs to at iteration t
C_i^t: set of all points in cluster i = {x s.t. P^t(x) = i}
center_i^t = Σ_{y ∈ C_i^t} y / |C_i^t|

P^t(x) = argmin_i ||x - center_i^(t-1)||²

K-means as optimization
configurations: (P, center)
score: E(P, center) = Σ_x ||center_{P(x)} - x||₂²
neighborhood of (P, center): {(P', center)} ∪ {(P, center')}

Properties of k-means clustering
- each iteration is polynomial: O(kn)
- finite (exponential) number of iterations: O(k^n)
- error decreases (if ties are broken consistently) [with one exception]
- can get stuck!