Correlation And Causation

Deep insight: correlation does not imply causation.

Example: is it more dangerous to be in a hospital?

             people  died  death rate
in hospital      40     4  10%
at home        8000    20  0.25%

The chance of dying in hospital appears to be 40 times larger than at home. But that is a correlation, not causation.

Split by whether people were sick:

in hospital  people  died  death rate
sick             36     4  11.1%
healthy           4     0  0%

at home      people  died  death rate
sick             40    20  50%
healthy        7960    20  0.251%

Conditioned on being sick, the death rate at home (50%) is far higher than in hospital (11.1%). Hospitals don't cause the deaths; sick people go to hospitals.
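
A quick check of these rates (a minimal sketch in Python; all numbers come from the tables above):

def rate(died, people):
	return died / people

print(rate(4, 40), rate(20, 8000))    # overall: 0.10 vs 0.0025 (40x)
print(rate(4, 36), rate(20, 40))      # sick:    0.111 vs 0.50
print(rate(0, 4), rate(20, 7960))     # healthy: 0.0 vs 0.00251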

P(exactly one head)

P(exactly one head) = 4 · P(first flip is the only head)

With four flips there are four equally likely positions for the single head, so the probability of one particular arrangement is multiplied by 4.
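
A brute-force check (a sketch; I'm assuming four flips of a fair coin, though the ×4 identity holds for any bias p):

from itertools import product

def p_exactly_one_head(p, n=4):
	# Enumerate all 2^n flip sequences and sum the probability of those with exactly one H
	total = 0
	for seq in product('HT', repeat=n):
		prob = 1
		for s in seq:
			prob *= p if s == 'H' else 1 - p
		if seq.count('H') == 1:
			total += prob
	return total

p, n = 0.5, 4
print(p_exactly_one_head(p, n))    # 0.25
print(n * p * (1 - p)**(n - 1))    # 4 * P(first flip is only head) = 0.25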

from __future__ import division  # so 1/n is true division in Python 2

class FlipPredictor(object):
	def __init__(self, coins):
		self.coins = coins          # P(heads) of each candidate coin
		n = len(coins)
		self.probs = [1/n]*n        # uniform prior over which coin was drawn
	def Pheads(self):
		# Expected P(heads): marginalize over which coin we might be holding
		return sum(c*p for c, p in zip(self.coins, self.probs))
	def update(self, result):
		# Bayes update: weight each coin by the likelihood of the observed flip
		likelihoods = [c if result == 'H' else 1-c for c in self.coins]
		posterior = [l*p for l, p in zip(likelihoods, self.probs)]
		total = sum(posterior)
		self.probs = [p/total for p in posterior]

def test(coins, flips):
	f = FlipPredictor(coins)
	guesses = []
	for flip in flips:
		f.update(flip)
		guesses.append(f.Pheads())
	return guesses

print(test([0.5, 0.4, 0.3], 'HHTH'))
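
Note how FlipPredictor is a small discrete Bayes filter: probs is the belief over which coin was drawn, update multiplies it by the likelihood of each flip and renormalizes, and Pheads marginalizes the belief to predict the next flip.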

Density

Probability for continuous spaces: a density f(x) describes how probability is spread over an interval. It integrates to 1, and the probability of any single exact value is 0.

Day of the year you were born (simplified 360-day year):
f(x) = 1/360 for 0 < x <= 360, and P(X = x) = 0 for any exact instant.
A quantity uniform over 60 units (e.g. the minute you were born): f(x) = 1/60 ≈ 0.0166.

Time of day you were born, if before noon is twice as likely as after: f(x <= noon) = 2·f(x > noon)
a = f(x <= noon) = (2/3)·(1/12) = 1/18 ≈ 0.0555 per hour
b = f(x > noon) = (1/3)·(1/12) = 1/36 ≈ 0.0277 per hour
(the afternoon carries total probability 1/3 spread over 12 hours, the morning 2/3)
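
A tiny sanity check of those two densities:

b = (1/3) * (1/12)      # afternoon: total mass 1/3 spread over 12 hours
a = 2 * b               # morning is twice as dense
print(a, b)             # 0.0555..., 0.0277...
print(12*a + 12*b)      # the density integrates to 1.0 over the day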

Cancer

P(c)= p0 = 0.1, p(¬c)=0.9
p(pos|c)= p1 = 0.9, p(pos|¬c)=0.1
p(neg|¬c)= p2 = 0.8, p(neg|c)= 0.2

p(p)= 0.09 + 0.18 = 0.27

def f(p0, p1, p2):
	return p0*p1 + (1-p0)*(1-p2)
print f(0.1, 0.9, 0.8)

Program Bayes Rule

def f(p0, p1, p2):
	# P(c|Pos): chance of cancer given a positive test
	return p0*p1 / (p0*p1 + (1-p0)*(1-p2))

print(f(0.1, 0.9, 0.8))  # ≈ 0.333

def f(p0, p1, p2):
	# P(c|Neg): chance of cancer given a negative test
	return p0*(1-p1) / (p0*(1-p1) + (1-p0)*p2)

print(f(0.1, 0.9, 0.8))  # ≈ 0.0137

Flip Two Coins

Two independent coins with P(H) = p1 and P(H) = p2. Because the flips are independent, P(both heads) = p1·p2:

def f(p1, p2):
	return p1 * p2

print(f(0.5, 0.8))  # 0.4

Now pick one of two coins and flip it:
c1: P(H|c1) = p1
c2: P(H|c2) = p2

P(c1) = p0 = 0.3
P(c2) = 1 - p0 = 0.7
p1 = 0.5
p2 = 0.9
P(H) = 0.3·0.5 + 0.7·0.9 = 0.15 + 0.63 = 0.78

def f(p0, p1, p2):
	# total probability of heads, marginalizing over the coin choice
	return p0*p1 + (1-p0)*p2

print(f(0.3, 0.5, 0.9))  # 0.78

Disease Test

P(c) = 0.1, P(¬c) = 0.9
P(Pos|c) = 0.9, P(Neg|c) = 0.1
P(Neg|¬c) = 0.5, P(Pos|¬c) = 0.5

Test = Neg:
P(c, Neg) = 0.01
P(¬c, Neg) = 0.45
P(Neg) = 0.46
P(c|Neg) = 0.01/0.46 ≈ 0.0217
P(¬c|Neg) ≈ 0.9783

Test = Pos:
P(c, Pos) = 0.09
P(¬c, Pos) = 0.45
P(Pos) = 0.54
P(c|Pos) = 0.09/0.54 ≈ 0.167
P(¬c|Pos) ≈ 0.833
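
The same computation in Python (my own sketch of the arithmetic above):

p_c, p_pos_c, p_neg_nc = 0.1, 0.9, 0.5

joint_c_neg = p_c * (1 - p_pos_c)           # P(c, Neg)  = 0.01
joint_nc_neg = (1 - p_c) * p_neg_nc         # P(¬c, Neg) = 0.45
print(joint_c_neg / (joint_c_neg + joint_nc_neg))    # P(c|Neg) ≈ 0.0217

joint_c_pos = p_c * p_pos_c                 # P(c, Pos)  = 0.09
joint_nc_pos = (1 - p_c) * (1 - p_neg_nc)   # P(¬c, Pos) = 0.45
print(joint_c_pos / (joint_c_pos + joint_nc_pos))    # P(c|Pos) ≈ 0.167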

Probability Given

P(c) = 0.01, P(¬c) = 0.99
P(Pos|c) = 0.9, P(Neg|c) = 0.1
P(Neg|¬c) = 0.9, P(Pos|¬c) = 0.1

P(c, Neg) = 0.01·0.1 = 0.001
P(¬c, Neg) = 0.99·0.9 = 0.891
P(Neg) = 0.892
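
Checked in Python (a sketch; the variable names are mine):

p_c, p_neg_c, p_neg_nc = 0.01, 0.1, 0.9
print(p_c * p_neg_c)                           # P(c, Neg)  = 0.001
print((1 - p_c) * p_neg_nc)                    # P(¬c, Neg) = 0.891
print(p_c * p_neg_c + (1 - p_c) * p_neg_nc)    # P(Neg)     = 0.892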

P(gone) = 0.6
P(home) = 0.4
P(rain|home) = 0.01, P(¬rain|home) = 0.99
P(rain|gone) = 0.3, P(¬rain|gone) = 0.7
P(home|rain) = 0.4·0.01 / (0.4·0.01 + 0.6·0.3) ≈ 0.0217
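
As code (a minimal sketch of the Bayes computation above):

p_home, p_gone = 0.4, 0.6
p_rain_home, p_rain_gone = 0.01, 0.3
print(p_home * p_rain_home / (p_home * p_rain_home + p_gone * p_rain_gone))  # ≈ 0.0217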

def f(p):
	return p

print(f(0.3))

Bayes Rule

Named after Rev. Thomas Bayes.

Example: P(c) = 0.01
the test is positive 90% of the time if you have c
the test is negative 90% of the time if you don't have c
test = positive
probability of having cancer: 8 1/3 % ≈ 8.33%

Bayes Rule:
prior probability + test evidence -> posterior probability

prior:
P(c) = 0.01
P(Pos|c) = 0.9
P(Neg|¬c) = 0.9

joint (posterior before normalizing):
P(c, Pos) = P(c)·P(Pos|c) = 0.009
P(¬c, Pos) = P(¬c)·P(Pos|¬c) = 0.099

normalize:
P(Pos) = P(c, Pos) + P(¬c, Pos) = 0.108

posterior:
P(c|Pos) = 0.009/0.108 ≈ 0.0833
P(¬c|Pos) = 0.099/0.108 ≈ 0.9167

This is the Bayes Rule algorithm: form the joints, normalize, read off the posterior.
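
The algorithm as a reusable function (a sketch; `cancer_posterior` is my own name, written for a test with the sensitivity/specificity form used above):

def cancer_posterior(prior, sensitivity, specificity):
	# Step 1: joint (unnormalized posterior) for each hypothesis
	joint_c = prior * sensitivity               # P(c, Pos)
	joint_nc = (1 - prior) * (1 - specificity)  # P(¬c, Pos)
	# Step 2: normalize by the total probability of the evidence
	p_pos = joint_c + joint_nc                  # P(Pos)
	# Step 3: read off the posterior
	return joint_c / p_pos, joint_nc / p_pos    # P(c|Pos), P(¬c|Pos)

print(cancer_posterior(0.01, 0.9, 0.9))         # (0.0833..., 0.9166...)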

Conditional Probability

Dependent things: the probability of one outcome can depend on another.

Example: P(smart) = 0.5, P(dumb) = 0.5
P(P@S) = 0.001
P(P@S|smart) = 0.002
P(P@S|dumb) = 0.000
The outcome P@S clearly depends on whether you are smart.

P(cancer) = 0.1
P(¬cancer) = 0.9
The dependence is expressed with conditional probabilities:
P(positive|cancer) = 0.9
P(negative|cancer) = 0.1
P(positive|¬cancer) = 0.2
P(negative|¬cancer) = 0.8

Cancer  Test  P(joint)
Y       Pos   0.09  (0.1·0.9)
Y       Neg   0.01  (0.1·0.1)
N       Pos   0.18  (0.9·0.2)
N       Neg   0.72  (0.9·0.8)

P(Positive Result) = 0.09 + 0.18 = 0.27

Complements follow for free:
P(c) -> P(¬c) = 1 - P(c)
P(Pos|c) -> P(Neg|c) = 1 - P(Pos|c)
P(Pos|¬c) -> P(Neg|¬c) = 1 - P(Pos|¬c)

Total probability:
P(Pos) = P(Pos|c)·P(c) + P(Pos|¬c)·P(¬c)

In general, knowing P(test|disease):
P(test) = P(test|disease)·P(disease) + P(test|¬disease)·P(¬disease)
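
The joint table and total probability from this section, as a check (numbers from above):

p_c, p_pos_c, p_pos_nc = 0.1, 0.9, 0.2
print(p_c * p_pos_c)                           # P(c, Pos)  = 0.09
print(p_c * (1 - p_pos_c))                     # P(c, Neg)  = 0.01
print((1 - p_c) * p_pos_nc)                    # P(¬c, Pos) = 0.18
print((1 - p_c) * (1 - p_pos_nc))              # P(¬c, Neg) = 0.72
print(p_c * p_pos_c + (1 - p_c) * p_pos_nc)    # P(Pos)     = 0.27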