6. Discrete Random Variables II

164,413 views

MIT OpenCourseWare

11 years ago

MIT 6.041 Probabilistic Systems Analysis and Applied Probability, Fall 2010
View the complete course: ocw.mit.edu/6-041F10
Instructor: John Tsitsiklis
License: Creative Commons BY-NC-SA
More information at ocw.mit.edu/terms
More courses at ocw.mit.edu

Comments: 59
@leduran 3 years ago
This whole series is brilliant. Excellent job by Prof. Tsitsiklis. My respect and admiration.
@nikosnikolakis6991 2 years ago
I could tell he is Greek by his accent lol
@genosimms8816 6 years ago
Clear, concise, efficient, and most importantly he doesn't make you take 1000 notes. I love his reviews at the start of every lecture... sadly many professors are either too lazy, inept, or inefficient to spend time on review each class. I've watched many, many math videos, and this professor is probably the best math lecturer I have heard yet. (Though Professor Leonard's calc series is wonderful as well.)
@francisdavecabanting4453 4 years ago
I have the property 'memorylessness of math concepts': whatever happened in the lecture has no bearing on my exam.
@michaellewis7861 4 years ago
So, if anyone wants an explanation of roughly 38:00-38:31: X takes values in the natural numbers 1, 2, 3, 4, .... If we want the distribution after some number of trials have already gone by, say n of them (in the video n = 1), we are looking at E[X | X > n]. The conditioning 'X > n' is only non-trivial because X itself still takes values over all the natural numbers; we do not change the domain of values that X takes. Given that, restricting the expectation to the case X > n amounts to shifting the random variable: we compute the expectation of X + n under the original distribution. For example, suppose we start at n = 5, i.e., X is known to be greater than 5 (again, in the video it starts at n = 1); then we add 5 to each natural-number value of X and obtain the distribution we were trying to attain by adjusting the random variable this way. In formula form, E[X + n] = Σ_x (x + n) p_X(x), where x runs over the natural numbers, so every term is n greater than in plain E[X]. As a consequence, E[X | X > n] = E[X + n | original sample space] = E[X + n].
@enuske 10 years ago
memorylessness can be expressed as P(X > m+n | X > m) = P(X > n)
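That identity is easy to sanity-check numerically. Here is a minimal Monte Carlo sketch (NumPy's geometric sampler counts trials up to and including the first success, matching the lecture's convention; the parameter values are arbitrary and only for illustration):

import numpy as np

rng = np.random.default_rng(0)
p, m, n = 0.3, 2, 3
X = rng.geometric(p, size=1_000_000)               # tosses until the first head

# Tail form of memorylessness: P(X > m+n | X > m) == P(X > n)
print((X[X > m] > m + n).mean(), (X > n).mean())   # both approx (1-p)**n

# Expectation form discussed in the comment above: E[X | X > m] == m + E[X]
print(X[X > m].mean(), m + X.mean())               # both approx m + 1/p

Both pairs of numbers agree up to Monte Carlo error, which is exactly the "restart after m wasted tosses" intuition.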
@jiehe6943 7 years ago
I really like the explanations of memorylessness.
@raviiit6415 6 years ago
Great explanation, kudos professor. I initially had a doubt about why the conditional expectation picks up a +1 (E[X | X > 1] = E[X] + 1) and didn't understand it at first, but then you explained that one toss was wasted and the experiment starts over, and that made sense to me. I'm really loving probability because of you. Last but not least, thanks to MIT for providing such excellent videos.
@ankurmazumder5590 5 years ago
Can you explain why +1? What does that have to do with wasting one toss?
@sowaszpieg7528 11 months ago
@ankurmazumder5590 Think of the expected value as the average number of tosses needed to obtain heads for the first time. Dude A always sloppily wastes one toss (being certain to get tails) and then proceeds to toss the coin normally. Dude B simply tosses the coin normally without wasting a single toss. Now consider, on average, the relation between the number of tosses B has to perform and the analogous number for A. It's pretty intuitive that the latter will be greater by one (because it's equivalent to artificially adding one toss each time).
@binaykumar8292 3 years ago
I like to hear Prof. Tsitsiklis say "equally likely".
@OMK11 2 years ago
"Equally Likely" just makes everything a lot easier. :))
@SnoozeDog 7 years ago
"In a previous life" lmaooo #mathGoneSpiritual
@jokomo1406 3 years ago
This made me think of Pascal's wager haha
@joshryan4387 11 years ago
What are the chances that the joint pmf is 4/20 ?
@user-kk3uu7sp8d 4 years ago
69%
@florianwicher 6 years ago
OCW really helps me with my studies. I will donate once I have money again :,D
@claudioricciardiello9601 8 years ago
Thank you! :)
@zhaoxingdeng5264 1 year ago
Great lecture!
@darkmythos4457 6 years ago
Thank you!
@supercrazpianomanaic 7 years ago
OCW is the Best!
@30saransh 4 months ago
At 33:00 I'm confused about how X-2 is the remaining number of heads; it should be tails, right, since we only get one head and then the experiment stops.
@chiragpalan9780 8 years ago
In a conditional PMF, what if the probability of each value of the random variable is not the same? Then how do we scale each probability?
@Suaru 8 years ago
+Chirag Palan Then each value is multiplied by its probability and then they are summed together.
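More generally, the conditional PMF keeps whatever (possibly unequal) probabilities the outcomes in the conditioning event already had and simply rescales them by the total probability of that event, so they sum to 1 again. A small sketch of that renormalization (the numbers are made up purely for illustration):

# p_{X|A}(x) = p_X(x) / P(A) for x in A, and 0 otherwise
pmf = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4}                  # unequal probabilities
A = {2, 3, 4}                                           # conditioning event, e.g. {X > 1}
p_A = sum(prob for x, prob in pmf.items() if x in A)    # P(A)
cond_pmf = {x: prob / p_A for x, prob in pmf.items() if x in A}
print(cond_pmf, sum(cond_pmf.values()))                 # values keep their ratios, sum to 1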
@ShwetankT 5 years ago
What's the "cheat sheet for exam" thing?
@youcefyahiaoui1465 9 years ago
Great lecture, Dr. Tsitsiklis. I have a question about where you applied divide and conquer to solve the geometric example with A1: X=1 and A2: X>1. You said that P(X>1) = 1-p, but this is the probability that the first toss is a T. The case where the head appears on the 5th toss, for example, also corresponds to X>1, and that probability is (1-p)^4·p. It seems to me that P(X>1) is the geometric sum over all the cases corresponding to X>1... Maybe I am wrong in my thinking, or just a little confused as to why you set P(X>1) equal to the probability of getting T on the first toss.
@yuriw7116 9 years ago
Youcef Yahiaoui, I think there is a very subtle difference (I will try to explain this while I myself am actually confused; let's see if I can persuade myself). First: by the axioms, P(X>1) and P(X=1) should add up to 1; A is an event, not the PMF of a geometric random variable. Second: interpret A as a single event, i.e., A1 = success and A2 = failure of a Bernoulli random variable. Third: even if we describe things with the geometric random variable, then the first trial is a failure and all subsequent trials until there is a success have probability (1-p)·Σ p(1-p)^(k-1), where k runs from 1 to infinity; the geometric sum then drops out since it equals 1...
@yuriw7116 9 years ago
Youcef Yahiaoui, another thing is (I guess) that a probability function has to add up to 1, which means all possible outcomes have to be taken into consideration and mapped out according to the probability function, whereas an event is just an event...
@judithcorstjens2650 9 years ago
Just say to yourself: 'either the first toss is H, with probability p, or the first toss is not H (is T), with probability 1-p.' In effect he is adding up all the probabilities you are worried about by bundling them together as 'not H on the first toss'.
@jiehe6943 7 years ago
Try to convince yourself that "X > 1" is equivalent to "the first toss is T". You'll get a better understanding of the subject.
@michaellewis7861 4 years ago
Youcef Yahiaoui the function is effectively renormalized after each toss.
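For reference, the divide-and-conquer step this thread is discussing can be written out using the total expectation theorem with A1 = {X=1} and A2 = {X>1} (standard geometric notation, with p the probability of heads):

\begin{align*}
P(X>1) &= P(\text{first toss is } T) = 1-p, \\
E[X] &= P(X=1)\,E[X \mid X=1] + P(X>1)\,E[X \mid X>1] \\
     &= p \cdot 1 + (1-p)\,(1 + E[X]) \\
     &= 1 + (1-p)\,E[X] \quad\Longrightarrow\quad E[X] = \frac{1}{p}.
\end{align*}

The single factor 1-p already accounts for every sequence that starts with a tail, because all of those sequences are bundled into the one event {first toss is T}.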
@michaellewis7861 4 years ago
I don't understand why people don't always make the sample space explicit. That way, a non-conditional probability is seen to be conditional on the given sample space. It shows that probability depends on the sample space, and that there can be varying specifications of it.
@miheerprakash4163 3 years ago
How does the +1 end up coming in?
@thiyagutenysen8058 5 years ago
It's good to watch at 1.25x speed.
@nocontextnikhil 1 year ago
"Whatever happened in the past it happened, but has no bearing what's going to happen in future" i guess this is true for life also 🙂
@campodearroz2 10 years ago
Great lecture! But was the memorylessness property correctly formulated? I think it should be P(X+2|X>2). Or maybe I understood his notation incorrectly.
@mazensayed3250 4 years ago
x-2 | x>2: if you let x-2 = t, it becomes P(t | t>0).
@yaweli2968 2 years ago
Well, (X-2) = Y, assuming X and Y count the same thing, i.e., tosses before the first head. That is somewhat confusing. What you are talking about is Pr(Y+2 | Y>2) = Pr_Y(Y), i.e., the probability of getting the first head when you only start timing the Y person from the 3rd flip onward; but he is considering two people here, taking Y to be one person's random count until the first head and X the other person's.
@agarwalarti 4 years ago
At 40:40, why is P(X>1) = 1-p? Are we using the total probability concept here?
@MrSyedaliraza 4 years ago
All probabilities must add up to 1. We have divided the sample space into two disjoint events A1 and A2. If P(A1) = p, then P(A2) = 1-p.
@gainauntu 4 years ago
P(X>1) means the first coin flip was a tail, and 1-p is the probability of a tail.
@ffidmetham 4 years ago
I don't understand why E[X|X>1] = 1 + E[X] if the first try fails. If the first try fails, then where does the +1 come from? Doesn't the +1 mean that you obtain the head on the first try? And if so, how can we add 1 to that?
@vivekdas3807 3 years ago
If you get it, please explain it to me.
@DuyNguyen-lo8xx 3 years ago
X is the number of tosses until you get the first head. So X>1 means you already wasted the first toss, and after that you need to redo the experiment. Example: on average, you need 5 tosses to get the first head, i.e., E[X] = 5. Now you know the first toss was a tail, and since the experiment is memoryless, you need on average 5 additional tosses to get the first head. So E[X|X>1] = 1 + E[X] = 1 + 5 = 6.
@xp_money7847 1 year ago
Because you wasted one toss yesterday. Say we expect that it takes 30 days to get a head, regardless of when I start tossing. If I couldn't get one yesterday, then I would say that I expect a head after 30 more days starting from today, i.e., 1+30 = 31 days in total. In general, if you wasted N days, it's 30+N. We are basically updating.
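For anyone in this thread who prefers to see the +1 fall out of the algebra rather than the restart argument, here is one way to compute it directly from the conditional PMF (assuming the usual geometric PMF p_X(k) = (1-p)^{k-1} p):

\begin{align*}
p_{X \mid X>1}(k) &= \frac{P(X=k)}{P(X>1)} = \frac{(1-p)^{k-1}p}{1-p} = (1-p)^{k-2}p, \qquad k \ge 2, \\
E[X \mid X>1] &= \sum_{k=2}^{\infty} k\,(1-p)^{k-2}p
              = \sum_{j=1}^{\infty} (j+1)\,(1-p)^{j-1}p \\
              &= \underbrace{\sum_{j=1}^{\infty} j\,(1-p)^{j-1}p}_{E[X]}
               + \underbrace{\sum_{j=1}^{\infty} (1-p)^{j-1}p}_{1}
              = E[X] + 1.
\end{align*}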
@puneetgarg4731 1 year ago
I am confused about why memorylessness didn't work in the king's sibling example. Clearly getting a girl or a boy is independent, so P(female sibling) should equal 1/2.
@noobieexplorer4697 1 year ago
It was? The sample space was BB, BG, GB, GG, each with probability 1/4. P(GB) = 1/4 because P(B) = 1/2, P(G) = 1/2, and they are independent. The question was about the King's sibling being a girl. This is a conditional probability, which assumes that there is already a boy (the King).
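The two answers correspond to two different conditionings, and a quick simulation of the version described in the reply above (two children, each independently a boy or a girl with probability 1/2, conditioned on the family containing at least one boy) shows where the 2/3 comes from; the setup and seed are purely illustrative:

import random

random.seed(0)
girl_sibling = total = 0
for _ in range(200_000):
    kids = (random.choice("BG"), random.choice("BG"))
    if "B" in kids:                      # there is a boy who can be the king
        total += 1
        girl_sibling += "G" in kids      # the sibling is a girl unless the family is BB
print(girl_sibling / total)              # approx 2/3

If instead you conditioned on one specific, identified child being a boy, the answer would indeed be 1/2, which is the intuition behind the original question.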
@TakeaGlimpseatCris 11 years ago
1/16
@vishushukla9884 1 year ago
Cheat sheet was amazing 😂😂
@menelaosperdikeas1353 6 years ago
Awesome lectures, however I call bullshit on the following: at 32:03 he says "you can make it formal (the argument about the memoryless property) by just manipulating PMFs formally", and then proceeds up to 34:04, by which time he has supposedly "proven formally" the memoryless property of the geometric distribution and declares victory. Except of course he has actually proven nothing at all. Either that or I am plain thick and have failed to see the "proof".
@ElizaberthUndEugen 5 years ago
His explanation is very handwavy indeed, and "formal" is really stretching it. If you unpack his "proof" you get this (I guess... I too had to brood over this a little):

\begin{align*}
P_{X-2 \mid X>2}(k) &\equiv P(X-2 = k \mid X > 2) \\
&= P(X = k + 2 \mid X > 2) \\
&= \frac{P(X = k + 2,\; X > 2)}{P(X > 2)} \\
&= \frac{P(X = k + 2)}{P(X > 2)} \\
&= \frac{(1-p)^{k+2-1}\, p}{(1-p)^2} \\
&= (1-p)^{k-1}\, p \\
&= P_X(k)
\end{align*}

Render it here: quicklatex.com/
@niil87 4 years ago
@ElizaberthUndEugen thank you very much for the exact computation!
@paragramteke6739 4 years ago
24:23 THUG LIFE 😎😎😎