Double Machine Learning for Causal and Treatment Effects

  37,234 views

Becker Friedman Institute University of Chicago

1 day ago

Victor Chernozhukov of the Massachusetts Institute of Technology provides a general framework for estimating and drawing inference about a low-dimensional parameter in the presence of a high-dimensional nuisance parameter, using a new generation of nonparametric statistical (machine learning) methods.
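The partialling-out recipe the talk builds toward can be sketched in a few lines. Below is a minimal illustration (my own, not from the talk): a simulated partially linear model Y = theta*D + g(Z) + eps with D = m(Z) + V, where the nuisances E[Y|Z] and E[D|Z] are fit with cross-fitting by a stand-in least-squares learner. The data-generating process and all names are hypothetical; any ML method (lasso, forests, boosting, nets) could replace the stand-in learner.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, theta = 2000, 5, 1.5

# Simulated partially linear model: Y = theta*D + g(Z) + eps,  D = m(Z) + V
Z = rng.normal(size=(n, p))
g = np.sin(Z[:, 0]) + Z[:, 1] ** 2
m = 0.5 * Z[:, 0] + 0.25 * Z[:, 2]
D = m + rng.normal(size=n)
Y = theta * D + g + rng.normal(size=n)

def fit_predict(X_tr, y_tr, X_te):
    """Stand-in nuisance learner: least squares on quadratic features.
    In practice this would be lasso, a random forest, boosting, etc."""
    phi = lambda X: np.hstack([np.ones((len(X), 1)), X, X ** 2])
    coef, *_ = np.linalg.lstsq(phi(X_tr), y_tr, rcond=None)
    return phi(X_te) @ coef

# Cross-fitting: fit E[Y|Z] and E[D|Z] on one half, residualize the other half
idx = rng.permutation(n)
half = [idx[: n // 2], idx[n // 2:]]
num = den = 0.0
for tr, te in [(half[0], half[1]), (half[1], half[0])]:
    v = D[te] - fit_predict(Z[tr], D[tr], Z[te])   # D - Ehat[D|Z]
    u = Y[te] - fit_predict(Z[tr], Y[tr], Z[te])   # Y - Ehat[Y|Z]
    num += v @ u
    den += v @ v

theta_hat = num / den   # residual-on-residual regression, close to theta = 1.5
print(theta_hat)
```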

Comments: 15
@ForeverSensei2030 · 7 years ago
Appreciate your work, Professor.
@mengxiazhang93 · 3 years ago
The presentation is very helpful! Thank you!
@jicao9205 · 2 years ago
The presentation is awesome. Thank you!
@mastafafoufa5121 · 3 years ago
Aren't we looking at predicting E[Y|(D,Z)], in other words how D and Z jointly influence Y, as a first step, and then E[D|Z] as a second step? In the slide at 10:52 they predict E[Y|Z] instead of E[Y|(D,Z)], which is a bit confusing, since the treatment is not controlled and is stochastic as well...
@MrTocoral · 3 years ago
E[Y|D,Z] would be the ultimate goal (predicting the outcome as a joint function of treatment and covariates). This is what standard ML methods do, as presented in the beginning, but in this case it doesn't provide a good estimator of the treatment effect. I think the approach here is similar to multiple linear regression, where we first regress D on Z and then use the resulting residual to isolate the effect of D independently of Z. So the question here is rather: why do we regress Y-E[Y|Z] on D-E[D|Z], instead of Y itself? In multiple linear regression, the first step ensures that the residual is uncorrelated with Z, so using Y or Y-E[Y|Z] is equivalent. But here, since the model is semilinear (I think, but perhaps also because we use ML methods), some effect of g(Z) on Y may remain correlated with D even after we take the residual D-E[D|Z]. So we need to use Y-E[Y|Z] to approach the real treatment effect.
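A toy numeric sketch of this point (my own illustration, not from the video): with deliberately imperfect nuisance estimates, here shrunk oracle fits standing in for regularized ML, regressing raw Y on the treatment residual keeps a first-order bias, while the residual-on-residual (orthogonalized) regression cancels most of it. All numbers and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n, theta = 50000, 2.0
Z = rng.normal(size=n)
D = 0.8 * Z + rng.normal(size=n)          # m(Z) = 0.8*Z
g = Z + Z ** 2                            # confounding g(Z), correlated with Z
Y = theta * D + g + rng.normal(size=n)

# Imperfect nuisance fits, mimicking regularized ML: both shrunk by 50%
m_hat = 0.5 * (0.8 * Z)                       # biased Ehat[D|Z]
l_hat = 0.5 * (theta * 0.8 * Z + Z + Z ** 2)  # biased Ehat[Y|Z]

v = D - m_hat                             # treatment residual D - Ehat[D|Z]

# Naive: regress raw Y on v -- the leftover g(Z) still leaks into the slope
theta_naive = (v @ Y) / (v @ v)

# Orthogonalized (DML-style): regress the outcome residual on v
theta_dml = (v @ (Y - l_hat)) / (v @ v)

print(theta_naive, theta_dml)   # theta_dml lands much closer to theta = 2
```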
@darthyzhu5767 · 7 years ago
Great talk, wondering where to access the slides.
@ruizhenmai1194 · 4 years ago
@mathieumaticien Hi, the slides have already been removed.
@VainCape · 2 years ago
@ruizhenmai1194 Why?
@marcelogallardo9218 · 3 years ago
Most impressive.
@patrickpower7102 · 3 years ago
In "perfectly set-up" randomized controlled trials, m_0 wouldn't vanish, but rather would be a constant value of 0.5 for all values of Z, no? (6:25)
@PrirodnyiCossack · 3 years ago
Yes, though one can assume that that constant has been partialled out, which would give zero.
@gwillis3323 · 3 years ago
No, because D isn't binary; D is continuous. D = m(Z) + V, where V is a random variable that does not depend on Z. In a perfect trial, D = V, so for example D might be drawn from a Gaussian distribution with sufficient support to make the inferences you wish to make. You could go further and say that in a "perfect" trial, V is uniformly distributed over some sufficiently large domain. I think here "perfect" just means "not confounded at all".
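A quick numeric illustration of the exchange above (my own sketch, not from the talk): when a continuous D is randomized, E[D|Z] is a constant, so partialling it out just centers D, and the residual regression recovers theta even though g(Z) still moves Y. The data-generating process is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n, theta = 20000, 1.0
Z = rng.normal(size=n)

# "Perfect" trial: D = 0.5 + V with V independent of Z, so m(Z) = E[D|Z] = 0.5
D = 0.5 + rng.normal(size=n)
Y = theta * D + Z ** 2 + rng.normal(size=n)   # g(Z) = Z^2 still drives Y

# Partialling out the constant m just centers D; no confounding bias remains
v = D - D.mean()
theta_hat = (v @ (Y - Y.mean())) / (v @ v)
print(theta_hat)   # close to theta = 1.0
```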
@user-zj1kz6mh6g · 4 months ago
I am safe in my knowledge and curiosity
@MrRestorevideos · 3 months ago
A machine learner who worked back in the '30s 🤣
@chockumail · 2 months ago
"I resisted to call it ML and I gave up" and machine learners in the '30s :) Hilarious