OP: wz151400

[Other] [English Econometrics Material] Econometrics with Machine Learning [Promotion Rewarded]

OP: wz151400 (employment verified), posted on 2026-2-5 17:42:43

Econometrics With Machine Learning.pdf (8.66 MB, price: RMB 17)
Rich content: a large, 385-page package.
Up to date: the latest material, released in 2025.
Highly practical: the text is fully vector (selectable), well suited to translation and study!
The resource reflects well the current direction of research relevant to professionals concerned with 'partial effects' in the statistical analysis of data, so-called 'causal analysis'. Partial effects are of central interest to policy and treatment-effect evaluation, as well as to optimal decision making in all applied fields: market research, evaluation of treatment outcomes in economics and health, finance, counterfactual analysis, and model building in all areas of science.

The holy grail of quantitative economics/econometrics research over the last 100 years has been the identification and development of 'causal' models, with a primary focus on the conditional expectation of one or more target variables/outcomes, conditioned on several 'explanatory' variables ('features' in ML jargon). This edifice depends crucially on two decisions: a correct functional form, and a correct set of both target explanatory variables and additional control variables, reflecting various degrees of departure from the gold standard of randomized experiments and sampling. The path to this holy grail has been troubled, since the substantive sciences offer little or no guidance on functional forms, and certainly little or no indication of sampling and experimental failures, such as selection.

Most economists would admit, at least privately, that quantitative models fail to perform successfully in a consistent manner. This failure is especially prominent in out-of-sample forecasting and rare-event prediction, that is, in counterfactual analysis, a central activity in policy evaluation and optimal decision making. The dominant linear, additively separable multiple regression, the workhorse of empirical research for so many decades, has likely created a massive reservoir of misleading 'stylised facts' which are artefacts of linearity and of its most obvious flaw, constant partial effects (coefficients).

Nonlinear, nonparametric, semiparametric and quantile models and methods, along with the related approximate/asymptotic inference theory, have developed at a rapid pace to deal with these shortcomings. This development has been facilitated by rapidly expanding computing capacity and speed, and by the greater availability of rich data samples. These welcome developments are meant to provide more reliable and 'robust' empirical findings and inferences. For a long time, however, these techniques were limited to a small number of conditioning variables, or a small set of 'moment conditions', and were subject to the curse of dimensionality in nonparametric and other robust methods.

The advent of regularization and penalization methods and algorithms has opened the door to model searches in which an impressive degree of allowance may be made both for possible nonlinearity of functional forms (filters, 'learners') and for potential explanatory/predictive variables, possibly even larger in number than the sample size. Excitement about these new 'machine learning (ML)' methods is understandable, given their generally impressive prediction performance. Stupendously fast and successful 'predictive text' search is the public's most common point of contact with this experience. Fast and cheap computing is the principal facilitator; some consider it 'the reason' for mass adoption. It turns out, however, that an exclusive focus on prediction criteria has some deleterious consequences for objectives that are central to economic analysis and other substantive areas.
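The trade-off the foreword describes is easy to reproduce. Below is a minimal sketch (mine, not the book's) assuming numpy and scikit-learn: two nearly collinear regressors with equal true partial effects of 1. LASSO tends to keep one and drop the other, so the fitted conditional mean and R² barely change while the surviving coefficient no longer estimates the true per-variable partial effect.

```python
# Minimal sketch (not from the book): two nearly collinear regressors with
# equal true partial effects. LASSO's sparsity tends to keep one and drop
# the other; prediction barely changes, but the surviving coefficient no
# longer estimates the true per-variable partial effect.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)           # x2 almost collinear with x1
X = np.column_stack([x1, x2])
y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)  # true partial effects: (1, 1)

ols = LinearRegression().fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)

print("OLS coefficients:  ", ols.coef_)    # unstable, but they sum to ~2
print("LASSO coefficients:", lasso.coef_)  # typically one is pushed to ~0
print("OLS   R^2:", round(ols.score(X, y), 3))
print("LASSO R^2:", round(lasso.score(X, y), 3))  # nearly identical fit
```

The conditional mean (and hence the fit) is well estimated either way; it is the attribution of the effect across the correlated 'causes' that sparsity distorts.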
Highly related ‘causes’ and features are quite easily removed by ‘sparsity’ techniques such as LASSO, producing ‘biased’ estimation of partial effects. In my classes, I give an example of a linear model with some exactly multicollinear variables, making the identification of some partial effects impossible (‘biased’?), without impacting the estimation of the conditional mean, the ‘goodness of fit’, or the prediction criteria. The conditional mean is an identified/estimable function irrespective of the degree of multicollinearity! The general answer to this problem has been, one way or another, to withhold a set of target features from ‘selection’ and possible elimination by LASSO and other naive ‘model selection’ techniques. Double machine learning (DML), random forests, and subset selection and aggregation/averaging are examples of such methods, some being pre-selection approaches and others various types of model averaging. There are numerous variations on these examples, but I choose the taxonomy of ‘model selection’ versus ‘model averaging’ approaches. These may be guided by somewhat traditional econometric thinking and approaches, or by algorithmic, computer-science approaches that are less concerned with rigorous statistical ‘inference’.

Econometricians are concerned with rigorous inference and rigorous analysis of identification. Given their history of examining poorly collected ‘observational’ data, on the one hand, and the immediacy of costly failures of models in practical applications and governance, on the other, econometricians are well placed to lead the development of ‘big data’ techniques that accommodate the dual goals of accurate prediction and identification (unbiased?) of partial effects.

This volume provides a very timely set of 10 chapters that help readers appreciate the nature of the challenges and the promise of big data methods (data science!?), with a timely and welcome emphasis on ‘debiased’ and robust estimation of partial and treatment effects. The first three chapters are essential reading, helped by a bridge over the ocean of confusing new names for old concepts and objects in statistics (see the Appendix on terminology). Subsequent chapters contribute by delving further into some of the rapidly expanding techniques and some key application areas, such as the health sciences and treatment effects. Doubly robust ML methods are introduced in several places in this volume, and will surely be methods of choice for some time to come.

The resource lays bare the challenges of causal model identification and analysis, which reflect the familiar challenges of model uncertainty and sampling variability. It makes clear that larger numbers of variables and moment conditions, as well as greater flexibility in functional forms, ironically produce new challenges (e.g., highly dependent features; the reporting and summary of partial effects, which are no longer artificially constant; an increased danger of endogeneity; ...). Wise and rigorous model building and expert advice will remain an art, especially as long as we deal with subfield (economics) models which cannot take all other causes into account. Including everything and the kitchen sink turns out to be a hindrance in causal identification and counterfactual analysis, but a boon to black-box predictive algorithms. A profit-making financial returns algorithm is ‘king’ until it fails. When it fails, we have a tough time finding out why! Corrective action will be by trial and error.
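As one concrete rendering of the ‘withhold the target from selection’ idea, here is a minimal partialling-out sketch in the spirit of DML, again assuming numpy and scikit-learn; the data-generating process, the choice of LassoCV for the nuisance regressions, and the cross-fitting folds are illustrative assumptions, not the book's recipe.

```python
# Minimal partialling-out sketch in the spirit of DML (illustrative, not the
# book's recipe): the target regressor d is withheld from selection; LASSO is
# used only for the high-dimensional nuisance regressions E[y|X] and E[d|X],
# with cross-fitting supplied by out-of-fold predictions.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n, p = 1000, 50
X = rng.normal(size=(n, p))                   # high-dimensional controls
d = X[:, 0] + rng.normal(size=n)              # treatment depends on controls
y = 0.5 * d + X[:, 0] + rng.normal(size=n)    # true effect of d is 0.5

# Out-of-fold (cross-fitted) nuisance predictions
y_hat = cross_val_predict(LassoCV(cv=5), X, y, cv=5)
d_hat = cross_val_predict(LassoCV(cv=5), X, d, cv=5)

# Regress the residualized outcome on the residualized treatment
y_res, d_res = y - y_hat, d - d_hat
theta = (d_res @ y_res) / (d_res @ d_res)
print("Partialling-out estimate of the effect of d:", round(theta, 3))
```

Because d never enters the penalized steps, it cannot be ‘selected away’, and the residual-on-residual regression recovers the target partial effect even though LASSO handles the fifty controls.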
That is a hard pill to swallow when economic policy decisions take so long to take effect, if they take effect at all.

1 Linear Econometric Models with Machine Learning . . . . . . . . . . . . . . 1
Felix Chan and László Mátyás
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Shrinkage Estimators and Regularizers . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.1 𝐿𝛾 norm, Bridge, LASSO and Ridge . . . . . . . . . . . . . . . . . 6
1.2.2 Elastic Net and SCAD. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.3 Adaptive LASSO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.2.4 Group LASSO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.3 Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.3.1 Computation and Least Angle Regression . . . . . . . . . . . . . 13
1.3.2 Cross Validation and Tuning Parameters . . . . . . . . . . . . . . 14
1.4 Asymptotic Properties of Shrinkage Estimators . . . . . . . . . . . . . . . . 15
1.4.1 Oracle Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.4.2 Asymptotic Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.4.3 Partially Penalized (Regularized) Estimator . . . . . . . . . . . . 20
1.5 Monte Carlo Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.5.1 Inference on Unpenalized Parameters . . . . . . . . . . . . . . . . . 23
1.5.2 Variable Transformations and Selection Consistency . . . . 25
1.6 Econometrics Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.6.1 Distributed Lag Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.6.2 Panel Data Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.6.3 Structural Breaks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.7 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Proof of Proposition 1.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2 Nonlinear Econometric Models with Machine Learning . . . . . . . . . . . 41
Felix Chan, Mark N. Harris, Ranjodh B. Singh and Wei (Ben) Ern Yeo
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.2 Regularization for Nonlinear Econometric Models. . . . . . . . . . . . . . 43
2.2.1 Regularization with Nonlinear Least Squares . . . . . . . . . . 44
2.2.2 Regularization with Likelihood Function . . . . . . . . . . . . . . 46
Continuous Response Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Discrete Response Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
2.2.3 Estimation, Tuning Parameter and Asymptotic Properties 50
Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Tuning Parameter and Cross-Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Asymptotic Properties and Statistical Inference . . . . . . . . . . . . . . . . . . . . . . 52
2.2.4 Monte Carlo Experiments – Binary Model with Shrinkage 56
2.2.5 Applications to Econometrics . . . . . . . . . . . . . . . . . . . . . . . 61
2.3 Overview of Tree-based Methods - Classification Trees and
Random Forest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
2.3.1 Conceptual Example of a Tree . . . . . . . . . . . . . . . . . . . . . . . 66
2.3.2 Bagging and Random Forests . . . . . . . . . . . . . . . . . . . . . . . 68
2.3.3 Applications and Connections to Econometrics. . . . . . . . . 70
Inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
2.4 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Proof of Proposition 2.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Proof of Proposition 2.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3 The Use of Machine Learning in Treatment Effect Estimation . . . . . . 79
Robert P. Lieli, Yu-Chin Hsu and Ágoston Reguly
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.2 The Role of Machine Learning in Treatment Effect Estimation: a
Selection-on-Observables Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.3 Using Machine Learning to Estimate Average Treatment Effects . . 84
3.3.1 Direct versus Double Machine Learning . . . . . . . . . . . . . . 84
3.3.2 Why Does Double Machine Learning Work and Direct
Machine Learning Does Not? . . . . . . . . . . . . . . . . . . . . . . . 87
3.3.3 DML in a Method of Moments Framework . . . . . . . . . . . . 89
3.3.4 Extensions and Recent Developments in DML . . . . . . . . . 90
3.4 Using Machine Learning to Discover Treatment Effect Heterogeneity 92
3.4.1 The Problem of Estimating the CATE Function . . . . . . . . 92
3.4.2 The Causal Tree Approach . . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.4.3 Extensions and Technical Variations on the Causal Tree
Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
3.4.4 The Dimension Reduction Approach . . . . . . . . . . . . . . . . . 99
3.5 Empirical Illustration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
3.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4 Forecasting with Machine Learning Methods. . . . . . . . . . . . . . . . . . . . 111
Marcelo C. Medeiros
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
4.1.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
4.1.2 Organization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
4.2 Modeling Framework and Forecast Construction . . . . . . . . . . . . . . . 113
4.2.1 Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
4.2.2 Forecasting Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
4.2.3 Backtesting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.2.4 Model Choice and Estimation . . . . . . . . . . . . . . . . . . . . . . . 117
4.3 Forecast Evaluation and Model Comparison . . . . . . . . . . . . . . . . . . . 120
4.3.1 The Diebold-Mariano Test . . . . . . . . . . . . . . . . . . . . . . . . . . 121
4.3.2 Li-Liao-Quaedvlieg Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
4.3.3 Model Confidence Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
4.4 Linear Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
4.4.1 Factor Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
4.4.2 Bridging Sparse and Dense Models . . . . . . . . . . . . . . . . . . 127
4.4.3 Ensemble Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
4.5 Nonlinear Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
4.5.1 Feedforward Neural Networks . . . . . . . . . . . . . . . . . . . . . . . 131
4.5.2 Long Short Term Memory Networks . . . . . . . . . . . . . . . . . 136
4.5.3 Convolutional Neural Networks . . . . . . . . . . . . . . . . . . . . 139
4.5.4 Autoencoders: Nonlinear Factor Regression . . . . . . . . . . 145
4.5.5 Hybrid Models. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
4.6 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
5 Causal Estimation of Treatment Effects From Observational Health
Care Data Using Machine Learning Methods . . . . . . . . . . . . . . . . . . . 151
William Crown
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
5.2 Naïve Estimation of Causal Effects in Outcomes Models with
Binary Treatment Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
5.3 Is Machine Learning Compatible with Causal Inference? . . . . . . . . 154
5.4 The Potential Outcomes Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
5.5 Modeling the Treatment Exposure Mechanism–Propensity Score
Matching and Inverse Probability Treatment Weights . . . . . . . . . . . 157
5.6 Modeling Outcomes and Exposures: Doubly Robust Methods . . . . 158
5.7 Targeted Maximum Likelihood Estimation (TMLE) for Causal
Inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
5.8 Empirical Applications of TMLE in Health Outcomes Studies . . . . 163
5.8.1 Use of Machine Learning to Estimate TMLE Models. . . . 163
5.9 Extending TMLE to Incorporate Instrumental Variables . . . . . . . . . 164
5.10 Some Practical Considerations on the Use of IVs . . . . . . . . . . . . . . . 165
5.11 Alternative Definitions of Treatment Effects . . . . . . . . . . . . . . . . . . . 166
5.12 A Final Word on the Importance of Study Design in Mitigating Bias 168
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
6 Econometrics of Networks with Machine Learning . . . . . . . . . . . . . . . 177
Oliver Kiss and Gyorgy Ruzicska
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
6.2 Structure, Representation, and Characteristics of Networks. . . . . . . 179
6.3 The Challenges of Working with Network Data . . . . . . . . . . . . . . . . 182
6.4 Graph Dimensionality Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
6.4.1 Types of Embeddings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
6.4.2 Algorithmic Foundations of Embeddings . . . . . . . . . . . . . . 187
6.5 Sampling Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
6.5.1 Node Sampling Approaches . . . . . . . . . . . . . . . . . . . . . . . . . 190
6.5.2 Edge Sampling Approaches . . . . . . . . . . . . . . . . . . . . . . . . . 191
6.5.3 Traversal-Based Sampling Approaches. . . . . . . . . . . . . . . . 192
6.6 Applications of Machine Learning in the Econometrics of Networks 196
6.6.1 Applications of Machine Learning in Spatial Models . . . . 196
6.6.2 Gravity Models for Flow Prediction . . . . . . . . . . . . . . . . . . 203
6.6.3 The Geographically Weighted Regression Model and ML 205
6.7 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
7 Fairness in Machine Learning and Econometrics. . . . . . . . . . . . . . . . . 217
Samuele Centorrino, Jean-Pierre Florens and Jean-Michel Loubes
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
7.2 Examples in Econometrics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
7.2.1 Linear IV Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
7.2.2 A Nonlinear IV Model with Binary Sensitive Attribute . . 223
7.2.3 Fairness and Structural Econometrics . . . . . . . . . . . . . . . . . 223
7.3 Fairness for Inverse Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
7.4 Full Fairness IV Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
7.4.1 Projection onto Fairness . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
7.4.2 Fair Solution of the Structural IV Equation . . . . . . . . . . . . 230
7.4.3 Approximate Fairness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
7.5 Estimation with an Exogenous Binary Sensitive Attribute . . . . . . . . 240
7.6 An Illustration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
7.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
8 Graphical Models and their Interactions with Machine Learning in
the Context of Economics and Finance . . . . . . . . . . . . . . . . . . . . . . . . . 251
Ekaterina Seregina
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
8.1.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
8.2 Graphical Models: Methodology and Existing Approaches . . . . . . . 253
8.2.1 Graphical LASSO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
8.2.2 Nodewise Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
8.2.3 CLIME . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
8.2.4 Solution Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
8.3 Graphical Models in the Context of Finance . . . . . . . . . . . . . . . . . . . 262
8.3.1 The No-Short-Sale Constraint and Shrinkage . . . . . . . . . . 267
8.3.2 The 𝐴-Norm Constraint and Shrinkage . . . . . . . . . . . . . . . . 270
8.3.3 Classical Graphical Models for Finance . . . . . . . . . . . . . . . 272
8.3.4 Augmented Graphical Models for Finance Applications . 273
8.4 Graphical Models in the Context of Economics . . . . . . . . . . . . . . . . 278
8.4.1 Forecast Combinations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
8.4.2 Vector Autoregressive Models . . . . . . . . . . . . . . . . . . . . . . . 280
8.5 Further Integration of Graphical Models with Machine Learning . . 283
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
9 Poverty, Inequality and Development Studies with Machine Learning 291
Walter Sosa-Escudero, Maria Victoria Anauati and Wendy Brau
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
9.2 Measurement and Forecasting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
9.2.1 Combining Sources to Improve Data Availability . . . . . . . 294
9.2.2 More Granular Measurements . . . . . . . . . . . . . . . . . . . . . . . 298
9.2.3 Dimensionality Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . 304
9.2.4 Data Imputation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
9.2.5 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
9.3 Causal Inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
9.3.1 Heterogeneous Treatment Effects . . . . . . . . . . . . . . . . . . . . 307
9.3.2 Optimal Treatment Assignment . . . . . . . . . . . . . . . . . . . . . . 312
9.3.3 Handling High-Dimensional Data and Debiased ML . . . . 313
9.3.4 Machine-Building Counterfactuals . . . . . . . . . . . . . . . . . . . 315
9.3.5 New Data Sources for Outcomes and Treatments . . . . . . . 316
9.3.6 Combining Observational and Experimental Data . . . . . . 319
9.4 Computing Power and Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
9.5 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
10 Machine Learning for Asset Pricing . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
Jantje Sönksen
10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
10.2 How Machine Learning Techniques Can Help Identify Stochastic
Discount Factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
10.3 How Machine Learning Techniques Can Test/Evaluate Asset
Pricing Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
10.4 How Machine Learning Techniques Can Estimate Linear Factor
Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
10.4.1 Gagliardini, Ossola, and Scaillet’s (2016) Econometric
Two-Pass Approach for Assessing Linear Factor Models . 349

Keywords: econometrics, econometric learning, machine learning
