---
Title:
《Neural Networks and Value at Risk》
---
Authors:
Alexander Arimond, Damian Borth, Andreas Hoepner, Michael Klawunn and
Stefan Weisheit
---
Latest submission year:
2020
---
Categories:
Primary category: Quantitative Finance
Secondary category: Risk Management
Category description: Measurement and management of financial risks in trading, banking, insurance, corporate and other applications.
--
Primary category: Computer Science
Secondary category: Machine Learning
Category description: Papers on all aspects of machine learning research (supervised, unsupervised, reinforcement learning, bandit problems, and so on), including robustness, explanation, fairness, and methodology. cs.LG is also an appropriate primary category for applications of machine learning methods.
--
Primary category: Economics
Secondary category: Econometrics
Category description: Econometric theory, micro-econometrics, macro-econometrics, empirical content of economic relations discovered via new methods, methodological aspects of the application of statistical inference to economic data.
---
Abstract:
Utilizing a generative regime switching framework, we perform Monte Carlo simulations of asset returns for Value at Risk (VaR) threshold estimation. Using equity markets and long term bonds as test assets in the global, US, Euro area and UK setting, over a sample horizon of up to 1,250 weeks ending in August 2018, we investigate neural networks along three design steps relating to (i) the initialization of the neural network, (ii) the incentive function according to which it is trained and (iii) the amount of data we feed it. First, we compare randomly seeded neural networks with networks that are initialized via estimations from the best-established model (i.e. the Hidden Markov model). We find the latter to outperform in terms of the frequency of VaR breaches (i.e. the realized return falling short of the estimated VaR threshold). Second, we balance the incentive structure of the loss function of our networks by adding a second objective to the training instructions, so that the neural networks optimize for accuracy while also aiming to stay in empirically realistic regime distributions (i.e. bull vs. bear market frequencies). In particular, this design feature enables the balanced incentive recurrent neural network (RNN) to outperform the single incentive RNN as well as any other neural network or established approach by statistically and economically significant levels. Third, we halve our training data set of 2,000 days. We find our networks to perform significantly worse when fed with substantially less data (i.e. 1,000 days), which highlights a crucial weakness of neural networks in their dependence on very large data sets ...
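As a rough illustration of the evaluation criterion described in the abstract, the sketch below estimates per-period VaR thresholds as a quantile of Monte Carlo simulated returns and counts VaR breaches (realized returns falling short of the threshold). The function names, the 5% level, and the synthetic Gaussian data are illustrative assumptions only, not the authors' implementation.

```python
import numpy as np

def mc_var_threshold(simulated_returns, alpha=0.05):
    """VaR threshold as the alpha-quantile of Monte Carlo simulated
    returns, computed per period along the last (path) axis."""
    return np.quantile(simulated_returns, alpha, axis=-1)

def breach_frequency(realized_returns, var_thresholds):
    """Fraction of periods in which the realized return falls short of
    the estimated VaR threshold (a 'VaR breach')."""
    realized = np.asarray(realized_returns)
    thresholds = np.asarray(var_thresholds)
    return float(np.mean(realized < thresholds))

# Synthetic example (not the paper's data): 1,250 weekly periods,
# 10,000 simulated paths per period.
rng = np.random.default_rng(0)
weeks, paths = 1_250, 10_000
sims = rng.normal(0.0, 0.02, size=(weeks, paths))   # simulated weekly returns
thresholds = mc_var_threshold(sims)                 # per-week 5% VaR thresholds
realized = rng.normal(0.0, 0.02, size=weeks)        # synthetic realized returns
freq = breach_frequency(realized, thresholds)
print(f"VaR breach frequency: {freq:.3f}")  # close to 0.05 when well calibrated
```

A well-calibrated 5% VaR model should produce a breach frequency near 0.05; the paper compares models on how close their realized breach frequency comes to this nominal level.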
---
PDF link:
https://arxiv.org/pdf/2005.01686

