English Title:
A proof that artificial neural networks overcome the curse of
dimensionality in the numerical approximation of Black-Scholes partial
differential equations
---
Authors:
Philipp Grohs, Fabian Hornung, Arnulf Jentzen, Philippe von
Wurstemberger
---
Latest Submission Year:
2018
---
Abstract:
Artificial neural networks (ANNs) have been used very successfully in numerical simulations for a series of computational problems ranging from image classification/image recognition, speech recognition, time series analysis, game intelligence, and computational advertising to numerical approximations of partial differential equations (PDEs). Such numerical simulations suggest that ANNs have the capacity to approximate high-dimensional functions very efficiently and, in particular, they indicate that ANNs seem to possess the fundamental power to overcome the curse of dimensionality when approximating the high-dimensional functions appearing in the above-named computational problems. There is also a series of rigorous mathematical approximation results for ANNs in the scientific literature. Some of these mathematical results prove convergence without convergence rates, and some even rigorously establish convergence rates, but there are only a few special cases in which mathematical results can rigorously explain the empirical success of ANNs when approximating high-dimensional functions. The key contribution of this article is to disclose that ANNs can efficiently approximate high-dimensional functions in the case of numerical approximations of Black-Scholes PDEs. More precisely, this work reveals that the number of parameters required for an ANN to approximate the solution of the Black-Scholes PDE grows at most polynomially in both the reciprocal of the prescribed approximation accuracy $\varepsilon > 0$ and the PDE dimension $d \in \mathbb{N}$, and we thereby prove, for the first time, that ANNs do indeed overcome the curse of dimensionality in the numerical approximation of Black-Scholes PDEs.
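The complexity statement at the heart of the abstract can be sketched schematically as follows; the constants $c, p, q$ and the parameter-count notation $\mathcal{P}(\Phi_{d,\varepsilon})$ are illustrative placeholders, not the paper's exact notation:

```latex
% Schematic form of the main result: the number of ANN parameters
% needed to reach accuracy \varepsilon in dimension d is bounded
% polynomially in d and 1/\varepsilon.
% c, p, q > 0 are constants independent of d and \varepsilon;
% \mathcal{P}(\Phi_{d,\varepsilon}) denotes the number of parameters
% of the approximating network (placeholder notation).
\exists\, c, p, q > 0 \;\; \forall\, d \in \mathbb{N},\;
\varepsilon \in (0,1] : \quad
\mathcal{P}(\Phi_{d,\varepsilon}) \;\le\; c \, d^{\,p} \, \varepsilon^{-q}
```

A bound of this polynomial form is precisely what "overcoming the curse of dimensionality" means here: classical grid-based methods instead require a number of degrees of freedom growing exponentially in $d$.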
---
Classification:
Primary category: Mathematics
Secondary category: Numerical Analysis
Category description: Numerical algorithms for problems in analysis and algebra, scientific computation
--
Primary category: Computer Science
Secondary category: Machine Learning
Category description: Papers on all aspects of machine learning research (supervised, unsupervised, reinforcement learning, bandit problems, and so on), including robustness, explanation, fairness, and methodology. cs.LG is also an appropriate primary category for applications of machine learning methods.
--
Primary category: Mathematics
Secondary category: Probability
Category description: Theory and applications of probability and stochastic processes: e.g. central limit theorems, large deviations, stochastic differential equations, models from statistical mechanics, queuing theory
--
Primary category: Quantitative Finance
Secondary category: Mathematical Finance
Category description: Mathematical and analytical methods of finance, including stochastic, probabilistic and functional analysis, algebraic, geometric and other methods
--
---
PDF Download: