OP: SleepyTom

[Academic Resources] Books for Stochastic Control and Financial Application


OP
SleepyTom posted on 2015-10-4 04:08:10

Stochastic Controls: Hamiltonian Systems and HJB Equations
Authors: Yong, Jiongmin; Zhou, Xun Yu

As is well known, Pontryagin's maximum principle and Bellman's dynamic programming are the two principal and most commonly used approaches to solving stochastic optimal control problems. An interesting phenomenon one can observe from the literature is that these two approaches have been developed separately and independently. Since both methods are used to investigate the same problems, a natural question arises: (Q) What is the relationship between the maximum principle and dynamic programming in stochastic optimal control? There did exist some research (prior to the 1980s) on the relationship between the two. Nevertheless, the results were usually stated in heuristic terms and proved under rather restrictive assumptions that were not satisfied in most cases. In the statement of a Pontryagin-type maximum principle there is an adjoint equation, which is an ordinary differential equation (ODE) in the (finite-dimensional) deterministic case and a stochastic differential equation (SDE) in the stochastic case. The system consisting of the adjoint equation, the original state equation, and the maximum condition is referred to as an (extended) Hamiltonian system. On the other hand, in Bellman's dynamic programming there is a partial differential equation (PDE), of first order in the (finite-dimensional) deterministic case and of second order in the stochastic case. This is known as the Hamilton-Jacobi-Bellman (HJB) equation.
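For orientation, here is the dynamic-programming side of that comparison in one common form, for minimizing J(u) = E[∫₀ᵀ f(t, X_t, u_t) dt + h(X_T)] subject to dX_t = b dt + σ dW_t. This is a sketch in one sign convention; notation and signs vary across references, including Yong and Zhou's book.

```latex
% Dynamic programming: the value function
%   V(t,x) = \inf_u \mathbb{E}\big[\textstyle\int_t^T f(s,X_s,u_s)\,ds + h(X_T)\,\big|\,X_t=x\big]
% solves the terminal-value HJB problem (second order because of the diffusion term):
\begin{aligned}
  -V_t(t,x) &= \inf_{u \in U}\Big\{\tfrac12\,\mathrm{tr}\!\big(\sigma\sigma^{\top}(t,x,u)\,V_{xx}(t,x)\big)
               + b(t,x,u)^{\top} V_x(t,x) + f(t,x,u)\Big\}, \\
  V(T,x) &= h(x).
\end{aligned}
% Setting \sigma \equiv 0 recovers the first-order equation of the deterministic case.
```

The answer to question (Q) developed in the book is, roughly, that under smoothness assumptions the first-order adjoint process p of the maximum principle coincides (up to a sign convention) with the gradient V_x of the value function evaluated along the optimal trajectory.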




Stochastic Control Theory: Dynamic Programming Principle, 2nd Edition
Author: Nisio, Makiko

This book offers a systematic introduction to the optimal stochastic control theory via the dynamic programming principle, which is a powerful tool to analyze control problems.

First we consider completely observable control problems with finite horizons. Using a time discretization we construct a nonlinear semigroup related to the dynamic programming principle (DPP), whose generator provides the Hamilton–Jacobi–Bellman (HJB) equation, and we characterize the value function via the nonlinear semigroup, in addition to the viscosity solution theory. When we control not only the dynamics of a system but also the terminal time of its evolution, control-stopping problems arise. These are treated in the same framework, via the nonlinear semigroup, and the results are applicable to the American option pricing problem.
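The control-stopping/American option connection can be made concrete with a minimal sketch: backward dynamic programming on a Cox–Ross–Rubinstein binomial tree, where the value at each node is the maximum of immediate exercise and the discounted expected continuation value, a discrete form of the DPP. The function name and parameter values below are illustrative assumptions, not taken from the book.

```python
import math

def american_put_crr(S0, K, r, sigma, T, N):
    """Price an American put by backward dynamic programming on a CRR binomial tree.

    At each node: value = max(immediate exercise, discounted expected continuation),
    a discrete analogue of the DPP for control-stopping problems.
    """
    dt = T / N
    u = math.exp(sigma * math.sqrt(dt))      # up factor
    d = 1.0 / u                              # down factor
    disc = math.exp(-r * dt)                 # one-step discount
    p = (math.exp(r * dt) - d) / (u - d)     # risk-neutral up probability
    # Terminal payoffs: j = number of up moves out of N steps.
    V = [max(K - S0 * u**j * d**(N - j), 0.0) for j in range(N + 1)]
    # Backward induction through the tree.
    for n in range(N - 1, -1, -1):
        for j in range(n + 1):
            cont = disc * (p * V[j + 1] + (1 - p) * V[j])
            exercise = max(K - S0 * u**j * d**(n - j), 0.0)
            V[j] = max(exercise, cont)
    return V[0]

price = american_put_crr(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, N=500)
```

Refining N plays the role of the time discretization used to build the nonlinear semigroup in the text: the discrete values converge to the continuous-time American put price.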

Zero-sum two-player time-homogeneous stochastic differential games and viscosity solutions of the Isaacs equations arising from such games are studied via a nonlinear semigroup related to DPP (the min-max principle, to be precise). Using semi-discretization arguments, we construct the nonlinear semigroups whose generators provide lower and upper Isaacs equations.

Concerning partially observable control problems, we consider stochastic parabolic equations driven by colored Wiener noises, in particular the Zakai equation. The existence and uniqueness of solutions, their regularity, and Itô's formula are established. A control problem for the Zakai equation leads to a nonlinear semigroup whose generator provides the HJB equation on a Banach space. The value function turns out to be the unique viscosity solution of this HJB equation under mild conditions.

This edition provides a more generalized treatment of the topic than does the earlier book Lectures on Stochastic Control Theory (ISI Lecture Notes 9), where time-homogeneous cases are dealt with. Here, for finite time-horizon control problems, DPP was formulated as a one-parameter nonlinear semigroup, whose generator provides the HJB equation, by using a time-discretization method. The semigroup corresponds to the value function and is characterized as the envelope of Markovian transition semigroups of responses for constant control processes. Besides finite time-horizon controls, the book discusses control-stopping problems in the same frameworks.




Controlled Diffusion Processes

Author: Krylov, N. V.

This book deals with the optimal control of solutions of fully observable Itô-type stochastic differential equations. The validity of the Bellman differential equation for payoff functions is proved and rules for optimal control strategies are developed.

Topics include optimal stopping; one dimensional controlled diffusion; the Lp-estimates of stochastic integral distributions; the existence theorem for stochastic equations; the Itô formula for functions; and the Bellman principle, equation, and normalized equation.




Controlled Markov Processes and Viscosity Solutions, 2nd Edition

Authors: Fleming, Wendell H.; Soner, Halil Mete

This book is intended as an introduction to optimal stochastic control for continuous-time Markov processes and to the theory of viscosity solutions. The authors approach stochastic control problems by the method of dynamic programming. The fundamental equation of dynamic programming is a nonlinear evolution equation for the value function. For controlled Markov diffusion processes, this becomes a nonlinear partial differential equation of second order, called a Hamilton-Jacobi-Bellman (HJB) equation. Typically, the value function is not smooth enough to satisfy the HJB equation in a classical sense. Viscosity solutions provide a framework in which to study HJB equations and to prove continuous dependence of solutions on problem data. The theory is illustrated by applications from engineering, management science, and financial economics.
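To see what a nonlinear second-order PDE for the value function looks like numerically, here is a sketch of an explicit monotone finite-difference scheme for a toy HJB equation with bounded drift control. The problem, grid, and parameter values are illustrative assumptions, not taken from the book; under standard consistency, monotonicity, and stability conditions such schemes converge to the viscosity solution.

```python
import numpy as np

# Explicit monotone finite-difference scheme for a toy HJB equation
#   -V_t = min_{|u|<=1} u * V_x + 0.5*sigma^2 * V_xx + x^2,   V(T, x) = x^2,
# i.e. a controlled diffusion dX = u dt + sigma dW with running cost x^2.
# (Hypothetical example problem; all parameters are illustrative.)
sigma, T = 0.5, 1.0
x = np.linspace(-2.0, 2.0, 201)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / sigma**2          # time step satisfying the CFL/monotonicity bound
steps = int(np.ceil(T / dt))
dt = T / steps

V = x**2                             # terminal condition V(T, x) = x^2
for _ in range(steps):               # march backward from t = T to t = 0
    Vx_f = np.empty_like(V)
    Vx_b = np.empty_like(V)
    Vx_f[:-1] = (V[1:] - V[:-1]) / dx    # forward difference (used when u > 0)
    Vx_f[-1] = Vx_f[-2]
    Vx_b[1:] = (V[1:] - V[:-1]) / dx     # backward difference (used when u < 0)
    Vx_b[0] = Vx_b[1]
    Vxx = np.zeros_like(V)
    Vxx[1:-1] = (V[2:] - 2.0 * V[1:-1] + V[:-2]) / dx**2
    # min over u in [-1, 1] of u*V_x is attained at u in {-1, 0, +1}; using
    # upwind differences for each candidate keeps the scheme monotone
    # (candidates: u=+1 -> Vx_f, u=-1 -> -Vx_b, u=0 -> 0).
    ham = np.minimum(np.minimum(Vx_f, -Vx_b), 0.0)
    V = V + dt * (ham + 0.5 * sigma**2 * Vxx + x**2)

value_at_zero = V[100]               # approximate V(0, x=0)
```

The computed value function is symmetric in x, and the value at the origin lies below the uncontrolled cost, consistent with the control being useful.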

In this second edition, new material on applications to mathematical finance has been added. Concise introductions to risk-sensitive control theory, nonlinear H-infinity control and differential games are also included.

Review of the earlier edition:

"This book is highly recommended to anyone who wishes to learn the dynamic programming principle applied to optimal stochastic control for diffusion processes. Without any doubt, this is a fine book and most likely it is going to become a classic in the area... ."

SIAM Review, 1994





Keywords: Application, Stochastic, financial, literature, developed, principal, problems, dynamic

f4058gj40sk2mv.rar
Download link: https://bbs.pinggu.org/a-1887436.html

2.81 MB

Requires: 10 forum coins  [Purchase]

Controlled Markov Processes and Viscosity Solutions

This attachment includes:

  • Controlled Markov Processes and Viscosity Solutions 2nd Edition by Fleming and Soner.pdf

10385ugfj8kf.rar

12.39 MB

Requires: 10 forum coins  [Purchase]

Controlled Diffusion Processes

This attachment includes:

  • Controlled Diffusion Processes by N. V. Krylov.pdf

932pofanl.rar

47.71 MB

Requires: 10 forum coins  [Purchase]

Stochastic Controls Hamiltonian Systems and HJB Equations

This attachment includes:

  • Stochastic Controls Hamiltonian Systems and HJB Equations by J. Yong, X. Y. Zhou.pdf

3epasdpi349.rar

1.67 MB

Requires: 10 forum coins  [Purchase]

Stochastic Control Theory Dynamic Programming Principle

This attachment includes:

  • Stochastic Control Theory Dynamic Programming Principle 2nd Edition by Makiko Nisio.pdf


Reply #1
王之博 (not a verified-trade user) posted on 2015-10-4 23:46:34
One of my professor's books is actually in here, but it has nothing to do with finance.
