Doctoral thesis in Applied Mathematics
Defended in 2006
at Paris 6.
Numerical analysis of stochastic control problems
This thesis studies numerical approximations of various HJB equations associated with stochastic optimal control problems. In the first part, we extend the theory of error estimates to a differential games problem and to the impulse control problem; the latter is also implemented numerically. Throughout this part the control set is bounded. In the second part of the thesis, we study stochastic control problems arising in finance whose control set is unbounded, in particular super-replication problems.
Optimal stochastic control problems have many applications in economics, finance (e.g. the portfolio selection problem in a market with risky and non-risky assets, the investment problem, and the super-replication price in a model with uncertain volatility), and energy management. These are typical situations where we face a dynamical system evolving under uncertainty and where a decision must be taken at every time in order to optimize an economic criterion; the control variable acts on the state of the system. Stochastic control problems are classically handled with Bellman's dynamic programming principle, which characterizes the value function of the optimal control problem as the solution of a partial differential equation, the Hamilton-Jacobi-Bellman (HJB) equation. In most cases, the value function is not smooth enough to satisfy the HJB equation in the classical sense. For this reason the notion of viscosity solution, introduced by Crandall and Lions for the first-order Hamilton-Jacobi equation, was extended to second-order equations by Lions. The theory of viscosity solutions provides an extremely convenient framework for dealing with the lack of smoothness of the value function of the optimal stochastic control problem. In some situations the value function is smooth: this is the case, for example, of the Merton portfolio selection problem, for which a classical solution of the corresponding HJB equation can be computed explicitly. In general, however, the HJB equation cannot be solved explicitly, and it must be analyzed numerically. In particular, a discretization of the HJB equation via Markov chain approximation is considered, and an approximate solution is computed.
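To illustrate the Markov chain approximation mentioned above, here is a minimal sketch (not taken from the thesis) of a Kushner-Dupuis-type scheme for a toy one-dimensional problem: minimize a discounted quadratic cost for the controlled diffusion dX = u dt + σ dW on [0, 1], with a bounded control set. The dynamics, costs, grid, and boundary treatment are illustrative assumptions; the transition probabilities below are the standard upwind approximation, and the value function is computed by value iteration.

```python
import numpy as np

# Illustrative data (not from the thesis): sigma = volatility, rho = discount rate,
# bounded control set u in {-1, 0, 1}, running cost f(x, u) = x^2 + 0.1 u^2.
sigma, rho = 0.3, 1.0
controls = [-1.0, 0.0, 1.0]
h = 0.05
x = np.arange(0.0, 1.0 + h / 2, h)          # uniform grid on [0, 1]
n = len(x)

idx = np.arange(n)
up = np.minimum(idx + 1, n - 1)              # reflecting boundaries
dn = np.maximum(idx - 1, 0)

V = np.zeros(n)
for _ in range(5000):                        # value iteration to a fixed point
    V_new = np.full(n, np.inf)
    for u in controls:
        # Markov chain approximation: one-step transition probabilities and
        # control-dependent interpolation interval dt (upwind in the drift).
        Q = sigma**2 + h * abs(u)
        dt = h**2 / Q
        p_up = (sigma**2 / 2 + h * max(u, 0.0)) / Q
        p_dn = (sigma**2 / 2 + h * max(-u, 0.0)) / Q   # p_up + p_dn = 1
        cost = x**2 + 0.1 * u**2
        Vu = cost * dt + np.exp(-rho * dt) * (p_up * V[up] + p_dn * V[dn])
        V_new = np.minimum(V_new, Vu)        # minimize over the control set
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new
```

Since the discount factor exp(-rho dt) is strictly below one, the iteration is a contraction and converges to the unique discrete solution; the error-estimate theory discussed above bounds its distance to the viscosity solution in terms of h.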
It is then necessary to guarantee that the numerical solution is a good approximation of the viscosity solution, and for this purpose a theory of error estimates has been developed, yielding theoretical bounds on the difference between the viscosity solution and the discrete solution. The thesis is divided into two parts. In the first part we give error estimates for a stochastic differential games problem and for a stochastic impulse control problem. Both problems present particular difficulties, and classical error estimate results cannot be applied directly. The second part is devoted to the numerical implementation of algorithms for two problems: a stochastic impulse control problem and a problem with unbounded control.