Documents 91G60 | records found: 3


Optimal vector quantization was originally introduced in signal processing as a discretization method for random signals, leading to an optimal trade-off between the speed of transmission and the quality of the transmitted signal. In machine learning, similar methods applied to a dataset are the historical core of the unsupervised classification methods known as clustering. In both cases it appears as an optimal way to produce a set of weighted prototypes (or codebook) which makes up a kind of skeleton of a dataset, of a signal and, more generally, from a mathematical point of view, of a probability distribution.
Quantization has encountered in recent years a renewed interest in various application fields like automatic classification, learning algorithms, optimal stopping and stochastic control, backward SDEs and, more generally, numerical probability. In all these applications, practical implementations of such clustering/quantization methods rely, up to countless variants, on two procedures: Competitive Learning Vector Quantization (CLVQ), which appears as a stochastic gradient descent derived from the so-called distortion potential, and the (randomized) Lloyd procedure (also known as the k-means algorithm, or nuées dynamiques), which is but a fixed-point search procedure. Batch versions of these procedures can also be implemented when dealing with a dataset (or, more generally, a discrete distribution).
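As a concrete illustration of these two procedures, here is a minimal numpy sketch on a toy 2D Gaussian; the learning-rate schedule, sample sizes and the switch from a CLVQ phase to a Lloyd refinement phase are illustrative choices of ours, not prescriptions from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def clvq_step(x, xi, gamma):
    """One CLVQ step: move the prototype nearest to the sample xi
    toward xi with rate gamma (a stochastic gradient step on the
    distortion potential)."""
    i = np.argmin(np.sum((x - xi) ** 2, axis=1))
    x[i] += gamma * (xi - x[i])
    return x

def lloyd_step(x, samples):
    """One randomized Lloyd step: assign samples to their nearest
    prototype (Voronoi cell), then replace each prototype by the mean
    of its cell -- the fixed-point map of the procedure."""
    d2 = ((samples[:, None, :] - x[None, :, :]) ** 2).sum(axis=2)
    cell = np.argmin(d2, axis=1)
    for i in range(len(x)):
        if np.any(cell == i):
            x[i] = samples[cell == i].mean(axis=0)
    return x

# Toy run: quantize the standard 2D Gaussian at level N = 10.
N, d = 10, 2
x = rng.standard_normal((N, d))
for k in range(10_000):                       # CLVQ phase
    x = clvq_step(x, rng.standard_normal(d), gamma=1.0 / (100.0 + k))
for _ in range(20):                           # Lloyd refinement phase
    x = lloyd_step(x, rng.standard_normal((100_000, d)))
```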
More formally, if $\mu$ is a probability distribution on the Euclidean space $\mathbb{R}^d$, the optimal quantization problem at level $N$ boils down to exhibiting an $N$-tuple $(x_{1}^{*}, \dots, x_{N}^{*})$, solution to

$\operatorname{argmin}_{(x_1,\dots,x_N)\in(\mathbb{R}^d)^N} \int_{\mathbb{R}^d} \min_{1\le i\le N} |x_i-\xi|^2 \,\mu(d\xi)$

and its distribution, i.e. the weights $(\mu(C(x_{i}^{*})))_{1\le i\le N}$, where $(C(x_{i}^{*}))_{1\le i\le N}$ is a (Borel) partition of $\mathbb{R}^d$ satisfying

$C(x_{i}^{*})\subset \lbrace \xi\in\mathbb{R}^d : |x_{i}^{*}-\xi| \le \min_{1\le j\le N} |x_{j}^{*}-\xi| \rbrace$.

To produce an unsupervised classification (or clustering) of a (large) dataset $(\xi_k)_{1\le k\le n}$, one considers its empirical measure

$\mu=\frac{1}{n}\sum_{k=1}^{n}\delta_{\xi_k}$

whereas in numerical probability $\mu = \mathcal{L}(X)$, where $X$ is an $\mathbb{R}^d$-valued simulatable random vector. In both situations, the CLVQ and Lloyd procedures rely on massive sampling of the distribution $\mu$.
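To make this sampling step concrete, here is a hedged numpy sketch that, given a grid $x=(x_1,\dots,x_N)$ and a sampler for $\mu$, estimates the Voronoi weights $\mu(C(x_i))$ and the distortion by plain Monte Carlo; the function name and signature are ours.

```python
import numpy as np

def cell_weights_and_distortion(x, sampler, m=100_000, rng=None):
    """Monte Carlo estimates of the Voronoi weights mu(C(x_i)) and of
    the distortion of a grid x of shape (N, d), from m samples of mu."""
    rng = rng or np.random.default_rng()
    xi = sampler(m, rng)                                    # (m, d) samples of mu
    d2 = ((xi[:, None, :] - x[None, :, :]) ** 2).sum(axis=2)
    nearest = np.argmin(d2, axis=1)                         # Voronoi cell indices
    weights = np.bincount(nearest, minlength=len(x)) / m    # mu(C(x_i)) estimates
    distortion = d2[np.arange(m), nearest].mean()           # quantization criterion
    return weights, distortion

# e.g. for the standard 2D Gaussian:
# w, D = cell_weights_and_distortion(x, lambda m, rng: rng.standard_normal((m, 2)))
```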
As for clustering, the classification into $N$ clusters is produced by the partition of the dataset induced by the Voronoi cells $C(x_{i}^{*})$, $i = 1, \dots, N$, of the optimal quantizer.
In this second case, which is of interest for solving nonlinear problems like optimal stopping problems (variational inequalities in terms of PDEs) or stochastic control problems (HJB equations) in medium dimensions, the idea is to produce a quantization tree optimally fitting the dynamics of (a time discretization of) the underlying structure process.
We will (briefly) explore this vast panorama, with a focus on the algorithmic aspects, where few theoretical results coexist with many heuristics in a burgeoning literature. We will present a few simulations in two dimensions.

62L20 ; 93E25 ; 94A12 ; 91G60 ; 65C05

The valuation of American options (a widespread type of financial contract) requires the numerical solution of an optimal stopping problem. Numerical methods for such problems have been widely investigated. Monte Carlo methods are based on the implementation of dynamic programming principles coupled with regression techniques. In low dimension, one can choose to tackle the related free-boundary PDE with deterministic schemes.
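To fix ideas about the dynamic-programming-plus-regression approach mentioned above, here is a minimal least-squares Monte Carlo (Longstaff-Schwartz) sketch under an assumed Black-Scholes model; the degree-3 polynomial basis, path counts and parameters are illustrative choices, not material from the talk.

```python
import numpy as np

def american_put_ls(s0, K, r, sigma, T, n_steps=50, n_paths=100_000, seed=0):
    """Least-squares Monte Carlo price of an American put: simulate
    paths, then apply the dynamic programming principle backward,
    regressing continuation values on in-the-money paths."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))             # Black-Scholes paths
    log_s = np.log(s0) + np.cumsum((r - 0.5 * sigma**2) * dt
                                   + sigma * np.sqrt(dt) * z, axis=1)
    s = np.exp(log_s)
    payoff = np.maximum(K - s[:, -1], 0.0)                  # value at maturity
    for t in range(n_steps - 2, -1, -1):
        payoff *= np.exp(-r * dt)                           # discount one step
        itm = K - s[:, t] > 0                               # in-the-money paths
        if itm.sum() > 3:
            coef = np.polyfit(s[itm, t], payoff[itm], 3)    # regression step
            cont = np.polyval(coef, s[itm, t])              # continuation value
            exercise = (K - s[itm, t]) > cont
            payoff[itm] = np.where(exercise, K - s[itm, t], payoff[itm])
    return np.exp(-r * dt) * payoff.mean()

# e.g. american_put_ls(s0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0)
```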
Pricing American options will therefore inevitably be heavier than pricing European options, which only requires the computation of a (linear) expectation. The calibration (fitting) of a stochastic model to market quotes for American options is therefore an a priori demanding task. Yet, often this cannot be avoided: on exchange markets one is typically provided only with market quotes for American options on single stocks (as opposed to large stock indexes - e.g. the S&P 500 - for which large amounts of liquid European options are typically available).
In this talk, we show how one can derive (approximate, but accurate enough) explicit formulas - therefore replacing other numerical methods, at least in a low-dimensional case - based on asymptotic calculus for diffusions.
More precisely, based on a suitable representation of the PDE free boundary, we derive an approximation of this boundary close to final time that refines the expansions known so far in the literature. Via the early exercise premium formula, this allows us to derive semi-closed expressions for the price of the American put/call. The final product is a recipe for calibrating a Dupire local volatility to American option data.
Based on joint work with Pierre Henry-Labordère.

93E20 ; 91G60

Multi angle  Cubature methods and applications
Crisan, Dan (Conference Author) | CIRM (Publisher)

The talk will have two parts. In the first part, I will go over some of the basic features of cubature methods for approximating solutions of classical SDEs and how they can be adapted to solve backward SDEs. In the second part, I will introduce some recent results on the use of cubature methods for approximating solutions of McKean-Vlasov SDEs.
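As a rough illustration of the first part, here is a toy sketch of the simplest degree-3 cubature formula in dimension one; the branching construction is standard, but the step counts, the explicit Euler ODE solver and the geometric Brownian motion example are our assumptions, not material from the talk.

```python
import numpy as np

def cubature3_tree(x0, b_strat, sigma, T, n_steps=8, ode_substeps=50):
    """Degree-3 cubature on Wiener space for a 1D SDE in Stratonovich
    form dX = b_strat(X) dt + sigma(X) o dW: on each step of length dt,
    the Brownian path is replaced by the two linear paths of increment
    +sqrt(dt) and -sqrt(dt), each of weight 1/2, so the SDE becomes an
    ODE along each path (solved here by explicit Euler). The tree has
    2**n_steps equally weighted leaves."""
    dt = T / n_steps
    x = np.array([float(x0)])
    for _ in range(n_steps):
        x = np.concatenate([x, x])                          # branch the tree
        eps = np.concatenate([np.ones(len(x) // 2), -np.ones(len(x) // 2)])
        h = dt / ode_substeps
        for _ in range(ode_substeps):                       # ODE along each path
            x = x + h * (b_strat(x) + eps * sigma(x) / np.sqrt(dt))
    return x                                                # E[f(X_T)] ~ f(x).mean()

# Example: geometric Brownian motion dX = mu*X dt + s*X dW (Ito), whose
# Stratonovich drift is (mu - s**2/2)*x; compare leaves.mean() with exp(mu).
mu, s = 0.05, 0.2
leaves = cubature3_tree(1.0, lambda x: (mu - 0.5 * s**2) * x,
                        lambda x: s * x, T=1.0)
```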

65C30 ; 60H10 ; 34F05 ; 60H35 ; 91G60
