When solving wave scattering problems with the Boundary Element Method (BEM), one usually faces the problem of storing a huge dense matrix whose size is proportional to the square of the number N of unknowns on the boundary of the scattering object. Several methods, among which the Fast Multipole Method (FMM) and H-matrices are the most celebrated, were developed to circumvent this obstruction. In both cases an approximation of the matrix is obtained with O(N log(N)) storage, and the matrix-vector product has the same complexity. This makes it possible to solve the problem by replacing the direct solver with an iterative method.
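As a purely illustrative sketch (not the implementation discussed in the talk), the Python snippet below shows why a fast matrix-vector product is all an iterative solver needs: SciPy's GMRES is fed a LinearOperator whose fast_matvec is a hypothetical placeholder where an FMM or H-matrix product would be plugged in.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    N = 1000                     # number of boundary unknowns

    # Dense stand-in: in a real BEM code this matrix is never formed;
    # an FMM or H-matrix library would supply the product instead.
    A_dense = np.eye(N) + np.random.default_rng(0).uniform(size=(N, N)) / N

    def fast_matvec(x):
        # Hypothetical placeholder for the O(N log N) compressed product.
        return A_dense @ x

    A = LinearOperator((N, N), matvec=fast_matvec, dtype=float)
    b = np.ones(N)

    # GMRES only ever calls fast_matvec; the matrix itself is not needed.
    x, info = gmres(A, b)
    assert info == 0             # 0 signals convergence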
The aim of the talk is to present an alternative method based on an accurate version of Fourier-based convolution. Built on the non-uniform FFT, the method, called the sparse cardinal sine decomposition (SCSD), ends up having the same complexity as the FMM with a much simpler implementation. We show in practice how the method works, and give applications in domains as different as the Laplace, Helmholtz, Maxwell and Stokes equations.
This is joint work with Matthieu Aussal.
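To make the Fourier-quadrature idea behind such methods concrete, here is a hedged toy sketch: if a kernel is approximated as K(r) ≈ Σ_k w_k e^{iξ_k r}, the dense sum Σ_j K(x_i − y_j) q_j factorizes into two non-uniform Fourier transforms. The nodes, weights and naive O(NK) transforms below are illustrative stand-ins (the naive transforms are what a non-uniform FFT would accelerate); this is not the actual cardinal sine decomposition.

    import numpy as np

    rng = np.random.default_rng(0)
    N, M, K = 200, 150, 64                   # sources, targets, Fourier nodes

    y = rng.uniform(-1, 1, N)                # source points
    x = rng.uniform(-1, 1, M)                # target points
    q = rng.standard_normal(N)               # source densities

    # Toy quadrature K(r) ~ sum_k w_k exp(1j*xi_k*r); these nodes and
    # weights encode a Gaussian kernel and are NOT the actual SCSD rule.
    xi = np.linspace(-20, 20, K)
    w = np.exp(-xi**2 / 50) * (xi[1] - xi[0])

    # Step 1 ("type-1" transform): c_k = sum_j exp(-1j*xi_k*y_j) q_j,
    # done naively in O(NK); a NUFFT would make it fast.
    c = np.exp(-1j * np.outer(xi, y)) @ q

    # Step 2 ("type-2" transform): u_i = sum_k w_k c_k exp(1j*xi_k*x_i).
    u = np.exp(1j * np.outer(x, xi)) @ (w * c)

    # Check against the direct O(N*M) summation of the same kernel.
    R = x[:, None] - y[None, :]
    u_direct = (np.exp(1j * R[..., None] * xi) @ w) @ q
    assert np.allclose(u, u_direct)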
65T50 ; 65R10 ; 65T40
Can modern signal processing be used to overcome the diffraction limit? The classical diffraction limit states that the resolution of a linear imaging system is fundamentally limited by one half of the wavelength of light. This implies that conventional light microscopes cannot distinguish two objects placed within a distance closer than 0.5 × 400 = 200 nm (blue light) or 0.5 × 700 = 350 nm (red light). This significantly impedes biomedical discovery by restricting our ability to observe biological structures and processes smaller than 100 nm. Recent progress in sparsity-driven signal processing has created a powerful paradigm for increasing both the resolution and the overall quality of imaging by promoting model-based image acquisition and reconstruction. This has led to multiple influential results demonstrating super-resolution in practical imaging systems. To date, however, the vast majority of work in signal processing has neglected the fundamental nonlinearity of the object-light interaction and its potential to lead to resolution enhancement. As a result, modern theory focuses heavily on linear measurement models that are truly effective only when object-light interactions are weak. Without a solid signal processing foundation for understanding such nonlinear interactions, we undervalue their impact on the transfer of information in image formation. This ultimately limits our capability to image a large class of objects, such as biological tissue, that are generally large in volume and interact strongly and nonlinearly with light.
The goal of this talk is to present recent progress in model-based imaging under multiple scattering. We will discuss several key applications including optical diffraction tomography, Fourier ptychography, and large-scale holographic microscopy. We will show that all these applications can benefit from models, such as the Rytov approximation and the beam propagation method, that take light scattering into account. We will discuss the integration of such models into state-of-the-art optimization algorithms such as FISTA and ADMM. Finally, we will describe the most recent work that uses learned priors to improve the quality of image reconstruction under multiple scattering.
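As a generic, hedged sketch of the optimization machinery mentioned above (not the speaker's code), the snippet below runs FISTA on min_x ½‖Ax − b‖² + λ‖x‖₁, with a random matrix standing in for a linearized scattering operator such as the Rytov model.

    import numpy as np

    rng = np.random.default_rng(1)
    m, n = 80, 200
    A = rng.standard_normal((m, n)) / np.sqrt(m)    # stand-in forward operator
    x_true = np.zeros(n)
    x_true[rng.choice(n, 8, replace=False)] = 1.0   # sparse object
    b = A @ x_true + 0.01 * rng.standard_normal(m)  # noisy measurements

    lam = 0.05                                      # sparsity weight
    L = np.linalg.norm(A, 2) ** 2                   # gradient Lipschitz constant

    def soft(v, t):                                 # prox of t*||.||_1
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    x = np.zeros(n); z = x.copy(); t = 1.0
    for _ in range(300):                            # FISTA iterations
        x_new = soft(z - A.T @ (A @ z - b) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)   # momentum step
        x, t = x_new, t_new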
94A12 ; 94A08 ; 65T50 ; 65N21 ; 65K10 ; 62H35
If $f$ is the sum of a lacunary trigonometric series, it is well defined once its restriction to a small interval is given. But how can it be recovered from this restriction? This is possible through a convex-analysis procedure, namely the minimal extension in the Wiener algebra. This minimal extension is the key to compressed sensing as presented by Emmanuel Candès at the 2006 ICM in Zurich and in an article by Candès, Romberg and Tao from the same year; I will give an overview of variants, in both methods and results, that I published in 2013 in the Annales de l'Institut Fourier.
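A hedged toy rendering of the minimal-extension idea: since the Wiener-algebra norm of f = Σ_k c_k e^{ikθ} is ‖c‖₁, extending f from its restriction amounts to an ℓ¹ problem over Fourier coefficients. The sketch below recovers a lacunary spectrum from samples on a small interval with a plain ISTA loop; the frequencies, interval and penalized (rather than exactly constrained) formulation are illustrative choices, not those of the paper.

    import numpy as np

    rng = np.random.default_rng(2)
    K = 64
    ks = np.arange(-K, K + 1)                 # frequencies -K..K

    c_true = np.zeros(len(ks), complex)       # lacunary spectrum: modes 1, 3, 9, 27
    c_true[[K + 1, K + 3, K + 9, K + 27]] = [1.0, 0.5, -0.8, 0.3]

    m = 200
    theta = rng.uniform(0.0, 0.4, m)          # samples on a small interval only
    Phi = np.exp(1j * np.outer(theta, ks))    # partial Fourier matrix
    f = Phi @ c_true                          # observed restriction of f

    lam = 1e-3                                # l1 penalty weight
    L = np.linalg.norm(Phi, 2) ** 2           # gradient Lipschitz constant

    def soft(v, t):                           # complex soft-thresholding
        mag = np.maximum(np.abs(v), 1e-12)
        return v * np.maximum(1 - t / mag, 0.0)

    c = np.zeros_like(c_true)
    for _ in range(3000):                     # ISTA on (1/2)||Phi c - f||^2 + lam ||c||_1
        c = soft(c - Phi.conj().T @ (Phi @ c - f) / L, lam / L)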
42A38 ; 42A55 ; 42A61 ; 65T50 ; 94A12 ; 94A20