Dobigeon, Nicolas (1981-...; teacher-researcher in signal processing)
Overview
Works: 27 works in 27 publications in 2 languages and 29 library holdings

Roles: Opponent, Thesis advisor, Other, Author, Contributor
Most widely held works by Nicolas Dobigeon
Unsupervised Bayesian linear unmixing of gene expression microarrays by Cécile Bazot
1 edition published in 2013 in English and held by 2 WorldCat member libraries worldwide
Sur quelques applications du codage parcimonieux et sa mise en oeuvre by Bertrand Coppa
1 edition published in 2013 in French and held by 2 WorldCat member libraries worldwide
Compressed sensing makes it possible to reconstruct a signal from a few linear projections, under the assumption that the signal admits a sparse representation, that is, one with only a few coefficients, on a known dictionary. Coding is very simple, and all the complexity is concentrated in the reconstruction. After a more detailed explanation of the principle of compressed sensing, some theoretical results from the literature, and a few simulations giving an idea of the expected performance, we focus on three problems: first, the design of a system using compressed sensing with a binary matrix and the benefits obtained; then, the construction of a dictionary for sparse representations of the signal; and lastly, the possibility of processing the signal without reconstruction, with an example in classification
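As a rough illustration of the first problem, the sketch below recovers a sparse signal from binary (+/-1) random projections using Orthogonal Matching Pursuit. This is a generic textbook reconstruction under idealized noiseless assumptions, not the specific system studied in the thesis; all sizes and names are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x with y ~= A @ x."""
    m, n = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        residual = y - A @ x
    return x

rng = np.random.default_rng(0)
n, m, k = 128, 48, 4
# binary (+/-1) sensing matrix with normalized columns: coding is a cheap sign mix
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
y = A @ x_true                      # only m = 48 measurements of an n = 128 signal
x_hat = omp(A, y, k)
print(np.linalg.norm(x_hat - x_true))
```

With this few active coefficients and an incoherent binary matrix, the greedy recovery is typically exact, which is the simplicity/complexity trade-off the abstract describes: trivial encoding, all the work at decoding.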
Inférence et décomposition modale de réseaux dynamiques en neurosciences by Gaëtan Frusque
1 edition published in 2020 in French and held by 1 WorldCat member library worldwide
Dynamic graphs make it possible to understand the evolution of complex systems that change over time. This type of graph has recently received considerable attention; however, there is no consensus on how to infer and study them. This thesis proposes methods for analyzing specific dynamic graphs, which can be viewed as a succession of complete graphs sharing the same nodes but whose link weights evolve over time. The proposed methods have potential applications in neuroscience or in the study of social networks such as Twitter and Facebook. The applicative focus of this thesis is epilepsy, one of the most widespread neurological diseases in the world, affecting about 1% of the population. The first part concerns the inference of dynamic graphs from neurophysiological signals. This inference is generally carried out using functional connectivity measures that assess the similarity between two signals, so comparing these measures is of great interest for understanding the characteristics of the resulting graphs. We compare functional connectivity measures involving the instantaneous phase and amplitude of the signals, focusing in particular on a measure called the Phase-Locking Value (PLV), which quantifies the phase synchrony between two signals. In order to infer robust and interpretable dynamic graphs, we then propose two new conditioned and regularized PLV measures. The second part presents methods for decomposing dynamic graphs. The objective is a semi-automatic method for characterizing the most important information in the pathological network across several seizures of the same patient. We first consider seizures with similar durations and temporal evolutions, to which a dedicated tensor decomposition is applied. We then consider seizures with heterogeneous durations; several strategies are proposed and compared. In addition to extracting the characteristic subgraphs common to all seizures, these methods reveal their seizure-specific temporal activation profiles. Finally, the selected method is used in a clinical application, and the obtained decompositions are compared with the clinician's visual interpretation. Overall, the extracted subgraphs correspond to the brain regions involved in the epileptic seizure, and the evolution of their activation is consistent with the visual interpretation
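The Phase-Locking Value has a standard definition that fits in a few lines. The sketch below is the generic Hilbert-transform implementation, not the conditioned/regularized variants proposed in the thesis; the test signals and parameters are made up for illustration.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV = |time average of exp(i * (phase_x - phase_y))|.
    1 means perfectly phase-locked; values near 0 mean no stable phase relation."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

rng = np.random.default_rng(1)
t = np.arange(0, 4, 1 / 256.0)                 # 4 s sampled at 256 Hz
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.normal(size=t.size)
y = np.sin(2 * np.pi * 10 * t + 0.8) + 0.1 * rng.normal(size=t.size)  # fixed lag
z = rng.normal(size=t.size)                    # unrelated noise channel
print(phase_locking_value(x, y), phase_locking_value(x, z))
```

Evaluating this measure on every pair of channels over sliding windows yields exactly the kind of dynamic complete graph described above: same nodes, time-varying link weights.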
Optimization framework for large-scale sparse blind source separation by Christophe Kervazo
1 edition published in 2019 in English and held by 1 WorldCat member library worldwide
Over the last decades, Blind Source Separation (BSS) has become a key tool for processing multi-valued data. The objective of this PhD is, however, to study the large-scale case, for which most classical algorithms suffer degraded performance. This document is organized in four parts, each addressing one aspect of the problem: i) the introduction of robust sparse BSS algorithms that require only a single run (despite a delicate choice of hyperparameters) and come with strong mathematical underpinnings; ii) a method for maintaining high separation quality when the number of sources is large; iii) the modification of a classical sparse BSS algorithm so that it scales to large data; and iv) an extension to the nonlinear sparse BSS problem. The proposed methods have been extensively tested, on both simulated and realistic data, to demonstrate their quality, and detailed interpretations of the results are provided
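The core alternation behind classical sparse BSS can be sketched very compactly. Below is a toy, GMCA-style loop (least-squares updates plus soft-thresholding of the sources), assuming noiseless data and a fixed threshold; it is not one of Kervazo's algorithms, and all sizes are illustrative.

```python
import numpy as np

def soft(u, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def sparse_bss(X, n_src, n_iter=200, thresh=0.05):
    """Toy sparse BSS: find A, S with X ~= A @ S and S sparse,
    by alternating least-squares updates and thresholding S."""
    m, _ = X.shape
    rng = np.random.default_rng(0)
    A = rng.normal(size=(m, n_src))
    for _ in range(n_iter):
        S = soft(np.linalg.pinv(A) @ X, thresh)   # sparse source update
        A = X @ np.linalg.pinv(S)                 # mixing-matrix update
        A /= np.linalg.norm(A, axis=0, keepdims=True)  # fix the scale ambiguity
    return A, S

rng = np.random.default_rng(2)
S_true = rng.normal(size=(2, 500)) * (rng.random((2, 500)) < 0.1)  # sparse sources
A_true = rng.normal(size=(4, 2))
X = A_true @ S_true
A_hat, S_hat = sparse_bss(X, 2)
err = np.linalg.norm(X - A_hat @ S_hat) / np.linalg.norm(X)
print(err)
```

The abstract's large-scale concern shows up immediately here: the pseudo-inverses and full passes over X are what stop scaling when the data no longer fit in memory.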
Fusion rapide d'images multispectrales et hyperspectrales en astronomie infrarouge by Claire Guilloteau
1 edition published in 2021 in French and held by 1 WorldCat member library worldwide
The James Webb Space Telescope (JWST) will be launched in 2021 and will provide multispectral images (with low spectral resolution) over wide fields of view (with high spatial resolution) and hyperspectral images (with high spectral resolution) over small fields of view (with lower spatial resolution). This Ph.D. thesis aims at developing fusion methods that combine those images to reconstruct the astrophysical scene at high spatial and spectral resolutions; the fused product will make data analysis significantly easier. This Ph.D. project is part of the Early Release Science observing program "Radiative Feedback of Massive Stars", which will be conducted in the first wave of the JWST scientific mission in September 2022. Fusing images of different spatial and spectral resolutions has been thoroughly investigated for remote sensing in Earth observation. The most powerful methods are based on the resolution of an inverse problem, i.e., on minimizing a cost function composed of a data-fidelity term complemented by a regularization term. The data-fidelity term is formulated from a forward model of the observation instruments, while the regularization term can be interpreted as prior information on the fused image. The main challenges of data fusion for the JWST come from the very large scale of the fused data, considerably larger than the sizes encountered in remote sensing, as well as from the complexity of both instruments. In the first part of this thesis, we propose a generic framework for simulating observations as they would be provided by two instruments on board the JWST: the NIRCam multispectral imager and the NIRSpec spectrometer. This protocol relies mainly on a reference image with high spatial and spectral resolutions and on a model of the instruments considered. In this work, the reference image is synthetically created by exploiting a realistic factorization of the spatial and spectral characteristics of a photodissociation region.
To simulate multi- and hyperspectral images, we derive an accurate observation model that satisfies the specifications of the NIRCam and NIRSpec instruments. This forward model takes into account the specificities of astrophysical observation instruments, namely a spectrally varying blur for each instrument and their particular noise characteristics. This generic framework, inspired by the well-known protocol of Wald et al. (2005), allows realistic data sets to be simulated and subsequently used to evaluate the performance of the fusion algorithms. We then exploit the forward model to formulate the fusion task as an inverse problem. In addition to the data-fitting term, various regularizations are explored. First, a spectral regularization is defined, based on a low-rank hypothesis on the fused image. Then, the following spatial regularizations are considered: Sobolev, weighted Sobolev, patch-based representations, and dictionary learning. To overcome the complexity of the instrumental models as well as the very large data volume, a fast implementation is proposed that solves the problem in the spatial Fourier domain and in a spectral subspace. Particular attention is paid to the uncertainties associated with the problem: errors in telescope jitter and in image misregistration
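The Fourier-domain trick mentioned at the end can be sketched on a toy single-band problem. The code below is a generic Sobolev-regularized closed-form deblurring, assuming a periodic convolution model; it only illustrates why solving in the Fourier domain is fast (all operators diagonalize), and is in no way the actual JWST fusion pipeline.

```python
import numpy as np

def fourier_sobolev_deblur(y, psf, lam=1e-3):
    """Closed-form minimizer of ||h * x - y||^2 + lam * ||grad x||^2 via FFT.
    Convolution and finite differences are diagonal in the Fourier domain,
    so the normal equations are solved pointwise on the spectrum."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=y.shape)
    # Fourier symbols of horizontal/vertical circular finite differences
    dx = np.zeros(y.shape); dx[0, 0], dx[0, -1] = 1, -1
    dy = np.zeros(y.shape); dy[0, 0], dy[-1, 0] = 1, -1
    D2 = np.abs(np.fft.fft2(dx))**2 + np.abs(np.fft.fft2(dy))**2
    X = np.conj(H) * np.fft.fft2(y) / (np.abs(H)**2 + lam * D2)
    return np.real(np.fft.ifft2(X))

n = 64
xx, yy = np.meshgrid(np.arange(n), np.arange(n))
img = ((xx // 16 + yy // 16) % 2).astype(float)          # checkerboard test image
g = np.exp(-((xx - n // 2)**2 + (yy - n // 2)**2) / (2 * 2.0**2))
psf = g / g.sum()                                        # centered Gaussian blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = fourier_sobolev_deblur(blurred, psf)
print(np.linalg.norm(blurred - img), np.linalg.norm(restored - img))
```

In the real problem the blur varies with wavelength and the solve is also restricted to a spectral subspace, but each spectral component still reduces to a pointwise division like the one above.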
Modèles bayésiens hiérarchiques pour le traitement multi-capteur by Nicolas Dobigeon (Book)
1 edition published in 2007 in French and held by 1 WorldCat member library worldwide
In order to extract relevant information coming from multiple sensors, new signal processing techniques have to be developed. The first part of this PhD thesis studies hierarchical Bayesian estimation algorithms for the joint segmentation of multiple time series. The proposed algorithms exploit the multidimensional nature of the segmentation problem, which provides better performance than using segmentations applied to each signal independently of the others. The use of Markov chain Monte Carlo methods allows one to overcome the difficulties related to the computational complexity of these inference methods. The second part of the thesis studies the problem referred to as unmixing of hyperspectral images. The unmixing of hyperspectral images can be formulated as an inverse problem with appropriate constraints. The hierarchical Bayesian algorithms initially developed for the segmentation of multiple time series are adapted to this unmixing problem
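The unmixing problem described above, "an inverse problem with appropriate constraints", can be sketched deterministically. The code below is a generic projected-gradient solver enforcing the usual abundance constraints (non-negativity and sum-to-one) via a simplex projection; it stands in for, but is not, the hierarchical Bayesian samplers developed in the thesis, and the endmember matrix is random for illustration.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto {a : a >= 0, sum(a) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (css[rho] - 1) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def unmix(M, y, n_iter=500):
    """Abundance estimation: min_a ||y - M a||^2 s.t. a on the simplex,
    by gradient steps followed by projection."""
    r = M.shape[1]
    a = np.full(r, 1.0 / r)
    step = 1.0 / np.linalg.norm(M, 2)**2          # 1 / Lipschitz constant
    for _ in range(n_iter):
        a = project_simplex(a - step * (M.T @ (M @ a - y)))
    return a

rng = np.random.default_rng(3)
M = rng.normal(size=(20, 3))          # 3 hypothetical endmember spectra, 20 bands
a_true = np.array([0.5, 0.3, 0.2])    # ground-truth abundances
y = M @ a_true                        # observed pixel spectrum
a_hat = unmix(M, y)
print(a_hat)
```

The Bayesian treatment in the thesis replaces this point estimate with a posterior explored by MCMC, which additionally quantifies the uncertainty on the abundances.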
Modèles bayésiens pour l'identification de représentations anti-parcimonieuses et l'analyse en composantes principales bayésienne non paramétrique by Clément Elvira
1 edition published in 2017 in French and held by 1 WorldCat member library worldwide
This thesis studies two models, one parametric and one nonparametric, for changing the representation of a signal, with different objectives. The first seeks a higher-dimensional representation in order to gain robustness: the goal is to spread the information of a signal uniformly over all the components of its higher-dimensional representation. Finding such a code can be cast as an inverse problem involving an infinity-norm regularization. We propose a Bayesian formulation of the problem involving a new probability distribution, coined the democratic distribution, which penalizes large amplitudes. Two proximal MCMC algorithms are presented to approximate Bayesian estimators; the unsupervised method presented is called BAC1. Numerical experiments illustrate the performance of the approach for crest-factor reduction. The second model identifies a relevant lower-dimensional subspace for modeling purposes. Existing probabilistic methods, however, generally require the subspace dimension to be fixed in advance. This work introduces BNP-PCA, a Bayesian nonparametric version of principal component analysis. The method couples a uniform distribution over orthonormal bases with an Indian buffet process prior to promote a parsimonious use of the principal components, so that no tuning is needed. Inference is carried out with MCMC methods. The estimation of the subspace dimension and the numerical behavior of BNP-PCA are studied, and the flexibility of BNP-PCA is demonstrated on two applications
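The infinity-norm regularization at the heart of the first model has a closed-form proximal operator, obtained by Moreau decomposition from the projection onto an l1 ball. The sketch below is a generic implementation of that operator, not the proximal MCMC algorithms of the thesis; the test vector is arbitrary.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of v onto the l1 ball of the given radius."""
    if np.sum(np.abs(v)) <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(v, lam):
    """Proximal operator of lam * ||.||_inf via Moreau decomposition:
    prox_{lam f}(v) = v - proj_{lam * B1}(v).
    It clips the largest entries down to a common level, i.e. it
    'democratizes' the amplitudes, which is what reduces the crest factor."""
    return v - project_l1_ball(v, lam)

v = np.array([3.0, -2.5, 0.4, 0.1])
print(prox_linf(v, 1.0))
```

Applied here, the two dominant entries are pulled down to the same magnitude while the small ones are untouched; iterating such proximal steps inside a sampler is what the thesis's proximal MCMC schemes do.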
From representation learning to thematic classification - Application to hierarchical analysis of hyperspectral images by Adrien Lagrange
1 edition published in 2019 in English and held by 1 WorldCat member library worldwide
Numerous frameworks have been developed to analyze the increasing amount of available image data. Among these methods, supervised classification has received considerable attention, leading to the development of state-of-the-art classification methods. These methods aim at inferring the class of each observation, given a specific class nomenclature, by exploiting a set of labeled observations. Thanks to the extensive research efforts of the community, classification methods have become very efficient. Nevertheless, the result of a classification remains a high-level interpretation of the scene, since it only assigns a single class to summarize all the information in a given pixel. Contrary to classification methods, representation learning methods are model-based approaches designed specifically to handle high-dimensional data and extract meaningful latent variables. By using physics-based models, these methods allow the user to extract very meaningful variables and obtain a very detailed interpretation of the considered image. The main objective of this thesis is to develop a unified framework for classification and representation learning. These two methods provide complementary approaches within a hierarchical modeling scheme: representation learning builds a low-level model of the data, whereas classification incorporates supervised information and may be seen as a high-level interpretation of the data. Two different paradigms, namely Bayesian models and optimization approaches, are explored to set up this hierarchical model. The proposed models are then tested in the specific context of hyperspectral imaging, where the representation learning task is specified as a spectral unmixing problem
Méthodes bayésiennes pour l'analyse génétique by Cécile Bazot
1 edition published in 2013 in French and held by 1 WorldCat member library worldwide
In the past few years, genomics has received growing scientific interest, particularly since the map of the human genome was completed and published in the early 2000s. Currently, medical teams are facing a new challenge: processing the signals issued by DNA microarrays. These signals, often voluminous, make it possible to discover the expression level of a gene in a given tissue at any time, under specific conditions (phenotype, treatment, ...). The aim of this research is to identify temporal gene expression profiles characteristic of the host response to a pathogen, in order to detect or even prevent a disease in a group of observed patients. The solutions developed in this thesis decompose these signals into elementary factors (genetic signatures) following a Bayesian linear mixing model, allowing joint estimation of these factors and their relative contributions to each sample. Markov chain Monte Carlo methods are particularly suitable for the proposed hierarchical Bayesian models, as they make it possible to overcome the difficulties related to their computational complexity
Modèles bayésiens hiérarchiques pour le traitement multi-capteur by Nicolas Dobigeon
1 edition published in 2008 in French and held by 1 WorldCat member library worldwide
In order to extract relevant information coming from multiple sensors, new signal processing techniques have to be developed. The first part of this PhD thesis studies hierarchical Bayesian estimation algorithms for the joint segmentation of multiple time series. The proposed algorithms exploit the multidimensional nature of the segmentation problem, which provides better performance than using segmentations applied to each signal independently of the others. The use of Markov chain Monte Carlo methods allows one to overcome the difficulties related to the computational complexity of these inference methods. The second part of the thesis studies the problem referred to as unmixing of hyperspectral images. The unmixing of hyperspectral images can be formulated as an inverse problem with appropriate constraints. The hierarchical Bayesian algorithms initially developed for the segmentation of multiple time series are adapted to this unmixing problem
Analyse massive d'images multi-angulaires hyperspectrales de la planète Mars par régression inverse de modèles physiques by Benoit Kugler
1 edition published in 2021 in French and held by 1 WorldCat member library worldwide
The objective of the thesis is to develop a statistical learning technique suitable for the inversion of complex physical models. The two main difficulties addressed are, on the one hand, the massive number of observations to be inverted and, on the other hand, the need to quantify the uncertainty on the inversion, which can come from the physical model or from the measurements. In a Bayesian inversion framework, we therefore propose a two-step approach: a learning step that fits a parametric statistical model (GLLiM) common to all the observations, then a prediction step, repeated for each measurement but fast enough to support a large dataset. In addition, we explore sampling techniques to refine the trade-off between computation time and inversion precision. Although general, the proposed approach is applied mainly to a complex inverse problem in planetary remote sensing: using a semi-empirical spectrophotometric model (Hapke's model) to analyze reflectance measurements and indirectly recover the textural characterization of the examined material. Several datasets are studied, from laboratory measurements as well as massive satellite images. Finally, we exploit the versatility of the GLLiM model to explore several issues related to Bayesian inversion. In particular, we propose an indicator to assess the influence of the choice of the direct model on the quality of the inversion. We also extend the GLLiM model to take a priori information into account, making it suitable for solving data assimilation problems
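GLLiM itself learns a mixture of locally linear forward models; as a hedged sketch of the underlying two-step idea only (learn a forward model once, then invert each measurement cheaply with quantified uncertainty), the snippet below performs Bayesian inversion of a single linear forward model under a Gaussian prior. The function and parameter names are illustrative, not from the thesis.

```python
import numpy as np

def gaussian_inverse(A, y, noise_var, prior_var=1.0):
    """Posterior mean and covariance of x for y = A @ x + noise,
    with x ~ N(0, prior_var * I) and Gaussian noise.  Once A is known
    (the 'learning step'), inverting each new y is a cheap linear-algebra
    solve (the 'prediction step'), and the posterior covariance
    quantifies the uncertainty on the inversion."""
    d = A.shape[1]
    prec = A.T @ A / noise_var + np.eye(d) / prior_var  # posterior precision
    cov = np.linalg.inv(prec)
    mean = cov @ (A.T @ y) / noise_var
    return mean, cov

# usage on a toy forward model
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 3))
x_true = np.array([1.0, -2.0, 0.5])
y = A @ x_true + 0.01 * rng.normal(size=30)
mean, cov = gaussian_inverse(A, y, noise_var=1e-4)
```

For a nonlinear model such as Hapke's, GLLiM replaces the single matrix `A` with a mixture of local linearizations, but the per-measurement prediction step keeps this closed-form flavor.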
Reconstruction rapide d'images multibandes partiellement échantillonnées en spectromicroscopie EELS by Etienne Monier
1 edition published in 2020 in French and held by 1 WorldCat member library worldwide
In electron energy loss spectroscopy (EELS), the sample to be analyzed is exposed to an electron beam, and the measurement of the energy lost by electrons passing through the material informs about its chemical composition. For samples particularly sensitive to electron irradiation damage, such as organic materials, the experimenter is constrained to reduce the total electron dose received by the sample while still obtaining a satisfying signal-to-noise ratio. With the recent development of sampling modules adapted to scanning transmission electron microscopes (STEM), the initial raster (i.e., line-by-line) acquisition has become highly configurable: it is now possible to visit any set of spatial positions during the acquisition. Building on these technical advances, many works have proposed optimized acquisition schemes to preserve sensitive samples. For a global electron dose equivalent to standard sampling, these strategies visit fewer spatial positions, i.e., they perform partial sampling. Consequently, a higher electron dose per spatial position is allowed, which increases the signal-to-noise ratio of each sampled spectrum. A post-processing step is then required to infer the missing spectra. Among the reconstruction techniques used in the literature, interpolation methods are fast but rather inaccurate; they are particularly useful for displaying the full image during the acquisition process. In contrast, dictionary-learning-based methods perform very well but are memory- and computation-intensive; they are preferred for refining the reconstructed image after the experiment. Only a few works attempt to fill the gap between these two regimes. The main objective of this Ph.D. thesis is to propose fast and accurate reconstruction algorithms for STEM-EELS imaging. Like the interpolation methods, they should be fast enough to visualize the reconstructed image during the acquisition. Meanwhile, they should achieve better reconstruction performance than interpolation, close to that of dictionary-learning-based methods. To that end, regularized least-squares methods are proposed for spatially smooth samples and for periodic crystalline samples. The proposed algorithms are then tested on synthetic as well as real data. The interest of partial-sampling-based methods and their performance with respect to other reconstruction methods are evaluated
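As a hedged sketch of the regularized least-squares reconstruction idea for spatially smooth samples (an illustrative stand-in, not the thesis's algorithms), the snippet below fills the unvisited pixels of a partially sampled image by solving one linear system that trades data fidelity at sampled positions against a grid-Laplacian smoothness penalty.

```python
import numpy as np

def smooth_inpaint(img, mask, lam=0.5):
    """Reconstruct unobserved pixels by solving the normal equations of
    ||mask * (x - img)||^2 + lam * x^T L x, where L is the grid graph
    Laplacian.  Illustrative: a dense solve, feasible only for small
    images."""
    h, w = img.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    L = np.zeros((n, n))
    # accumulate the graph Laplacian over horizontal and vertical edges
    for a, b in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
        for i, j in zip(a.ravel(), b.ravel()):
            L[i, i] += 1; L[j, j] += 1
            L[i, j] -= 1; L[j, i] -= 1
    M = np.diag(mask.ravel().astype(float))
    x = np.linalg.solve(M + lam * L, M @ img.ravel())
    return x.reshape(h, w)

# usage: recover a smooth image from ~40% of its pixels
rng = np.random.default_rng(1)
img = np.ones((8, 8))
mask = rng.random((8, 8)) < 0.4
mask[0, 0] = True                      # ensure at least one observed pixel
rec = smooth_inpaint(img, mask)
```

At realistic image sizes the same system would be solved matrix-free, e.g. with a sparse conjugate-gradient solver, which is what makes this family of reconstructions fast enough to run during acquisition.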
Approche bayésienne pour la sélection de modèles : application à la restauration d'image by Benjamin Harroue
1 edition published in 2020 in French and held by 1 WorldCat member library worldwide
The main goal of inversion is to reconstruct objects from data. Here, we focus on the special case of image restoration in convolution (deblurring) problems. The data are acquired through an altering observation system and additionally corrupted by errors; the problem becomes ill-posed due to the resulting loss of information. One way to tackle it is the Bayesian approach, which regularizes the problem: introducing prior information about the unknown quantities offsets the loss of information, and relies on stochastic models. We then have to test all the candidate models in order to select the best one. But some questions remain: how do we choose the best model, and on which features or quantities should we rely? In this work, we propose a method to compare and choose the model automatically, based on Bayesian decision theory: the models are compared objectively through their posterior probabilities. These probabilities depend directly on the marginal likelihood, or "evidence", of the models. The evidence results from marginalizing the joint law over the unknown image and the unknown hyperparameters. This is a difficult integral computation because of the complex dependencies between the quantities and the high dimension of the image, so we must resort to computational methods and approximations. Several methods are put to the test: the harmonic mean, the Laplace method, discrete integration, Chib's method based on Gibbs sampling, and power posteriors. Comparing these methods is a significant step toward determining which are the most competent for image restoration. As a first line of research, we focus on the family of Gaussian models with circulant covariance matrices in order to reduce some of the difficulties
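As a hedged illustration of the model-evidence computations compared in the thesis, the snippet below contrasts the analytically available evidence of a toy conjugate Gaussian model with the harmonic-mean estimator built from posterior samples (one of the candidates listed above). The toy model and all names are illustrative, not the thesis's image models.

```python
import numpy as np

def log_evidence_exact(y, s2=1.0, t2=0.25):
    """Evidence of y ~ N(theta, s2) with theta ~ N(0, t2): available in
    closed form for this conjugate toy model, y ~ N(0, s2 + t2)."""
    v = s2 + t2
    return -0.5 * (np.log(2 * np.pi * v) + y ** 2 / v)

def log_evidence_harmonic(y, s2=1.0, t2=0.25, n=200000, seed=0):
    """Harmonic-mean estimator: 1/p(y) ~= E_posterior[1/p(y|theta)],
    computed here from exact draws of the conjugate posterior."""
    rng = np.random.default_rng(seed)
    post_var = s2 * t2 / (s2 + t2)
    post_mean = y * t2 / (s2 + t2)
    theta = rng.normal(post_mean, np.sqrt(post_var), n)
    loglik = -0.5 * (np.log(2 * np.pi * s2) + (y - theta) ** 2 / s2)
    # log of mean(exp(-loglik)), computed stably via the max trick
    m = np.max(-loglik)
    return -(m + np.log(np.mean(np.exp(-loglik - m))))
```

On this toy problem the two values agree; the harmonic mean is nonetheless notoriously high-variance in general, which is precisely why the thesis benchmarks it against Laplace, Chib and power-posterior alternatives.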
Méthodes Bayésiennes pour le démélange d'images hyperspectrales by Olivier Eches
1 edition published in 2010 in French and held by 1 WorldCat member library worldwide
Hyperspectral imaging is widely used in remote sensing for various applications, in the civilian as well as the military domain. A hyperspectral image results from acquiring a single scene observed at several wavelengths; consequently, each pixel of the image is represented by a vector of measurements (usually reflectances) called a spectrum. A major step in the analysis of hyperspectral data consists in identifying the macroscopic components (signatures) present in the observed region and their corresponding proportions (abundances). The latest techniques developed for these analyses do not model such images correctly: they usually assume the existence of pure pixels in the image, i.e., pixels made of a single pure material. Yet a pixel is rarely composed of pure elements distinct from one another, so estimates based on these models can turn out to be far from reality. The goal of this study is to propose new estimation algorithms based on a model better suited to the intrinsic properties of hyperspectral images. The unknown parameters of the model are then inferred within a Bayesian framework, and the use of Markov chain Monte Carlo (MCMC) methods makes it possible to overcome the difficulties related to the complex computations required by these estimation methods
Inversion for textured images: unsupervised myopic deconvolution, model selection, deconvolution-segmentation by Cornelia Paula Văcar
1 edition published in 2014 in English and held by 1 WorldCat member library worldwide
This thesis addresses a series of inverse problems of major importance in the field of image processing (image segmentation, model choice, parameter estimation, deconvolution) in the context of textured images. In all of these problems the observations are indirect, i.e., the textured images are affected by blur and noise. The contributions of this work fall into three main classes: modeling, methodological, and algorithmic. From the modeling standpoint, the contribution is the development of a new non-Gaussian model for textures: the Fourier coefficients of the textured images are modeled by a scale mixture of Gaussians random field, and the power spectral density of the texture has a parametric form, driven by a set of parameters that encode the texture characteristics. The methodological contribution is threefold and consists in solving three image processing problems that had not been tackled so far in the context of indirect observations of textured images. All the proposed methods are Bayesian and exploit the information encoded in the posterior law. The first method is devoted to the myopic deconvolution of a textured image and the estimation of its parameters. The second achieves joint model selection and model parameter estimation from an indirect observation of a textured image. Finally, the third addresses the joint deconvolution and segmentation of an image composed of several textured regions, while simultaneously estimating the parameters of each constituent texture. Last but not least, the algorithmic contribution is the development of a new, efficient version of the Metropolis-Hastings algorithm, with a directional component of the proposal function based on the "Newton direction" and the Fisher information matrix. This directional component allows for an efficient exploration of the parameter space and, consequently, increases the convergence speed of the algorithm. To summarize, this work presents a series of methods to solve three image processing problems in the context of blurry and noisy textured images, together with two connected contributions: one regarding the texture models, and one meant to enhance the performance of the samplers employed in all three methods
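As a hedged sketch of a Metropolis-Hastings proposal with a curvature-informed directional drift (a simplified stand-in for the Newton/Fisher proposal described above, here with a fixed rather than state-dependent preconditioner), the snippet below implements a preconditioned Langevin-style sampler. All names are illustrative.

```python
import numpy as np

def directional_mh(logp, grad, hess_inv, x0, n=20000, step=0.5, seed=0):
    """Metropolis-Hastings whose proposal drifts along the Newton-like
    direction hess_inv @ grad(x); hess_inv is a fixed preconditioner
    (inverse curvature of -logp), simplifying the state-dependent Fisher
    matrix used in the thesis."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    chol = np.linalg.cholesky(step * hess_inv)   # proposal covariance factor
    def logq(a, mu):                             # log N(a; mu, step*hess_inv) + const
        d = a - mu
        return -0.5 * d @ np.linalg.solve(step * hess_inv, d)
    samples = np.empty((n, x.size))
    for i in range(n):
        drift = x + 0.5 * step * hess_inv @ grad(x)
        prop = drift + chol @ rng.normal(size=x.size)
        drift_p = prop + 0.5 * step * hess_inv @ grad(prop)
        # Metropolis-Hastings ratio with the asymmetric drifted proposal
        loga = logp(prop) - logp(x) + logq(x, drift_p) - logq(prop, drift)
        if np.log(rng.random()) < loga:
            x = prop
        samples[i] = x
    return samples

# usage: a correlated 2-D Gaussian target, preconditioned by its covariance
C = np.array([[1.0, 0.8], [0.8, 1.0]])
Ci = np.linalg.inv(C)
samples = directional_mh(lambda z: -0.5 * z @ Ci @ z,
                         lambda z: -Ci @ z,
                         hess_inv=C, x0=np.zeros(2))
```

The preconditioner aligns the proposal with the target's geometry, which is the mechanism behind the faster exploration claimed in the abstract; the thesis version recomputes the curvature at each state.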
Subspace-Based Bayesian Blind Source Separation for Hyperspectral Imagery
1 edition published in 2009 in English and held by 1 WorldCat member library worldwide
In this paper, a fully Bayesian algorithm for endmember extraction and abundance estimation in hyperspectral imagery is introduced. Following the linear mixing model, each pixel spectrum of the hyperspectral image is decomposed as a linear combination of pure endmember spectra. The estimation of the unknown endmember spectra and the corresponding abundances is conducted in a unified manner by generating the posterior distribution of the unknown parameters under a hierarchical Bayesian model. The proposed model accounts for the non-negativity and full-additivity constraints, and exploits the fact that the endmember spectra lie in a lower-dimensional subspace. A Gibbs algorithm is proposed to generate samples distributed according to the posterior of interest. Simulation results illustrate the accuracy of the proposed joint Bayesian estimator
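The constraints above (non-negative abundances summing to one) can be illustrated outside the Bayesian machinery. As a hedged sketch, the snippet below estimates the abundances of a single pixel under the linear mixing model y = M a + n by projected gradient descent onto the probability simplex; the endmember matrix and all names are illustrative, and the paper itself samples these quantities with a Gibbs sampler rather than optimizing.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto {a : a >= 0, sum(a) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def unmix_pixel(M, y, iters=2000):
    """Least-squares abundance estimate under the non-negativity and
    full-additivity constraints of the linear mixing model."""
    a = np.full(M.shape[1], 1.0 / M.shape[1])
    step = 1.0 / np.linalg.norm(M, 2) ** 2    # 1/L for the LS gradient
    for _ in range(iters):
        a = project_simplex(a - step * M.T @ (M @ a - y))
    return a

# usage: recover the abundances of a noiseless mixed pixel
rng = np.random.default_rng(0)
M = rng.random((50, 3))                       # illustrative endmember spectra
a_true = np.array([0.6, 0.3, 0.1])
a_hat = unmix_pixel(M, M @ a_true)
```

The Bayesian treatment of the paper delivers, on top of such a point estimate, full posterior uncertainty on both the abundances and the endmembers.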
Factor analysis of dynamic PET images by Yanna Cruz Cavalcanti
1 edition published in 2018 in English and held by 1 WorldCat member library worldwide
Caractérisation des ions d'oxygène dans les mémoires résistives soumises à polarisation électrique par techniques de TEM avancées by Édouard Villepreux
1 edition published in 2020 in French and held by 1 WorldCat member library worldwide
As storage capacity needs keep growing, research on emerging memory-device technologies and their development is booming. Among emerging memories, this thesis focuses on oxide-based resistive random-access memories (OxRRAM). The movement of oxygen ions during the electrical switching of this type of memory is still poorly understood, and understanding it would make it possible to improve and optimize these devices. TEM, combined with electron energy loss spectroscopy (EELS), allows the observation of variations in the distribution of oxygen ions in this type of stack, and the latest technical developments also allow in-situ electrical biasing. The sample holders, on which the chosen sample preparation depends, as well as the artifacts to be taken into account during operando switching, each have their own specific features. In this thesis, three sample holders dedicated to electrical biasing in TEM are presented: those from NanoFactory, Hummingbird, and Protochips. The first two are tip-based holders, while the last is a chip-based holder. A processing protocol for EELS hyperspectral images based on the VCA algorithm was developed and applied to two types of memory stacks. The first is a reference memory stack based on SrTiO3, on which two studies were carried out. A first study was conducted on previously acquired data, whose results had already been published, for a stack based on crystalline SrTiO3; this analysis confirmed that the hyperspectral image processing protocol works properly. A second analysis was performed on a little-known memory stack based on polycrystalline SrTiO3. The operando STEM-EELS analysis of this second sample was carried out using the Protochips chip-based holder combined with the developed data processing, making it possible to learn more about it. This second analysis showed that the VCA-based data processing can provide information complementary to conventional processing. The second type of stack studied is a memory device based on La2NiO4, designed for neuromorphic applications because of its volatile behavior. The characterization and data processing protocols developed during this thesis can thus serve as a basis for studying other micrometer-scale memory devices
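The hyperspectral processing protocol above is built on the VCA (vertex component analysis) algorithm. As a hedged sketch of VCA's core geometric idea only (the full algorithm includes SNR-dependent subspace projections not reproduced here), the snippet below repeatedly draws a direction orthogonal to the endmembers already found and keeps the pixel with the most extreme projection; all names and the synthetic data are illustrative.

```python
import numpy as np

def vca_like(Y, p, seed=0):
    """Pick p candidate endmembers from the pixel matrix Y
    (bands x pixels) by repeated orthogonal projection: the extreme of a
    linear functional over convex mixtures is attained at a vertex,
    i.e., at a pure pixel when one exists."""
    rng = np.random.default_rng(seed)
    E = np.zeros((Y.shape[0], 0))
    idx = []
    f = rng.normal(size=Y.shape[0])              # random initial direction
    for _ in range(p):
        if E.shape[1] > 0:
            # direction orthogonal to the span of the endmembers found so far
            P = np.eye(Y.shape[0]) - E @ np.linalg.pinv(E)
            f = P @ rng.normal(size=Y.shape[0])
        k = int(np.argmax(np.abs(f @ Y)))
        idx.append(k)
        E = np.column_stack([E, Y[:, k]])
    return idx, E

# usage: three pure pixels hidden among 100 random mixtures
rng = np.random.default_rng(2)
E_true = rng.random((40, 3)) + 0.1               # synthetic spectra
A = rng.dirichlet(np.ones(3), size=100).T        # abundances (3 x 100)
Y = np.column_stack([E_true, E_true @ A])        # pure pixels at columns 0..2
idx, E = vca_like(Y, 3)
```

On EELS spectrum images the recovered endmembers play the role of elemental or phase signatures, whose spatial abundance maps are then obtained by unmixing.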
Performances et méthodes pour l'échantillonnage comprimé : Robustesse à la méconnaissance du dictionnaire et optimisation du noyau d'échantillonnage by Stéphanie Bernhardt
1 edition published in 2016 in French and held by 1 WorldCat member library worldwide
In this thesis, we are interested in two different low-rate sampling schemes that challenge Shannon's theory: the sampling of finite-rate-of-innovation signals and compressed sensing. It has recently been shown that, using an appropriate sampling kernel, finite-rate-of-innovation signals can be perfectly sampled even though they are non-bandlimited. In the presence of noise, reconstruction is achieved by a model-based estimation procedure. In this thesis, we consider the estimation of the amplitudes and delays of a finite stream of Dirac pulses using an arbitrary kernel, and the estimation of a finite stream of arbitrary pulses using the Sum of Sincs (SoS) kernel. In both scenarios, we derive the Bayesian Cramér-Rao bound (BCRB) for the parameters of interest. The SoS kernel is interesting since it is fully configurable by a vector of weights. In the first scenario, based on convex optimization tools, we propose a new kernel minimizing the BCRB on the delays, while in the second scenario we propose a family of kernels maximizing the Bayesian Fisher information, i.e., the total amount of information that the measurements carry about each parameter. The advantage of the proposed family is that it can be user-adjusted to favor either of the estimated parameters. Compressed sensing is a promising emerging domain which outperforms the classical limit of Shannon sampling theory when the measurement vector can be approximated as a linear combination of a few basis vectors extracted from a redundant dictionary matrix. Unfortunately, in realistic scenarios, the knowledge of this basis, or equivalently of the entire dictionary, is often uncertain, i.e., corrupted by a basis mismatch (BM) error. The related estimation problem is based on matching continuous parameters of interest to a parameter set discretized over a regular grid. Generally, the parameters of interest do not lie on this grid, so an estimation error exists even at high signal-to-noise ratio (SNR); this is the off-grid (OG) problem. The consequence of the BM and OG mismatch problems is that the estimation accuracy, in terms of Bayesian mean square error (BMSE), of popular sparse-based estimators collapses even when the support is perfectly estimated and in the high-SNR regime. This saturation effect considerably limits the practical viability of these estimation schemes. In this thesis, the BCRB is derived for the CS model with unstructured BM and OG errors. We show that even though both problems share a very close formalism, they lead to different performances. In the biased-dictionary estimation context, we propose and study analytical expressions of the BMSE on the estimation of the grid error at high SNR. We also show that this class of estimators is efficient, i.e., it reaches the BCRB at high SNR. The proposed results are illustrated in the context of line spectra analysis for several popular sparse estimators. We also study the expected Cramér-Rao bound (ECRB) on the estimation of the amplitude for a small OG error and show that it follows well the behavior of practical estimators over a wide SNR range. In the context of BM and OG errors, we propose two new estimation schemes, called the Bias-Correction Estimator (BiCE) and Off-Grid Error Correction (OGEC) respectively, and study their statistical properties in terms of theoretical bias and variance. Both estimators are essentially based on an oblique projection of the measurement vector and act as a post-processing estimation layer for any sparse-based estimator, considerably mitigating the BM (respectively OG) degradation. The proposed estimators are generic, since they can be associated with any sparse-based estimator, fast, and have good statistical properties. To illustrate our results and propositions, they are applied in the challenging context of the compressive sampling of finite-rate-of-innovation signals
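The sparse-based estimators discussed above operate on a redundant dictionary. As a hedged baseline sketch illustrating the on-grid sparse model only (not the BiCE/OGEC corrections), the snippet below implements Orthogonal Matching Pursuit over a random dictionary; the dictionary and all names are illustrative, and a parameter lying off this grid is exactly the source of the OG error the thesis analyzes.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k dictionary atoms
    by correlation with the residual, refitting by least squares on the
    selected support at each step."""
    support = []
    r = y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# usage: exact recovery of a 3-sparse vector whose atoms lie on the grid
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
x0 = np.zeros(128)
x0[[5, 40, 90]] = [2.0, -1.5, 1.0]
x_hat = omp(D, D @ x0, k=3)
```

When the true parameters fall between grid points, the same estimator saturates at high SNR; the BiCE/OGEC schemes of the thesis are designed as a post-processing layer on top of such a sparse estimate.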
Nonlinear unmixing of Hyperspectral images by
Yoann Altmann
1 edition published in 2013 in English and held by 1 WorldCat member library worldwide
Spectral unmixing is one of the major issues arising when analyzing hyperspectral images. It consists of identifying the macroscopic materials present in a hyperspectral image and quantifying the proportions of these materials in the image pixels. Most unmixing techniques rely on a linear mixing model, which is often considered a first approximation of the actual mixtures. However, the linear model can be inaccurate for some specific images (for instance, images of scenes involving multiple reflections), and more complex nonlinear models must then be considered to analyze such images. The aim of this thesis is to study new nonlinear mixing models and to propose associated algorithms to analyze hyperspectral images. First, a post-nonlinear model is investigated and efficient unmixing algorithms based on this model are proposed. The prior knowledge about the components present in the observed image, their proportions, and the nonlinearity parameters is taken into account using Bayesian inference. The second model considered in this work is based on the approximation, using Gaussian processes, of the nonlinear manifold which contains the observed pixels. The proposed algorithm estimates the relation between the observations and the unknown material proportions without explicit dependency on the material spectral signatures, which are estimated subsequently. Considering nonlinear effects in hyperspectral images usually requires more complex unmixing strategies than those assuming linear mixtures. Since the linear mixing model is often sufficient to approximate most actual mixtures accurately, it is interesting to detect the pixels or regions where the linear model is accurate. This nonlinearity detection can be applied as a pre-processing step, and nonlinear unmixing strategies can then be applied only to the pixels requiring nonlinear models. 
The last part of this thesis focuses on new nonlinearity detectors based on linear and nonlinear models to identify pixels or regions where nonlinear effects occur in hyperspectral images. The proposed nonlinear unmixing algorithms improve the characterization of hyperspectral images compared to methods based on a linear model: they reduce the reconstruction errors and provide better spectral signature and abundance estimates when the observed pixels result from nonlinear mixtures. Simulation results conducted on synthetic and real images illustrate the advantage of using nonlinearity detectors for hyperspectral image analysis. In particular, the proposed detectors can identify components which are present in few pixels (and hardly distinguishable) and locate areas where significant nonlinear effects occur (shadow, relief, etc.). Moreover, it is shown that considering spatial correlation in hyperspectral images can improve the performance of nonlinear unmixing and nonlinearity detection algorithms
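The linear mixing model and the idea of flagging nonlinear pixels through their reconstruction error can be sketched as follows (a hypothetical toy example, not the detectors proposed in the thesis; the endmember matrix, the bilinear interaction term, and the simple constrained least-squares unmixer are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
L, R = 50, 3                          # spectral bands, number of endmembers
M = rng.uniform(0.1, 0.9, (L, R))     # hypothetical endmember spectral signatures

def unmix_ls(y, M):
    """Least-squares abundance estimate with nonnegativity and sum-to-one
    enforced crudely by clipping and renormalizing (illustrative only)."""
    a, *_ = np.linalg.lstsq(M, y, rcond=None)
    a = np.clip(a, 0.0, None)
    return a / a.sum()

a_true = np.array([0.6, 0.3, 0.1])
y_lin = M @ a_true                               # linear mixture: y = M a
y_bilin = y_lin + 0.4 * (M[:, 0] * M[:, 1])      # add a bilinear interaction term

for name, y in [("linear", y_lin), ("bilinear", y_bilin)]:
    a_hat = unmix_ls(y, M)
    err = np.linalg.norm(y - M @ a_hat)          # linear-model reconstruction error
    print(name, "reconstruction error:", round(float(err), 4))
```

A linearly mixed pixel is reconstructed almost perfectly, while the pixel containing a bilinear term leaves a clearly nonzero residual, which is the kind of discrepancy a nonlinearity detector can exploit before deciding whether a nonlinear unmixing strategy is needed.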
Related Identities
 Institut national polytechnique (Toulouse / 1969....). Degree grantor
 Tourneret, Jean-Yves (19......; enseignant-chercheur en traitement du signal) Opponent Thesis advisor Contributor
 École doctorale Mathématiques, informatique et télécommunications (Toulouse) Other
 Institut de Recherche en Informatique de Toulouse (1995....). Other
 Forbes, Florence Other Opponent Thesis advisor
 Moussaoui, Saïd (1977....). Other Opponent
 Giovannelli, Jean-François (1966....). Other Opponent Thesis advisor
 Michel, Olivier (1963....; auteur en traitement du signal) Other Opponent Thesis advisor
 Oberlin, Thomas (19......; enseignant-chercheur en informatique) Opponent Thesis advisor
 École doctorale des sciences physiques et de l'ingénieur (Talence, Gironde) Other