École doctorale Mathématiques, Sciences et Technologies de l'Information et de la Communication (Champs-sur-Marne, Seine-et-Marne / 2010-2015)
Overview
Works:  229 works in 229 publications in 2 languages and 314 library holdings 

Most widely held works by École doctorale Mathématiques, Sciences et Technologies de l'Information et de la Communication (Champs-sur-Marne, Seine-et-Marne / 2010-2015)
Quelques modèles mathématiques en chimie quantique et propagation d'incertitudes by Virginie Ehrlacher
1 edition published in 2012 in English and held by 2 WorldCat member libraries worldwide
The contributions of this thesis are twofold. The first part deals with the study of local defects in crystalline materials. Chapter 1 gives a brief overview of the main models used in quantum chemistry for electronic structure calculations. In Chapter 2, an exact variational model for the description of local defects in a periodic crystal in the framework of the Thomas-Fermi-von Weizsäcker theory is presented. It is justified by means of thermodynamic limit arguments. In particular, it is proved that the defects modeled within this theory are necessarily neutrally charged. Chapters 3 and 4 are concerned with the so-called spectral pollution phenomenon: when an operator is discretized, spurious eigenvalues that do not belong to the spectrum of the original operator may appear. In Chapter 3, we prove that standard Galerkin methods with finite element discretization for the approximation of perturbed periodic Schrödinger operators are prone to spectral pollution. Moreover, the eigenvectors associated with spurious eigenvalues can be characterized as surface states. It is possible to circumvent this problem by using augmented finite element spaces constructed with the Wannier functions of the unperturbed periodic Schrödinger operator. We also prove that the supercell method, which consists in imposing periodic boundary conditions on a large simulation domain containing the defect, does not produce spectral pollution. In Chapter 4, we give a priori error estimates for the supercell method. In particular, it is proved that the method converges exponentially with respect to the size of the supercell. The second part of this thesis is devoted to the study of greedy algorithms for the resolution of high-dimensional uncertainty quantification problems. Chapter 5 presents the most classical numerical methods used in the field of uncertainty quantification and an introduction to greedy algorithms.
In Chapter 6, we prove that these algorithms can be applied to the minimization of strongly convex nonlinear energy functionals and that their convergence rate is exponential in the finite-dimensional case. We illustrate these results on obstacle problems with uncertainty via penalized formulations.
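The greedy idea sketched above can be illustrated on the simplest strongly convex energy, a quadratic one. The minimal sketch below (an illustration of the general technique, not the algorithms studied in the thesis) approximates a matrix by a growing sum of rank-one terms, each obtained by alternating least squares on the residual; the residual norm decreases at every greedy step.

```python
import numpy as np

def greedy_rank_one(M, n_terms=4, n_inner=50):
    """Greedy approximation of M by a sum of rank-one terms u v^T.
    Each greedy step minimizes ||R - u v^T||_F^2 over u and v by
    alternating least squares on the current residual R."""
    rng = np.random.default_rng(0)
    approx = np.zeros_like(M)
    errors = []
    for _ in range(n_terms):
        R = M - approx
        if np.linalg.norm(R) < 1e-12:
            break
        u = rng.standard_normal(M.shape[0])
        for _ in range(n_inner):
            v = R.T @ u / (u @ u)  # optimal v for fixed u
            u = R @ v / (v @ v)    # optimal u for fixed v
        approx = approx + np.outer(u, v)
        errors.append(np.linalg.norm(M - approx))
    return approx, errors
```

For a finite-dimensional quadratic energy such as this one, the residual typically decays geometrically, mirroring the exponential convergence rate proved in Chapter 6.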
Grammaires de graphes et langages formels by Trong Hiêu Dinh
1 edition published in 2011 in French and held by 2 WorldCat member libraries worldwide
No abstract available in English
Filtering of thin objects : applications to vascular image analysis by Olena Tankyevych
1 edition published in 2010 in English and held by 2 WorldCat member libraries worldwide
The motivation of this work is the filtering of elongated curvilinear objects in digital images. Their narrowness makes them difficult to detect, and they are prone to disconnections due to noise, image acquisition artefacts and occlusions by other objects. This work focuses on thin object detection and linkage. For these purposes, a hybrid method combining second-order derivative-based and morphological linear filtering is proposed within the framework of scale-space theory. The theory of spatially variant morphological filters is discussed and efficient algorithms are presented. From the application point of view, our work is motivated by the diagnosis, treatment planning and follow-up of vascular diseases. The first application is aimed at the assessment of arteriovenous malformations (AVM) of the cerebral vasculature. The small size and complexity of the vascular structures, coupled with noise, image acquisition artefacts and blood signal heterogeneity, make the analysis of such data a challenging task. This work focuses on cerebral angiographic image enhancement, segmentation and vascular network analysis, with the final purpose of further assisting the study of cerebral AVM. The second medical application concerns the processing of low-dose X-ray images used in interventional radiology therapies observing the insertion of guidewires into the vascular system of patients. Such procedures are used in aneurysm treatment, tumour embolization and other clinical procedures. Due to the low signal-to-noise ratio of such data, guidewire detection is needed for their visualization and reconstruction. Here, we compare the performance of several line detection algorithms, with the purpose of selecting a few of the most promising line detection methods for this medical application.
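As a toy illustration of second-order derivative-based line filtering (a minimal sketch in the spirit of Hessian "vesselness" filters, not the hybrid method of the thesis), one can look at the eigenvalues of the Gaussian-smoothed Hessian: across a thin bright line one eigenvalue is strongly negative, while away from the line both stay near zero.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def line_response(img, sigma=2.0):
    """Toy second-order derivative line detector: smooth with a Gaussian,
    form the Hessian, and keep the magnitude of its most negative
    eigenvalue, which is large across a bright ridge and small elsewhere."""
    Hyy = gaussian_filter(img, sigma, order=(2, 0))  # d2/drow2
    Hxx = gaussian_filter(img, sigma, order=(0, 2))  # d2/dcol2
    Hxy = gaussian_filter(img, sigma, order=(1, 1))
    # closed-form eigenvalues of the 2x2 symmetric Hessian per pixel
    root = np.sqrt((Hxx - Hyy) ** 2 + 4.0 * Hxy ** 2)
    lam_min = 0.5 * (Hxx + Hyy - root)   # most negative on bright lines
    return np.maximum(-lam_min, 0.0)

# synthetic test image: one bright vertical line on a dark background
img = np.zeros((64, 64))
img[:, 32] = 1.0
resp = line_response(img)
```

The scale parameter `sigma` plays the role of the scale-space parameter: thin structures respond most strongly when `sigma` matches their width.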
Étude probabiliste de systèmes de particules en interaction : applications à la simulation moléculaire by Raphaël Roux
1 edition published in 2010 in French and held by 2 WorldCat member libraries worldwide
This work presents some results on stochastically interacting particle systems and probabilistic interpretations of partial differential equations, with applications to molecular dynamics and quantum chemistry. We present a particle method for analyzing the adaptive biasing force process, used in molecular dynamics for the computation of free energy differences. We also study the sensitivity of stochastic dynamics with respect to a parameter, aiming at the computation of forces in the Born-Oppenheimer approximation for determining the quantum ground state of molecules. Finally, we present a numerical scheme based on a particle system for the resolution of scalar conservation laws with an anomalous diffusion term, corresponding to a jump dynamics on the particles.
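A minimal flavour of such simulations is an Euler-Maruyama discretization of a mean-field interacting particle system (the double-well potential and the weak attraction toward the empirical mean below are illustrative choices, not the adaptive biasing force dynamics of the thesis):

```python
import numpy as np

def simulate_particles(n=200, steps=500, dt=1e-3, beta=1.0, seed=0):
    """Euler-Maruyama discretization of a toy interacting particle system:
    dX_i = -V'(X_i) dt - eps * (X_i - mean(X)) dt + sqrt(2/beta) dB_i,
    with the double-well potential V(x) = x^4/4 - x^2/2 and a weak
    mean-field attraction toward the empirical mean."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    for _ in range(steps):
        drift = -(x ** 3 - x) - 0.1 * (x - x.mean())
        x = x + drift * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal(n)
    return x

samples = simulate_particles()
```

Each particle interacts with the others only through the empirical mean, the hallmark of mean-field (McKean-Vlasov type) particle systems.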
Congestion games with player-specific cost functions by Thomas Pradeau
1 edition published in 2014 in English and held by 2 WorldCat member libraries worldwide
We consider congestion games on graphs. In nonatomic games, we are given a set of infinitesimal players. Each player wants to go from one vertex to another by taking a route of minimal cost, the cost of a route depending on the number of players using it. In atomic splittable games, we are given a set of players with a non-negligible demand. Each player wants to ship his demand from one vertex to another by dividing it among different routes. In these games, a Nash equilibrium is reached when every player has chosen a minimal-cost strategy. The existence of a Nash equilibrium is ensured under mild conditions. The main issues are the uniqueness, the computation, the efficiency and the sensitivity of the Nash equilibrium. Many results are known in the specific case where all players are impacted in the same way by the congestion. The goal of this thesis is to generalize these results to the case of player-specific cost functions. We obtain uniqueness results for the equilibrium in nonatomic games. We give two algorithms able to compute a Nash equilibrium in nonatomic games when the cost functions are affine. We find a bound on the price of anarchy for some atomic splittable games, and prove that it is unbounded in general, even when the cost functions are affine. Finally, we establish results on the sensitivity of the equilibrium to the demand in atomic splittable games.
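In the simplest setting, a Wardrop (Nash) equilibrium of a nonatomic congestion game can be computed in closed form. The sketch below handles two parallel routes with affine, common (not player-specific) cost functions: at equilibrium either one route carries all the demand, or both routes have equal cost.

```python
def wardrop_two_routes(a1, b1, a2, b2, demand):
    """Wardrop equilibrium for nonatomic players split between two parallel
    routes with affine costs c_i(x) = a_i * x + b_i (a1 + a2 > 0 assumed):
    either one route carries all the demand, or the route costs are equal."""
    # interior split solving c1(x) = c2(demand - x)
    x = (a2 * demand + b2 - b1) / (a1 + a2)
    x = min(max(x, 0.0), demand)  # clip to the feasible interval [0, demand]
    return x, demand - x

# Pigou-style example: route 1 congestible (c1(x) = x), route 2 constant (c2 = 1)
x1, x2 = wardrop_two_routes(1.0, 0.0, 0.0, 1.0, 1.0)
# at equilibrium everyone takes route 1, whose cost then equals c2 = 1
```

The thesis generalizes this picture: with player-specific cost functions the equilibrium flow need no longer be unique, which is precisely where the uniqueness results above come in.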
Étude des équations des milieux poreux et des modèles de cloques by Ghada Chmaycem
1 edition published in 2014 in French and held by 2 WorldCat member libraries worldwide
In this thesis, we study two completely independent problems. The first focuses on a simple mathematical model of thin-film delamination and blistering; the second is a study of the porous medium equation motivated by seawater intrusion problems. In the first part of this work, we consider a simple one-dimensional variational model describing the delamination of thin films under cooling. We characterize the global minimizers, which correspond to films of three possible types: non-delaminated, partially delaminated (called blisters), or fully delaminated. Two parameters play an important role: the length of the film and the cooling parameter. In the phase plane of those two parameters, we classify all the minimizers. As a consequence of our analysis, we identify explicitly the smallest possible blisters for this model. In the second part, we answer a long-standing open question about the existence of new contractions for porous-medium-type equations. For m > 0, we consider nonnegative solutions U(t,x) of the equation U_t = Δ(U^m). For 0 < m < 2, we present a new family of contractions for this equation in any dimension, which extends the L^1 contraction properties. Our contraction can be seen as the fourth known contraction for this equation. Even in the case m = 1, our approach leads to new results for the standard heat equation. A second work focuses on the same problem but uses a differential approach based on geodesic distances. This original and general method is used to produce families of contractions for nonlinear partial differential equations, of evolution or stationary type. We present various applications of this method, in particular to the porous medium and doubly nonlinear equations.
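To see the porous medium equation in action, here is a minimal explicit finite-difference discretization of U_t = (U^m)_xx in one dimension (an illustrative scheme with an arbitrary initial bump, unrelated to the contraction results above). The compactly supported solution spreads with finite speed while mass is conserved:

```python
import numpy as np

def porous_medium_1d(m=2.0, n=101, steps=1000, dt=2e-5):
    """Explicit finite differences for U_t = (U^m)_xx on [0, 1], with a
    compactly supported initial bump and zero boundary values. The time
    step satisfies the CFL-type bound dt <= dx^2 / (2 m max(U)^(m-1))."""
    dx = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    u = np.maximum(0.0, 1.0 - ((x - 0.5) / 0.1) ** 2)  # initial bump
    for _ in range(steps):
        w = u ** m
        u[1:-1] += dt / dx ** 2 * (w[2:] - 2.0 * w[1:-1] + w[:-2])
    return x, u

x, u = porous_medium_1d()
```

Unlike the heat equation (m = 1), the solution's support stays compact and its boundary moves at finite speed, which the scheme reproduces as long as the CFL bound holds.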
Sûreté temporelle pour les systèmes temps réel multiprocesseurs by Frédéric Fauberteau
1 edition published in 2011 in French and held by 2 WorldCat member libraries worldwide
Hard real-time systems are characterized by sets of tasks for which the deadline, the arrival model (frequency) and the worst-case execution time (WCET) are known. We are interested in the scheduling of these systems on multiprocessor platforms. Guaranteeing that deadlines are met by a scheduling algorithm is one of the major issues in this field. We go further by considering temporal safety, which we characterize through the properties of (i) robustness and (ii) sustainability. Robustness consists in providing an interval on the increases of (i.a) WCET and (i.b) frequency such that deadlines are still met. Sustainability consists in guaranteeing that deadlines remain met when constraints are relaxed: (ii.a) decreased WCET, (ii.b) decreased frequency and (ii.c) increased deadlines. Robustness thus amounts to tolerating the unexpected, while sustainability guarantees that the scheduling algorithm is not subject to anomalies when constraints are relaxed. We consider fixed-priority scheduling, in which every job of a task is scheduled with the same priority. We first study the robustness property for offline scheduling approaches without migration (partitioning), treating tasks with and without resource sharing. We then study the sustainability property of an online scheduling approach with restricted migrations and without resource sharing.
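A classic building block behind such guarantees is response-time analysis for fixed-priority scheduling on a single processor. The sketch below is the standard textbook fixed-point computation, shown only to make the robustness margin concrete: deadlines stay met as long as inflated WCETs keep the response times below the deadlines.

```python
import math

def response_times(tasks):
    """Worst-case response times for fixed-priority preemptive scheduling
    on one processor; tasks = [(C, T)] in decreasing priority order, with
    WCET C, period T and implicit deadline D = T. Returns None for a task
    that misses its deadline. A robustness margin can be probed by
    inflating the C values until some entry turns into None."""
    result = []
    for i, (C, T) in enumerate(tasks):
        r, prev = C, -1
        while r != prev and r <= T:
            prev = r
            # classic fixed-point iteration: own WCET plus interference
            r = C + sum(math.ceil(prev / Tj) * Cj for Cj, Tj in tasks[:i])
        result.append(r if r <= T else None)
    return result
```

On multiprocessor platforms, as the abstract notes, such analyses are not automatically sustainable: relaxing a constraint can trigger scheduling anomalies, which is precisely what the thesis investigates.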
Méthodes de Galerkin stochastiques adaptatives pour la propagation d'incertitudes paramétriques dans les modèles hyperboliques by Julie Tryoen
1 edition published in 2011 in French and held by 2 WorldCat member libraries worldwide
This work is concerned with stochastic Galerkin methods for hyperbolic systems involving uncertain data with known distribution functions parametrized by random variables. We are interested in problems where a shock appears almost surely in finite time. In this case, the solution exhibits discontinuities in both the spatial and the stochastic domains. A finite volume scheme is used for the spatial discretization and a Galerkin projection based on piecewise polynomial approximation is used for the stochastic discretization. A Roe-type solver with an entropy correction is proposed for the Galerkin system, using an original technique to approximate the absolute value of the Roe matrix and an adaptation of the Dubois and Mehlman entropy corrector. Although this method deals with complex situations, it remains costly because a very fine stochastic discretization is needed to represent the solution in the vicinity of discontinuities. This fact calls for adaptive strategies. As discontinuities are localized in space and time, stochastic representations depending on space and time are proposed. This methodology is formulated in a multiresolution context based on the concept of binary trees for the stochastic discretization. The adaptive enrichment and coarsening steps are based on multiresolution analysis criteria. In the multidimensional case, an anisotropic adaptive procedure is proposed. The method is tested on the Euler equations in a shock tube and on the Burgers equation in one and two stochastic dimensions.
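To see why shocks make the solution discontinuous in the stochastic direction too, one can solve a single deterministic realization with a basic first-order Godunov finite volume scheme (a plain sketch, not the stochastic Galerkin solver of the thesis): Riemann data develop a shock travelling at the Rankine-Hugoniot speed, so the solution at a fixed point in space-time jumps as an uncertain initial shock position varies.

```python
import numpy as np

def burgers_godunov(u0, dt, dx, steps):
    """First-order Godunov scheme for u_t + (u^2/2)_x = 0; the boundary
    cells are kept frozen (fine here: constant states at both ends)."""
    def flux(ul, ur):
        f = lambda v: 0.5 * v * v
        if ul > ur:                            # shock: pick side by its speed
            return f(ul) if ul + ur > 0.0 else f(ur)
        if ul > 0.0:                           # rarefaction moving right
            return f(ul)
        if ur < 0.0:                           # rarefaction moving left
            return f(ur)
        return 0.0                             # sonic point inside the fan
    u = u0.astype(float).copy()
    for _ in range(steps):
        F = np.array([flux(u[i], u[i + 1]) for i in range(len(u) - 1)])
        u[1:-1] -= dt / dx * (F[1:] - F[:-1])
    return u

n = 200
dx, dt = 1.0 / n, 0.004                 # CFL number 0.8 for |u| <= 1
x = (np.arange(n) + 0.5) * dx
u = burgers_godunov(np.where(x < 0.25, 1.0, 0.0), dt, dx, steps=100)
# Riemann data (1 left, 0 right): shock speed (1+0)/2, so x ≈ 0.25 + 0.5*0.4 = 0.45
```

Sampling the initial shock position from a random variable and evaluating the solution at a fixed (x, t) would trace out a discontinuous function of the random parameter, which is why adaptive, tree-based stochastic discretizations pay off.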
Numerical methods for homogenization : applications to random media by Ronan Costaouec
1 edition published in 2011 in English and held by 2 WorldCat member libraries worldwide
In this thesis we investigate numerical methods for the homogenization of materials whose structures, at fine scales, are characterized by random heterogeneities. Under appropriate hypotheses, the effective properties of such materials are given by closed formulas. In practice, however, computing these properties is a difficult task because it involves solving partial differential equations with stochastic coefficients that are, in addition, posed on the whole space. In this work, we address this difficulty in two different ways. The standard discretization techniques lead to random approximate effective properties. In Part I, we aim at reducing their variance, using a well-known variance reduction technique that has already been used successfully in other domains. Part II focuses on the case when the material can be seen as a small random perturbation of a periodic material. We then show, both numerically and theoretically, that computing the effective properties is much less costly in this case than in the general case.
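In one dimension the effective coefficient is simply the harmonic mean of the local coefficient, which makes it easy to sketch the kind of variance reduction used in Part I. The example below (an illustrative antithetic-variates construction; the coefficient law is an arbitrary choice) pairs each sample with its reflection, exploiting the monotonicity of the effective coefficient in each cell value:

```python
import numpy as np

def effective_coeff(u):
    """1D homogenization: the effective coefficient of a(x) is the harmonic
    mean of the cell values, here a_k = 1 + u_k with u_k in [0, 1]."""
    return 1.0 / np.mean(1.0 / (1.0 + u))

def mc_estimates(n_cells=50, n_samples=2000, seed=0):
    """Plain Monte Carlo vs. antithetic variates for the random effective
    coefficient. Since the estimator is monotone in every u_k, pairing u
    with its reflection 1 - u yields negatively correlated samples."""
    rng = np.random.default_rng(seed)
    plain = np.empty(n_samples)
    anti = np.empty(n_samples)
    for s in range(n_samples):
        u = rng.random(n_cells)
        plain[s] = effective_coeff(u)
        anti[s] = 0.5 * (effective_coeff(u) + effective_coeff(1.0 - u))
    return plain, anti

plain, anti = mc_estimates()
```

Both estimators target the same expectation, but the antithetic version has a visibly smaller spread for the same number of coefficient realizations.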
Aspects algorithmiques de la comparaison d'éléments biologiques by Florian Sikora
1 edition published in 2011 in French and held by 2 WorldCat member libraries worldwide
To investigate the complex links between genotype and phenotype, one can study the relations between different biological entities. Together they form a biological network, represented by a graph. In this thesis, we are interested in the occurrence of a motif (a multiset of colors) in a vertex-colored graph representing a biological network. Such motifs usually correspond to a set of elements performing the same function, which may have been preserved through evolution. We pursue the algorithmic study of this problem, establishing hard instances and studying ways to cope with the hardness (parameterized complexity, preprocessing, approximation...). We also develop a plugin for Cytoscape, in order to solve this problem efficiently and to test it on real data. We are also interested in different problems related to comparative genomics. The scientific method is the same: studying problems arising from biology, identifying the hard instances, and giving solutions to cope with the hardness (or proving that such solutions are unlikely to exist).
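The motif-occurrence problem described above can be stated very compactly: given a vertex-colored graph and a multiset of colors, decide whether some connected set of vertices realizes exactly that multiset. A brute-force sketch (exponential in the motif size, as expected for an NP-hard problem, but fine for small networks; the data below are illustrative):

```python
from itertools import combinations
from collections import Counter

def has_motif(edges, colors, motif):
    """Brute-force check for the GRAPH MOTIF problem: is there a connected
    set of vertices whose multiset of colors equals `motif`?"""
    adj = {v: set() for v in colors}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    k = sum(motif.values())
    for subset in combinations(colors, k):
        if Counter(colors[v] for v in subset) != motif:
            continue
        # connectivity: flood fill restricted to the candidate subset
        inside = set(subset)
        seen, todo = {subset[0]}, [subset[0]]
        while todo:
            w = todo.pop()
            for nb in adj[w] & inside:
                if nb not in seen:
                    seen.add(nb)
                    todo.append(nb)
        if seen == inside:
            return True
    return False
```

Parameterized algorithms of the kind studied in the thesis replace this enumeration over all size-k subsets with techniques such as color coding, keeping the running time exponential only in the motif size.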
Robust, refined and selective matching for accurate camera pose estimation by Zhe Liu
1 edition published in 2015 in English and held by 2 WorldCat member libraries worldwide
With the recent progress in photogrammetry, it is now possible to automatically reconstruct a model of a 3D scene from pictures or videos. The model is reconstructed in several stages. First, salient features (often points, but more generally regions) are detected in each image. Second, features that are common to image pairs are matched. Third, matched features are used to estimate the relative pose (position and orientation) of images. The global poses are then computed, as well as the 3D locations of these features (structure from motion). Finally, a dense 3D model can be estimated. The detection of salient features, their matching, and the estimation of camera poses play a crucial role in the reconstruction process. Inaccuracies or errors in these stages have a major impact on the accuracy and robustness of the reconstruction of the entire scene. In this thesis, we propose better methods for feature matching and feature selection, which improve the robustness and accuracy of existing methods for camera position estimation. We first introduce a photometric pairwise constraint for feature matches (VLD), which is more reliable than geometric constraints. We then propose a semi-local matching approach (KVLD) using this photometric match constraint. We show that our method is very robust, not only for rigid scenes but also for non-rigid and repetitive scenes, which can improve the robustness and accuracy of pose estimation methods such as those based on RANSAC. To improve the accuracy of camera position estimation, we study the accuracy of reconstruction and pose estimation as a function of the number and quality of matches, and experimentally derive a "quantity vs. quality" relation. Using this relation, we propose a method to select a subset of good matches to produce highly accurate pose estimations. We also aim at refining match positions. For this, we propose an improvement of least-squares matching (LSM) using an irregular sampling grid and image scale exploration.
We show that match refinement and match selection independently improve the reconstruction results, and that when combined the results are further improved.
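Since the pose estimation methods mentioned above build on RANSAC, a minimal RANSAC example may help fix ideas. The sketch below robustly fits a 2D line from points contaminated by gross outliers (a toy stand-in for pose estimation from noisy feature matches; all data are synthetic):

```python
import numpy as np

def ransac_line(pts, n_iter=200, tol=0.05, seed=0):
    """Minimal RANSAC for robust 2D line fitting, a*x + b*y + c = 0 with
    a^2 + b^2 = 1: repeatedly fit a line through two random points and keep
    the model supported by the most inliers."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        norm = np.hypot(d[0], d[1])
        if norm < 1e-12:
            continue
        a, b = -d[1] / norm, d[0] / norm           # unit normal of the line
        c = -(a * pts[i, 0] + b * pts[i, 1])
        dist = np.abs(pts @ np.array([a, b]) + c)  # point-to-line distances
        n_inliers = int((dist < tol).sum())
        if n_inliers > best_inliers:
            best, best_inliers = (a, b, c), n_inliers
    return best, best_inliers

# 80 noisy points on y = 2x + 1 plus 20 gross outliers
rng = np.random.default_rng(1)
xs = rng.uniform(-1.0, 1.0, 80)
line_pts = np.column_stack([xs, 2.0 * xs + 1.0 + rng.normal(0.0, 0.01, 80)])
pts = np.vstack([line_pts, rng.uniform(-5.0, 5.0, (20, 2))])
(a, b, c), n_in = ransac_line(pts)
```

The thesis's match selection can be read in this light: feeding RANSAC fewer but better matches shrinks the chance that a contaminated minimal sample wins, and so tightens the estimated model.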
Lexique-grammaire et Unitex : analyse sur deux corpus comparables de médecine thermale : quels apports pour une description terminologique bilingue de qualité ? by
Rosa Cetro(
)
1 edition published in 2013 in French and held by 2 WorldCat member libraries worldwide
Terminology is the science concerned with the study of terms, those lexical units that possess a specialized meaning within a scientific or technical context. Established as a science in the first half of the 20th century, terminology is an interdisciplinary field drawing on contributions from linguistics, logic, and computer science. The latter in particular has allowed significant developments in terminology. Lexicon-grammar is an empirical method of linguistic description inspired by the works of Zellig S. Harris and founded by the French linguist Maurice Gross at the end of the 1960s. Linguistic description has been carried out in parallel with the development of software tools able to formalise and exploit linguistic data, including Unitex (Paumier, 2002). Both lexicon-grammar and Unitex have an interesting, largely unexploited potential for further developments in terminology. In this work, we assess the contributions of lexicon-grammar and Unitex to a high-quality bilingual terminological description. After defining quality criteria for such a terminological description, we carry out our evaluation on two comparable corpora on thermal medicine, in French and in Italian.
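One elementary building block of corpus-based terminology work is extracting recurrent multi-word candidates from a corpus. As a toy illustration only (this is not how Unitex or lexicon-grammar tables operate), a frequency-based bigram extractor might look like:

```python
from collections import Counter

def candidate_terms(text, min_count=2):
    """Return adjacent word pairs occurring at least `min_count` times,
    a crude stand-in for pattern-based term-candidate extraction."""
    words = text.lower().split()
    bigrams = Counter(zip(words, words[1:]))
    return {" ".join(pair): n for pair, n in bigrams.items() if n >= min_count}
```

Real terminology pipelines add linguistic filters (part-of-speech patterns, lemmatization, domain dictionaries), which is precisely where lexicon-grammar resources come into play.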
Processus matriciels : simulation et modélisation de la dépendance en finance by
Abdelkoddousse Ahdida(
)
1 edition published in 2011 in English and held by 2 WorldCat member libraries worldwide
The first part of this thesis is devoted to the simulation of stochastic differential equations defined on the cone of positive semidefinite symmetric matrices. We present new high-order discretization schemes for this type of stochastic differential equation and study their weak convergence. We pay particular attention to the Wishart process, which is often used in financial modelling. For this process we propose both a scheme that is exact in law and high-order discretizations. To date, this method is the only one that can be used whatever the parameters involved in the definition of these models. We also show how the algorithmic complexity of these methods can be reduced, and we check the theoretical results on numerical implementations. In the second part, we consider processes with values in the space of correlation matrices. We propose a new class of stochastic differential equations defined on this space. This model can be seen as an extension of the Wright-Fisher model (or Jacobi process) to the space of correlation matrices. We study the weak and strong existence of solutions, and then make explicit the links with Wishart processes and multi-allelic Wright-Fisher processes. We prove the ergodicity of the model and give Girsanov representations that may be used in finance. With practical use in mind, we describe two high-order discretization schemes. This part concludes with numerical results illustrating the convergence behaviour of these schemes. The last part of this thesis is devoted to the use of these processes for multidimensional modelling questions in finance. An important modelling question, still difficult to handle today, is the identification of a type of model allowing the calibration of both the options market on an index and on its components. We propose two types of models here: one with local correlation and the other with stochastic correlation. In both cases, we explain which procedure should be followed to obtain a good calibration to market data.
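The simplest baseline against which such high-order schemes are measured is a naive Euler discretization with projection back onto the positive semidefinite cone. A sketch of that baseline, for a simplified Wishart-type dynamics with drift alpha·I (the drift form, step sizes, and function names are illustrative; this is not the thesis's exact or high-order scheme):

```python
import numpy as np

def wishart_euler(x0, alpha, n_steps, dt, seed=0):
    """Naive Euler scheme for a simplified Wishart-type SDE
    dX = alpha*I dt + sqrt(X) dW + dW^T sqrt(X),
    projected back onto the positive semidefinite cone at each step."""
    rng = np.random.default_rng(seed)
    d = x0.shape[0]
    x = x0.copy()
    for _ in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt), size=(d, d))
        # matrix square root via eigendecomposition (x is symmetric PSD)
        vals, vecs = np.linalg.eigh(x)
        sqrt_x = (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.T
        x = x + alpha * dt * np.eye(d) + sqrt_x @ dw + dw.T @ sqrt_x
        # projection: symmetrize and clip negative eigenvalues
        vals, vecs = np.linalg.eigh((x + x.T) / 2)
        x = (vecs * np.clip(vals, 0, None)) @ vecs.T
    return x
```

The eigenvalue clipping is exactly the kind of ad hoc fix that exact-in-law and high-order schemes avoid: they stay on the cone by construction, for all parameter values.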
Propriétés syntaxico-sémantiques des verbes à complément en e en coréen by
SoYun Kim(
)
1 edition published in 2010 in French and held by 2 WorldCat member libraries worldwide
This study is a general classification of verbal constructions and a description of the syntactic-semantic properties of Korean verbs taking an essential complement introduced by the postposition 'e'. Its theoretical model is the lexicon-grammar, developed by M. Gross (1975) on the basis of the principles of Z. S. Harris (1968). In this study, we examined the syntactic-semantic properties of the 3,000 verbs requiring this complement and grouped them into 8 classes. These properties will constitute syntactic information useful for the analysis of sentence structure and for the automatic processing of Korean texts.
Numerical methods for dynamic contact and fracture problems by
David Doyen(
)
1 edition published in 2010 in English and held by 2 WorldCat member libraries worldwide
The present work deals with the numerical solution of dynamic contact and fracture problems. The contact problem is a Signorini problem with or without Coulomb friction. The fracture problem uses a cohesive zone model with a prescribed crack path. These problems are characterized by a non-regular boundary condition and can be formulated with evolutionary variational inequalities or differential inclusions. For the numerical solution, we combine, as is usual in solid dynamics, a finite element discretization in space with time-integration schemes. For the contact problem, we begin by comparing the main methods proposed in the literature. We then focus on the so-called modified mass method recently introduced by H. Khenous, P. Laborde and Y. Renard, for which we propose a semi-explicit variant. In addition, we prove a convergence result of the space semi-discrete solutions to a continuous solution in the frictionless viscoelastic case. We also analyze the space semi-discrete and fully discrete problems in the Coulomb friction case. For the dynamic fracture problem, using a fully explicit scheme is impossible or not robust enough. Therefore, we propose time-integration schemes where the boundary condition is treated in an implicit way. Finally, we present and analyze augmented Lagrangian methods for static fracture problems.
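To see why the treatment of the contact condition dominates the design of such schemes, consider the simplest possible case: a point mass above a rigid floor, with the Signorini non-penetration constraint enforced by projection at each explicit step. This toy sketch is not the modified mass method, only an illustration of constraint handling inside a time integrator (all parameters are illustrative):

```python
def simulate_contact(g, x0, v0, dt, n_steps, restitution=0.0):
    """Explicit time stepping for a point mass above a rigid floor at x = 0.
    The Signorini condition x >= 0 is enforced by projection: on contact,
    the position is clipped and the normal velocity reset (inelastic by
    default, restitution = 0)."""
    x, v = x0, v0
    traj = [x]
    for _ in range(n_steps):
        v -= g * dt          # explicit velocity update (gravity only)
        x += v * dt          # explicit position update
        if x < 0.0:          # non-penetration constraint violated
            x = 0.0
            v = -restitution * v
        traj.append(x)
    return traj
```

Such naive projection produces spurious oscillations of the contact force in finite element models; reducing or removing the inertia of the contact nodes is precisely the idea behind the modified mass method.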
Quelques contributions à la sélection de variables et aux tests nonparamétriques by
Laëtitia Comminges(
)
1 edition published in 2012 in French and held by 2 WorldCat member libraries worldwide
Real-world data are often extremely high-dimensional, severely underconstrained, and interspersed with a large number of irrelevant or redundant features. Relevant variable selection is a compelling approach for addressing statistical issues in the scenario of high-dimensional, noisy data with small sample size. First, we address the issue of variable selection in the regression model when the number of variables is very large. The main focus is on the situation where the number of relevant variables is much smaller than the ambient dimension. Without assuming any parametric form of the underlying regression function, we obtain tight conditions making it possible to consistently estimate the set of relevant variables. Secondly, we consider the problem of testing a particular type of composite null hypothesis under a nonparametric multivariate regression model. For a given quadratic functional $Q$, the null hypothesis states that the regression function $f$ satisfies the constraint $Q[f] = 0$, while the alternative corresponds to the functions for which $Q[f]$ is bounded away from zero. We provide minimax rates of testing and the exact separation constants, along with a sharp-optimal testing procedure, for diagonal and nonnegative quadratic functionals. This can be applied to testing the relevance of a variable. Studying minimax rates for quadratic functionals that are neither positive nor negative reveals two different regimes: "regular" and "irregular". We apply this to the issue of testing the equality of the norms of two functions observed in noisy environments.
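A crude way to convey the variable-selection problem: screen each coordinate by how much conditioning on it (here via naive equal-width binning) reduces the variance of the response. This toy procedure is only a stand-in for the consistent selection conditions established in the thesis; the bin count and threshold are arbitrary illustrative choices:

```python
import statistics

def relevant_variables(xs, ys, n_bins=4, threshold=0.2):
    """Keep coordinate j if binning on it explains at least `threshold`
    of Var(Y), i.e. the average within-bin variance drops enough.
    A toy nonparametric screening rule, not the thesis's procedure."""
    n, d = len(xs), len(xs[0])
    var_y = statistics.pvariance(ys)
    selected = []
    for j in range(d):
        col = [x[j] for x in xs]
        lo, hi = min(col), max(col)
        width = (hi - lo) / n_bins or 1.0
        bins = {}
        for xv, yv in zip(col, ys):
            k = min(int((xv - lo) / width), n_bins - 1)
            bins.setdefault(k, []).append(yv)
        # average within-bin variance (singleton bins contribute zero)
        within = sum(len(b) * statistics.pvariance(b)
                     for b in bins.values() if len(b) > 1) / n
        if var_y > 0 and 1 - within / var_y >= threshold:
            selected.append(j)
    return selected
```

Coordinate-wise screening of this kind breaks down when relevant variables act only jointly, which is one reason the consistency conditions in the high-dimensional regime are delicate.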
Autour des automates : génération aléatoire et contribution à quelques extensions by
Vincent Carnino(
)
1 edition published in 2014 in French and held by 2 WorldCat member libraries worldwide
The subject of this thesis is divided into three parts: two of them are about extensions of the classical model of automata theory, whereas the third one is about a more concrete aspect, which consists in randomly generating automata with specific properties. We first give an extension of the universal automaton on finite words to infinite words. To achieve this, we define a normal form in order to take into account the specific acceptance mode of Büchi automata, which recognize omega-languages. We then define two kinds of omega-factorizations, a "regular" one and a "pure" one, which are both extensions of the classical concept of factorization of a language. This lets us define the universal automaton of an omega-language. We prove that it has all the required properties: it is the smallest Büchi automaton, in normal form, that recognizes the omega-language and has the universal property. We also give an effective way to compute the "regular" omega-factorizations of a language using a prophetic automaton recognizing the language. In the second part, we deal with two-way automata weighted over a semiring. First, we give a slightly different version of the computation of a weighted one-way automaton from a weighted two-way automaton and we prove that it preserves non-ambiguity but not determinism. We prove that non-ambiguous weighted two-way automata are equivalent to deterministic weighted one-way automata. We then focus on tropical (min-plus) semirings. We prove that two-way automata over the tropical semiring of natural numbers are equivalent to one-way automata over the same semiring. We also prove that the behavior of two-way automata over the tropical semiring of integers is not always defined, that this property is decidable, whereas it is undecidable whether or not there exists a word on which the behavior is defined. In the last section, we propose algorithms to randomly generate acyclic, accessible, deterministic automata and minimal acyclic automata with an almost uniform distribution, using Markov chains. We prove the reliability of both algorithms and explain how to adapt them to fit constraints on the set of final states.
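For the random-generation part, a much simpler constructive sampler than the Markov-chain approach can at least illustrate the objects involved: acyclic, accessible, deterministic (partial) automata. The sketch below fixes a topological order on states and forces accessibility; unlike the thesis's algorithms, it makes no (almost-)uniformity guarantee, and all names are illustrative:

```python
import random

def random_acyclic_dfa(n_states, alphabet, seed=0):
    """Build a random acyclic accessible partial DFA. States 0..n-1 are in
    topological order, every transition goes strictly forward, and each
    state except 0 is forced to have an incoming edge from a smaller state
    (which makes it reachable from 0 by induction)."""
    rng = random.Random(seed)
    delta = {}
    for s in range(n_states - 1):
        for a in alphabet:
            # a transition may be missing (partial DFA) or jump forward
            target = rng.randrange(s + 1, n_states + 1)
            if target < n_states:
                delta[(s, a)] = target
    # force accessibility: give each state an incoming edge if it lacks one
    for t in range(1, n_states):
        if not any(v == t for v in delta.values()):
            delta[(t - 1, rng.choice(alphabet))] = t
    return delta
```

Constructive samplers like this one bias the distribution toward certain shapes; the point of the Markov-chain approach is to converge to an almost uniform distribution over the target class.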
Méthode de couplage conservative entre un fluide compressible non-visqueux et une structure tridimensionnelle déformable pouvant se fragmenter by
Maria Adela Puscas(
)
1 edition published in 2014 in English and held by 2 WorldCat member libraries worldwide
We develop a coupling method between an inviscid compressible fluid and a mobile three-dimensional structure. We first consider a rigid structure, then a deformable one, and finally one undergoing fragmentation. The coupling relies on a conservative immersed-boundary method combined with a Finite Volume method for the fluid and a Discrete Element method for the structure. The coupling method ensures the conservation of the mass, momentum, and total energy of the system. It also exhibits consistency properties, such as the absence of artificial roughness effects on a rigid wall. The coupling method is explicit in time for a rigid structure and semi-implicit for a deformable structure. The semi-implicit time scheme prevents tangential deformations of the structure from being transmitted to the fluid, and the iterative resolution enjoys geometric convergence under a non-restrictive CFL condition on the time step. We present numerical results demonstrating the robustness of the method for a rigid sphere set in motion by a shock wave, a clamped beam bent by a shock wave, and a cylinder fragmenting under an internal explosion.
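The conservation property at the heart of such a coupling can be illustrated on the fluid side alone: a finite-volume scheme written in flux form conserves the discrete total mass exactly, because each interior flux enters two neighbouring cells with opposite signs. A one-dimensional upwind sketch (illustrative only, not the coupling method itself):

```python
def upwind_step(u, c, dt, dx):
    """One finite-volume upwind step for du/dt + c du/dx = 0 (c > 0),
    with periodic boundaries. Written in flux form, so the total mass
    sum(u)*dx is conserved up to rounding: fluxes cancel in pairs."""
    n = len(u)
    flux = [c * u[i - 1] for i in range(n)]   # flux through left face of cell i
    return [u[i] - dt / dx * (flux[(i + 1) % n] - flux[i]) for i in range(n)]
```

The conservative immersed-boundary treatment extends this idea to the fluid-structure interface: whatever leaves the fluid cells through the wall is exactly what the structure receives, so mass, momentum, and energy balance at the discrete level.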
The structure of orders in the pushdown hierarchy by
Laurent Braud(
)
1 edition published in 2010 in English and held by 2 WorldCat member libraries worldwide
This thesis studies structures whose monadic second-order theory is decidable, and in particular the pushdown hierarchy. The hierarchy can be defined, at level $n$, as the graphs of $n$-fold nested pushdown automata; an external definition, by graph transformations, is also available. We focus on the example of ordinals. We show that the ordinals smaller than $\epsilon_0$ are in the hierarchy, as well as graphs carrying more information, which we call "covering graphs". We then show the converse: all ordinals in the hierarchy are smaller than $\epsilon_0$. This result uses the fact that the orders at a given level are in fact isomorphic to the structures of the leaves of deterministic trees, in the lexicographic order, at the same level. More generally, we obtain a characterization of the scattered linear orders in the hierarchy. Thirdly, we narrow our attention to orders of type $\omega$ (infinite words) to show that the words of level 2 are the morphic words, which leads us to a new extension at level 3.
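Ordinals below $\epsilon_0$ admit finite tree representations (Cantor normal form), which is what makes them amenable to automaton-based constructions. As an illustration only (an ad hoc encoding, not the thesis's construction), such ordinals can be encoded as nested lists and compared recursively:

```python
def cmp_ord(a, b):
    """Compare two ordinals below epsilon_0 in Cantor normal form.
    An ordinal is a list of (exponent, coefficient) pairs with strictly
    decreasing exponents; each exponent is again such a list ([] is 0).
    Returns -1, 0 or 1. Finite trees suffice for everything below epsilon_0."""
    for (ea, ca), (eb, cb) in zip(a, b):
        c = cmp_ord(ea, eb)          # compare leading exponents first
        if c != 0:
            return c
        if ca != cb:                 # then coefficients
            return -1 if ca < cb else 1
    return (len(a) > len(b)) - (len(a) < len(b))
```

For example, 0 is `[]`, 1 is `[([], 1)]`, and omega is `[([([], 1)], 1)]`; the recursion terminates because the nesting depth is finite for any ordinal below $\epsilon_0$.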
Contribution à l'étude du trafic routier sur réseaux à l'aide des équations d'Hamilton-Jacobi by
Guillaume Costeseque(
)
1 edition published in 2014 in French and held by 2 WorldCat member libraries worldwide
This work focuses on the modeling and simulation of traffic flows on a network. The modeling of road traffic on a homogeneous section has its roots in the middle of the 20th century and has generated a substantial literature since then. Taking into account the discontinuities of the network, such as junctions, has attracted the attention of the scientific community more recently. Yet these discontinuities are the major sources of traffic congestion, recurring or not, which degrades the level of service of the road infrastructure. This work therefore aims to provide a unique perspective on this issue, while focusing on scale problems and more precisely on the microscopic-macroscopic passage in existing models. The first part of this thesis is devoted to the relationship between microscopic car-following models and macroscopic continuous flow models. The asymptotic passage is based on a homogenization technique for Hamilton-Jacobi equations. In the second part, we focus on the modeling and simulation of vehicular traffic flow through a junction. The considered macroscopic model is also built on Hamilton-Jacobi equations. Finally, the third part focuses on finding analytical or semi-analytical solutions, through representation formulas for solving Hamilton-Jacobi equations under adequate assumptions. In this thesis, we are also interested in a generic class of second-order macroscopic traffic flow models, the so-called GSOM models.
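The "representation formulas" mentioned in the abstract are of Hopf-Lax type. For the model equation u_t + (u_x)^2/2 = 0 (a stand-in convex Hamiltonian chosen for illustration, not a traffic-calibrated one), the Hopf-Lax formula gives the viscosity solution as an infimum over initial positions, which can be evaluated by brute force on a grid:

```python
def hopf_lax(u0, xs, x, t):
    """Hopf-Lax representation formula for u_t + (u_x)^2/2 = 0:
        u(x, t) = min over y of [ u0(y) + (x - y)^2 / (2 t) ],
    where (x - y)^2 / (2 t) is t * L((x - y) / t) for the Lagrangian
    L(q) = q^2 / 2, the Legendre transform of H(p) = p^2 / 2.
    Evaluated by brute force over the grid xs (semi-analytical solver)."""
    return min(u0(y) + (x - y) ** 2 / (2 * t) for y in xs)
```

In traffic applications the same structure applies to the Moskowitz (cumulative vehicle count) function, with a concave fundamental diagram playing the role of the Hamiltonian; the formula then yields solutions without time stepping.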
Related Identities
 Université Paris-Est (2007-2015) Degree grantor
 Laboratoire d'informatique de l'Institut Gaspard Monge
 Centre d'enseignement et de recherche en mathématiques et calcul scientifique (Champs-sur-Marne, Seine-et-Marne)
 Laboratoire Images, Signaux et Systèmes Intelligents (Créteil)
 Laboratoire électronique, systèmes de communication et microsystèmes
 Laboratoire d'Analyse et de Mathématiques Appliquées
 Ern, Alexandre (1967....). Opponent Thesis advisor
 Laporte, Eric (1962....). Opponent Thesis advisor
 Siarry, Patrick (1952....). Opponent Thesis advisor
 Bourouina, Tarik Opponent Thesis advisor
Alternative Names
École doctorale 532
École doctorale MSTIC
ED 532
ED532
Mathématiques, Sciences et Technologies de l'Information et de la Communication
MSTIC
Université Paris-Est (Champs-sur-Marne). École doctorale Mathématiques, Sciences, Technologie de l'Information et de la Communication
Languages