École doctorale de mathématiques et informatique (Talence, Gironde)
Overview
Works:  387 works in 388 publications in 2 languages and 543 library holdings 

Roles:  Editor, Degree grantor
Classifications:  QA7, 510 
Most widely held works by École doctorale de mathématiques et informatique (Talence, Gironde)
Leçons de mathématiques d'aujourd'hui (Book)
1 edition published in 2007 in French and held by 78 WorldCat member libraries worldwide
Leçons de mathématiques d'aujourd'hui (Book)
1 edition published in 2010 in French and held by 52 WorldCat member libraries worldwide
Leçons de mathématiques d'aujourd'hui (Book)
1 edition published in 2012 in French and held by 26 WorldCat member libraries worldwide
Ubiquité de la formule de Riemann-Hurwitz by Alexis Michel (Book)
1 edition published in 1992 in French and held by 2 WorldCat member libraries worldwide
We give an algebraic method for studying translation formulas of Riemann-Hurwitz type for arithmetic invariants. The Kuz'min-Kida formula for the lambda invariant, the Deuring-Shafarevich formula for the Hasse-Witt invariant, and Wingberg's formula for the Selmer group fall within this framework. We unify their proofs and obtain results in other situations. We extend this approach to a non-Galois case
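For context, the classical Riemann-Hurwitz formula that these arithmetic analogues transpose reads, for a degree-$N$ covering of compact Riemann surfaces $Y \to X$ with ramification indices $e_P$:

```latex
% Classical Riemann-Hurwitz formula (tame ramification):
2g_Y - 2 \;=\; N\,(2g_X - 2) \;+\; \sum_{P \in Y} (e_P - 1)
```

Kida's formula is its Iwasawa-theoretic analogue, with the genus replaced by the $\lambda^-$ invariant of a $\mathbb{Z}_p$-extension and the sum running over suitably ramified primes.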
Simulation numérique d'un écoulement diphasique multicomposant en milieu poreux by Nathalie Laquerie (Book)
2 editions published in 1997 in French and held by 2 WorldCat member libraries worldwide
The aim of this thesis is to simulate, by water injection, the flushing of an aquifer polluted by a hydrocarbon. The emphasis is on the modelling, in which the exchanges between phases are taken into account explicitly, without assuming local equilibrium. This model led us to study a convection-diffusion-reaction system in which the characteristic times governing the various mechanisms involved are very different. We first carried out the one-dimensional study, in order to identify the main difficulties: nonlinearity, degeneracy, and the hyperbolic character of the equations. To treat these three phenomena as precisely as possible, a scheme based on the fractional-step method is set up. To solve the convection part, we used the Roe and Harten schemes, then a scheme based on the MUSCL method introduced by Van Leer. The diffusion part is treated semi-implicitly by a centered finite-difference scheme. The reaction part reduces to the study of a system of ordinary differential equations, which is treated semi-implicitly. In these last two parts, the nonlinearities are solved with Newton's method. The proposed scheme is then extended to two dimensions. Some numerical tests are presented to demonstrate the feasibility of the method
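The fractional-step strategy described above can be sketched in one dimension. This is an illustrative simplification, not the thesis's scheme: first-order upwind convection stands in for Roe/Harten/MUSCL, diffusion is explicit rather than semi-implicit, the reaction is a linear decay, and all parameter values are made up:

```python
import numpy as np

def step_convection(u, a, dx, dt):
    # First-order upwind step for u_t + a u_x = 0 (a > 0), periodic domain.
    return u - a * dt / dx * (u - np.roll(u, 1))

def step_diffusion(u, nu, dx, dt):
    # Explicit centered finite differences for u_t = nu u_xx.
    return u + nu * dt / dx**2 * (np.roll(u, -1) - 2 * u + np.roll(u, 1))

def step_reaction(u, k, dt):
    # Implicit (backward Euler) step for the linear decay u_t = -k u.
    return u / (1 + k * dt)

def fractional_step(u, a, nu, k, dx, dt):
    # One splitting cycle: convection, then diffusion, then reaction,
    # each solved with a method adapted to its own character.
    u = step_convection(u, a, dx, dt)
    u = step_diffusion(u, nu, dx, dt)
    return step_reaction(u, k, dt)

nx = 200
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)       # initial pollutant pulse
dx = 1.0 / nx
dt = 0.5 * dx                              # CFL-limited time step
for _ in range(100):
    u = fractional_step(u, a=1.0, nu=1e-4, k=0.5, dx=dx, dt=dt)
```

Each sub-step is monotone under the chosen time step, so the splitting keeps the solution bounded while the pulse is advected, diffused, and decayed.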
Conception et analyse de schémas d'ordre très élevé distribuant le résidu : application à la mécanique des fluides by Adam Larat
1 edition published in 2009 in French and held by 2 WorldCat member libraries worldwide
Numerical simulations are nowadays a major tool in aerodynamic design in the aeronautic, automotive, naval and other industries. One of the main challenges in pushing further the limits of simulation codes is to increase their accuracy within a fixed set of resources (computational power and/or time). Two possible approaches to this issue are either to construct discretizations yielding, on a given mesh, very high order accurate solutions, or to construct compact, massively parallelizable schemes so as to minimize the computational time by means of a high-performance parallel implementation. In this thesis, we try to combine both approaches by investigating the construction and implementation of very high order Residual Distribution Schemes (RDS) with the most compact stencil possible. The manuscript starts with a review of the mathematical theory of hyperbolic Conservation Laws (CLs). The aim of this initial part is to highlight the properties of the analytical solutions we are trying to approximate, in order to link these properties with those of the sought discrete solutions. Next, we describe the three main steps toward the construction of a very high order RDS: (i) the definition of higher order polynomial representations of the solution over polygons and polyhedra; (ii) the design of low order compact conservative RD schemes consistent with a given (high degree) polynomial representation, among which particular emphasis is put on the simplest, given by a generalization of the Lax-Friedrichs (\LxF) scheme; (iii) the design of a positivity-preserving nonlinear transformation, mapping first-order linear schemes onto nonlinear very high order schemes. In the manuscript, we show formally that the schemes obtained following this procedure are consistent with the initial CL, that they are stable in the $L^{\infty}$ norm, and that they have the proper truncation error.
Even though all the theoretical developments are carried out for scalar CLs, remarks on the extension to systems are given whenever possible. Unfortunately, when employing the first-order \LxF scheme as a basis for the construction of the nonlinear discretization, the final nonlinear algebraic equation is not well-posed in general. In particular, for smoothly varying solutions one observes the appearance of high-frequency spurious modes. In order to suppress these modes, a streamline dissipation term is added to the scheme. The analytical implications of this modification, as well as its practical computation, are thoroughly studied. Lastly, we focus on a correct discretization of the boundary conditions for the very high order RDS proposed. The theory is then extensively verified on a variety of scalar two-dimensional test cases. Both triangular and hybrid triangular-quadrilateral meshes are used to show the generality of the approach. The results obtained in these tests confirm all the theoretical expectations in terms of accuracy and stability, and underline some advantages of the hybrid grids. Next, we consider solutions of the two-dimensional Euler equations of gas dynamics. The results obtained are quite satisfactory, and yet we are not able to obtain the desired convergence rates on problems involving solid wall boundaries. Further investigation of this problem is under way. We then discuss the parallel implementation of the schemes, and analyze and illustrate the performance of this implementation on large three-dimensional problems. Due to the preliminary character and the complexity of these three-dimensional problems, a rather qualitative discussion is made for these test cases: the overall behavior seems to be the correct one, but more work is necessary to assess the properties of the schemes in three dimensions
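The first-order \LxF residual distribution idea can be sketched in one space dimension: each cell's flux residual is split equally between its two nodes, plus a dissipation term proportional to the deviation from the cell average. This is a simplified sketch under stated assumptions (the thesis works on triangular and hybrid meshes with high-order polynomial representations):

```python
import numpy as np

def rd_lxf_step(u, flux, dflux, dx, dt):
    # One explicit step of a first-order Lax-Friedrichs residual
    # distribution scheme for u_t + f(u)_x = 0 on a periodic 1D mesh.
    f = flux(u)
    uR = np.roll(u, -1)                       # right neighbour of each node
    fR = np.roll(f, -1)
    alpha = np.max(np.abs(dflux(u)))          # dissipation coeff >= max |f'|
    phi = fR - f                              # residual of cell [i, i+1]
    to_left = 0.5 * phi + 0.5 * alpha * (u - uR)   # share sent to node i
    to_right = 0.5 * phi + 0.5 * alpha * (uR - u)  # share sent to node i+1
    # each node gathers the shares of its two neighbouring cells
    total = to_left + np.roll(to_right, 1)
    return u - dt / dx * total

nx = 400
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)   # square pulse
mass0 = u.sum()
dx, dt = 1.0 / nx, 0.5 / nx
for _ in range(200):
    # linear advection f(u) = u, so f'(u) = 1
    u = rd_lxf_step(u, lambda v: v, lambda v: np.ones_like(v), dx, dt)
```

Because the cell residuals telescope, the scheme is conservative, and with this dissipation it stays monotone under the CFL condition; for linear advection the split reduces exactly to the upwind scheme.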
Algorithmique distribuée asynchrone avec une majorité de pannes by David Bonnin
1 edition published in 2015 in French and held by 1 WorldCat member library worldwide
In distributed computing, the asynchronous message-passing model with crashes is well known and considered in many articles, because of its realism and because it is simple enough to be used and complex enough to represent many real problems. In this model, n processes communicate by exchanging messages, but without any bound on communication delays, i.e. a message may take an arbitrarily long time to reach its destination. Moreover, up to f among the n processes may crash, and thus definitely stop working. Those crashes are undetectable because of the system's asynchronism, and they restrict the potential results in this model. In many cases, known results in those systems must verify the property of a strict minority of crashes. For example, this applies to the implementation of atomic registers and the solving of renaming. This barrier of a majority of crashes, explained by the CAP theorem, restricts numerous problems, and the asynchronous message-passing model with a majority of crashes is thus not well studied and rather unknown. Hence, studying what can be done in this case of a majority of crashes is interesting. This thesis analyses this model through two main problems. The first part studies the implementation of shared objects, similar to usual registers, by defining x-colored register banks and α-registers. The second part extends the renaming problem into k-redundant renaming, for both one-shot and long-lived versions, and similarly extends the shared objects called splitters into k-splitters
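The "strict minority of crashes" barrier mentioned above comes down to a quorum-intersection argument: a process can only safely wait for replies from n - f processes, and register implementations need any two such reply sets to share at least one process, which holds exactly when f < n/2. A small exhaustive check (illustrative only, not code from the thesis):

```python
from itertools import combinations

def waitable_sets_intersect(n, f):
    # A process can only wait for replies from n - f processes.
    # Check exhaustively that every pair of (n - f)-subsets of
    # {0, ..., n-1} has a non-empty intersection.
    procs = range(n)
    return all(set(a) & set(b)
               for a in combinations(procs, n - f)
               for b in combinations(procs, n - f))

# With a strict minority of crashes, any two reply sets intersect...
print(waitable_sets_intersect(5, 2))   # True
# ...but a majority of crashes allows two disjoint reply sets.
print(waitable_sets_intersect(4, 2))   # False
```

Two sets of size n - f always intersect when 2(n - f) > n, i.e. f < n/2, which is why results like atomic register emulation stop at the majority barrier.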
Segmentation spatiotemporelle et indexation vidéo dans le domaine des représentations hiérarchiques by Claire Morand
1 edition published in 2009 in French and held by 1 WorldCat member library worldwide
This thesis proposes a solution for scalable object-based indexing of HD video streams compressed with MJPEG2000. In this context, on the one hand, we work in the hierarchical transform domain of the 9/7 Daubechies wavelets and, on the other hand, the scalable representation implies searching for multiscale methods, from low to high resolution. The first part of this manuscript is dedicated to the definition of a method for the automatic extraction of objects having their own motion. It is based on a combination of a robust global motion estimation with a morphological color segmentation at low resolution. The obtained result is then refined following the data order of the scalable stream. The second part is the definition of an object descriptor based on multiscale histograms of the wavelet coefficients. Finally, the performance of the proposed method is evaluated in the context of scalable content-based queries
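The multiscale-histogram descriptor can be sketched as follows. A hand-rolled Haar decomposition stands in for the 9/7 Daubechies wavelets of the thesis, and the bin count, value range, and number of levels are arbitrary illustrative choices:

```python
import numpy as np

def haar_level(img):
    # One 2D Haar analysis step: approximation plus the three detail
    # subbands (horizontal, vertical, diagonal).
    a = (img[::2, ::2] + img[1::2, ::2] + img[::2, 1::2] + img[1::2, 1::2]) / 4
    h = (img[::2, ::2] + img[::2, 1::2] - img[1::2, ::2] - img[1::2, 1::2]) / 4
    v = (img[::2, ::2] - img[::2, 1::2] + img[1::2, ::2] - img[1::2, 1::2]) / 4
    d = (img[::2, ::2] - img[::2, 1::2] - img[1::2, ::2] + img[1::2, 1::2]) / 4
    return a, (h, v, d)

def multiscale_histograms(img, levels=3, bins=16):
    # Descriptor: one normalized histogram of detail coefficients per
    # scale, collected from coarse to fine as the decomposition deepens.
    descriptor = []
    a = img.astype(float)
    for _ in range(levels):
        a, details = haar_level(a)
        coeffs = np.concatenate([s.ravel() for s in details])
        hist, _ = np.histogram(coeffs, bins=bins, range=(-128.0, 128.0))
        descriptor.append(hist / hist.sum())
    return np.concatenate(descriptor)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))   # stand-in for one frame
desc = multiscale_histograms(img)
```

Descriptors of this form can be compared scale by scale, which matches the scalable low-to-high-resolution query setting the abstract describes.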
On the dynamics of some complex fluids by Francesco De Anna
1 edition published in 2016 in English and held by 1 WorldCat member library worldwide
The present thesis is devoted to the dynamics of specific complex fluids. On the one hand, we study the dynamics of so-called nematic liquid crystals, through the models proposed by Ericksen and Leslie, Beris and Edwards, and Qian and Sheng. On the other hand, we analyze the dynamics of a temperature-dependent complex fluid governed by the Boussinesq system. Nematic liquid crystals are materials exhibiting a state of matter between an ordinary fluid and a solid. In this thesis we study the Cauchy problem associated to each system modelling their hydrodynamics. We first establish some well-posedness results, such as existence and uniqueness of global-in-time weak or classical solutions. We also analyze some dynamical behaviours of these solutions, such as propagation of both higher and lower regularities. The general framework for the initial data is that of Besov spaces, which extend the more widely known classes of Sobolev and Hölder spaces. The Ericksen-Leslie system is studied in a simplified form proposed by F. Lin and C. Liu, which retains the main difficulties of the original one. We consider both a two-dimensional and a three-dimensional space domain. We assume the density to be non-constant, i.e. the inhomogeneous case; moreover, we allow it to present discontinuities along an interface, so that we can describe a mixture of liquid crystal materials with different densities. We prove the existence of global-in-time weak solutions under smallness conditions on the initial data in critical homogeneous Besov spaces. These solutions are invariant under the scaling behaviour of the system. We also show that uniqueness holds under a tiny extra regularity of the initial data. The Beris-Edwards system is analyzed in a two-dimensional space domain. We obtain existence and uniqueness of global-in-time weak solutions when the initial data belong to specific Sobolev spaces (without any smallness condition).
The regularity of these functional spaces is suitable for a weak solution to be well defined. We obtain the uniqueness result through a specific analysis, controlling the norm of the difference between two weak solutions and performing a delicate double-logarithmic estimate; uniqueness then follows from the Osgood lemma. We also obtain a result about regularity propagation. The Qian-Sheng model is analyzed in a space domain of dimension greater than or equal to two. In this case, we emphasize some important characteristics of the system, especially the presence of an inertial term, which generates significant difficulties. We prove the existence of a Lyapunov functional, and the existence and uniqueness of classical solutions under a smallness condition on the initial data. Finally, we deal with the well-posedness of the Boussinesq system. We prove the existence of global-in-time weak solutions when the space domain has dimension greater than or equal to two. We deal with the case of a viscosity dependent on the temperature. The initial temperature is only assumed to be bounded, while the initial velocity belongs to some critical Besov space. The initial data have a large vertical component, while the horizontal components fulfil a specific smallness condition: they are exponentially smaller than the vertical component
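The Osgood lemma invoked for uniqueness can be stated as follows (a standard formulation, recalled here for reference): if $\rho \ge 0$ is measurable, $\gamma \ge 0$ is locally integrable, $\mu$ is continuous and non-decreasing with $\mu > 0$ on $(0,\infty)$, and

```latex
\rho(t) \;\le\; \int_{t_0}^{t} \gamma(s)\,\mu\bigl(\rho(s)\bigr)\,\mathrm{d}s,
\qquad\text{with}\qquad \int_0^1 \frac{\mathrm{d}r}{\mu(r)} = \infty,
```

then $\rho \equiv 0$. Applied with $\rho$ the norm of the difference of two weak solutions and a modulus such as $\mu(r) = r\bigl(1 + \log(1/r)\bigr)$, for which the integral diverges, it yields uniqueness without requiring Lipschitz estimates.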
Études sur les équations de Ramanujan-Nagell et de Nagell-Ljunggren ou semblables by Benjamin Dupuy
1 edition published in 2009 in French and held by 1 WorldCat member library worldwide
In this thesis, we study two types of Diophantine equations. The first part of our study concerns the resolution of the Ramanujan-Nagell equations $Cx^2 + b^{2m}D = y^n$. The second part concerns the Nagell-Ljunggren equations $\frac{x^p + y^p}{x + y} = p^e z^q$, including the diagonal case $p = q$. Our new results are applied to Diophantine equations of the form $x^p + y^p = Bz^q$. The Fermat-Catalan equation (case $B = 1$) is the subject of a special study
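As a concrete instance of the first family, the classical Ramanujan-Nagell equation $x^2 + 7 = 2^n$ (the simplest member, with $C = 1$, constant term $7$ and $y = 2$) can be explored by brute force; Nagell proved that the five solutions found below are the only ones:

```python
from math import isqrt

# Brute-force search of x^2 + 7 = 2^n over a range of exponents.
# isqrt gives the exact integer square root, so the membership test
# x*x + 7 == 2^n is free of floating-point error.
solutions = [(x, n)
             for n in range(3, 60)          # 2^n >= 7 requires n >= 3
             for x in [isqrt(2 ** n - 7)]   # candidate root
             if x * x + 7 == 2 ** n]
print(solutions)  # [(1, 3), (3, 4), (5, 5), (11, 7), (181, 15)]
```

The search range is illustrative; proving that no further solutions exist beyond any finite range is exactly the kind of result the algebraic methods of the thesis address.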
Opérateurs de Toeplitz sur l'espace de Bergman harmonique et opérateurs de Toeplitz tronqués de rang fini by Fanilo Rajaofetra Randriamahaleo
1 edition published in 2015 in French and held by 1 WorldCat member library worldwide
In the first part of the thesis, we give some classical results concerning the Hardy space, model spaces, and analytic and harmonic Bergman spaces. The basic concepts, such as projections and reproducing kernels, are introduced. We then describe our results on the stability of the product and the commutativity of two quasi-homogeneous Toeplitz operators on the harmonic Bergman space. Finally, we give the matrix description of truncated Toeplitz operators of type "a" in the finite-dimensional case
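As a minimal finite-dimensional illustration of the matrix descriptions mentioned above: on the model space $K_{z^n}$ (polynomials of degree $< n$), the compression of multiplication by a trigonometric symbol is the familiar $n \times n$ Toeplitz matrix built from the symbol's Fourier coefficients. A sketch (the dict-based `fourier_coeffs` helper is an assumption made for illustration, not notation from the thesis):

```python
import numpy as np

def toeplitz_matrix(fourier_coeffs, n):
    # Matrix of the compression of multiplication by the symbol to the
    # n-dimensional space of polynomials of degree < n: entry (j, k) is
    # the symbol's Fourier coefficient a_{j-k}, constant along diagonals.
    return np.array([[fourier_coeffs.get(j - k, 0.0) for k in range(n)]
                     for j in range(n)])

# symbol a(z) = z^{-1} + 2 + z gives a tridiagonal Toeplitz matrix
T = toeplitz_matrix({-1: 1.0, 0: 2.0, 1: 1.0}, 4)
```

The constant-diagonal structure visible here is what the type-"a" matrix description generalizes to other finite-dimensional model spaces.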
Développement de compteurs à scintillation hautes performances et de très basse radioactivité pour le calorimètre du projet SuperNEMO by Emmanuel Chauveau
1 edition published in 2010 in French and held by 1 WorldCat member library worldwide
SuperNEMO is a next-generation detector project for the search for neutrinoless double beta decay. The experimental technique follows that of its predecessor NEMO3, combining a tracker and a calorimeter, in order not only to identify the electrons from double beta decays but also to measure all of the detector's background components. The project thus aims at a sensitivity of 10^26 years on the half-life of 82Se, which would probe an effective neutrino mass of 50 meV. To reach this sensitivity, the project plans in particular to build a calorimeter composed of about a thousand low-radioactivity scintillation counters, with an energy resolution better than 8% FWHM for 1 MeV electrons. This thesis makes an important contribution to the research and development work aimed at improving the performance of the scintillators and photomultipliers and at reducing their radioactivity, notably with the design of a new photomultiplier in collaboration with Photonis
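For orientation, the quoted 8% FWHM at 1 MeV can be converted to a Gaussian sigma and extrapolated to other energies. The $1/\sqrt{E}$ photostatistics scaling used below is a common illustrative assumption for scintillator counters, not a figure taken from the thesis:

```python
import math

# Gaussian response: FWHM = 2*sqrt(2 ln 2) * sigma (about 2.355).
FWHM_PER_SIGMA = 2.0 * math.sqrt(2.0 * math.log(2.0))

def fwhm_resolution(energy_mev, r_at_1mev=0.08):
    # Relative FWHM resolution at a given energy, assuming the usual
    # photostatistics scaling R(E) = R(1 MeV) / sqrt(E [MeV]).
    return r_at_1mev / math.sqrt(energy_mev)

def sigma_kev(energy_mev, r_at_1mev=0.08):
    # Absolute Gaussian sigma in keV at that energy.
    fwhm_mev = fwhm_resolution(energy_mev, r_at_1mev) * energy_mev
    return 1000.0 * fwhm_mev / FWHM_PER_SIGMA

print(fwhm_resolution(1.0))  # 0.08  (8 % FWHM at 1 MeV)
print(sigma_kev(3.0))        # sigma in keV near the 0vbb energy region
```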
Plongement de surfaces continues dans des surfaces discrètes épaisses by Bruno Dutailly
1 edition published in 2016 in French and held by 1 WorldCat member library worldwide
In the context of archaeological sciences, 3D images produced by computed tomography scanners are segmented into regions of interest corresponding to virtual objects, on which scientific analyses are performed. These virtual objects are often used for the purpose of performing accurate measurements. Some of these analyses require extracting the surface of the regions of interest. This PhD falls within this framework and aims to improve the accuracy of surface extraction. We present our contributions in this document: first of all, the weighted HMH algorithm, whose objective is to position a point precisely at the interface between two materials. However, applied to surface extraction, this method often leads to topology problems on the resulting surface. We therefore proposed two other methods: the discrete HMH method, which refines the 3D object segmentation, and the surface HMH method, which performs a constrained surface extraction ensuring a topologically correct surface. It is possible to chain these two methods on a pre-segmented 3D image in order to obtain a precise surface extraction of the objects of interest. These methods were evaluated on simulated CT-scan acquisitions of synthetic objects and on real acquisitions of archaeological artefacts
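The half-maximum-height (HMH) principle underlying the first contribution can be sketched in 1D: the interface is placed, by linear interpolation, where an intensity profile crosses the half level between the two material means. This is a plain sketch of the principle only; the thesis's weighted variant is more elaborate:

```python
import numpy as np

def hmh_position(profile, m1, m2):
    # Sub-voxel abscissa where the profile crosses the half level
    # between the two material mean intensities m1 and m2.
    half = 0.5 * (m1 + m2)
    s = np.asarray(profile, dtype=float)
    for i in range(len(s) - 1):
        lo, hi = sorted((s[i], s[i + 1]))
        if lo <= half <= hi and s[i] != s[i + 1]:
            # linear interpolation between samples i and i+1
            return i + (half - s[i]) / (s[i + 1] - s[i])
    return None  # no crossing found

# smooth step from material 1 (mean 10) to material 2 (mean 90):
profile = [10, 10, 12, 30, 70, 88, 90, 90]
pos = hmh_position(profile, 10, 90)   # crossing between samples 3 and 4
```

Positioning the interface at this sub-voxel crossing, rather than at a voxel boundary, is what improves measurement accuracy on the extracted surface.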
Contributions à l'usage des détecteurs de clones pour des tâches de maintenance logicielle by
Alan Charpentier(
)
1 edition published in 2016 in French and held by 1 WorldCat member library worldwide
The existence of several copies of the same code fragment (called code clones in the literature) in a software system can complicate its maintenance and evolution. Code duplication can lead to consistency problems, especially during bug-fix propagation. Code clone detection is therefore a major concern for maintaining and improving software quality, an essential property for a software's success. The general objective of this thesis is to contribute to the use of code clone detection in software maintenance tasks. We chose to focus our contributions on two research topics. First, the methodology to compare and assess code clone detectors, i.e. clone benchmarks: we performed an empirical assessment of a clone benchmark and found that results derived from it are not reliable. We also identified recommendations to construct more reliable clone benchmarks. Second, the adaptation of code clone detectors to software maintenance tasks: we developed an approach specialized to one language and one task (refactoring) that allows developers to identify and remove code duplication in their software. We conducted case studies with domain experts to evaluate our approach
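The abstract does not describe a particular detector, but the core idea of token-based clone detection, one common family of detectors, can be sketched as follows: normalize each fragment (drop comments, rename identifiers) and group fragments whose normalized form is identical. The normalization rules, keyword list and fragment names below are invented for illustration.

```python
import hashlib
import re

KEYWORDS = {"def", "return", "for", "in", "if", "else"}

def normalize(fragment):
    """Crude normalization: strip comments and whitespace, and rename
    identifiers to a placeholder, so rename-only copies hash alike."""
    code = re.sub(r"#.*", "", fragment)  # drop line comments
    code = re.sub(
        r"\b[A-Za-z_]\w*\b",
        lambda m: m.group(0) if m.group(0) in KEYWORDS else "ID",
        code,
    )
    return " ".join(code.split())

def find_clones(fragments):
    """Group fragments whose normalized form is identical
    (so-called type-2 clones: copies up to identifier renaming)."""
    buckets = {}
    for name, frag in fragments.items():
        key = hashlib.sha1(normalize(frag).encode()).hexdigest()
        buckets.setdefault(key, []).append(name)
    return [group for group in buckets.values() if len(group) > 1]

clones = find_clones({
    "f1": "def add(a, b):\n    return a + b",
    "f2": "def plus(x, y):\n    return x + y",   # rename-only copy of f1
    "f3": "def mul(a, b):\n    return a * b",    # different operator: not a clone
})
```

Real detectors refine this with token streams, suffix structures or ASTs, but the grouping-by-normalized-form principle is the same.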
Etude et optimisation des transferts de chaleur en injection moulage : analyse de leur influence sur les propriétés finales by
Hamdy Abo Ali Hassan(
)
1 edition published in 2009 in English and held by 1 WorldCat member library worldwide
Plastics are typically polymers of high molecular weight, and may contain other substances to improve performance and/or reduce costs. The plastics industry is one of the world's fastest growing industries; almost every product used in daily life involves plastic. There are different methods for polymer processing (thermoforming, blow molding, compression molding, transfer molding, extrusion, injection molding, etc.), which differ in the fabrication method, the materials used, the quality of the product and the form of the final product. Demand for injection molded parts continues to increase every year, because plastic injection molding is well known as one of the most efficient manufacturing techniques for economically producing precise plastic parts with complex geometry, at low cost and in large quantities. The plastic injection molding process is a cyclic process in which polymer is injected into a mold cavity and solidifies to form a plastic part. There are three significant stages in each cycle. The first stage is filling the cavity with hot polymer melt at high injection pressure and temperature (filling and post-filling stage). It is followed by cooling the injected polymer material until it is completely solidified (cooling stage); finally, the solidified part is ejected (ejection stage)
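As a rough illustration of why the cooling stage dominates the cycle, a classical textbook estimate (not taken from the thesis) computes the time for the centreline of a plate-like part to cool from melt temperature to ejection temperature by one-dimensional conduction. The material values in the example are typical assumed figures, not measured data.

```python
import math

def cooling_time(thickness, alpha, t_melt, t_mold, t_eject):
    """Classical 1D estimate of the cooling stage for a plate-like part:
    time for the centreline temperature to drop from the melt
    temperature to the ejection temperature, for mold wall temperature
    t_mold.  thickness in m, alpha (thermal diffusivity) in m^2/s,
    temperatures in degrees C."""
    return (thickness ** 2 / (math.pi ** 2 * alpha)) * math.log(
        (4.0 / math.pi) * (t_melt - t_mold) / (t_eject - t_mold)
    )

# Assumed example: 2 mm wall, alpha ~ 1e-7 m^2/s (typical thermoplastic),
# melt 230 C, mold 50 C, ejection 90 C
t_cool = cooling_time(0.002, 1e-7, 230.0, 50.0, 90.0)
```

Even for a thin 2 mm wall this gives a cooling time of several seconds, which is why cooling-channel design and heat-transfer optimization, the subject of this thesis, matter so much for cycle time.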
Scheduling and memory optimizations for sparse direct solver on multicore/multi-GPU cluster systems by
Xavier Lacoste(
)
1 edition published in 2015 in English and held by 1 WorldCat member library worldwide
The ongoing hardware evolution exhibits an escalation in the number, as well as in the heterogeneity, of computing resources. The pressure to maintain reasonable levels of performance and portability forces application developers to leave traditional programming paradigms and explore alternative solutions. PaStiX is a parallel sparse direct solver based on a dynamic scheduler for modern hierarchical manycore architectures. In this thesis, we study the benefits and the limits of replacing the highly specialized internal scheduler of the PaStiX solver with two generic runtime systems: PaRSEC and StarPU. To do so, we describe the factorization algorithm as a task graph that we provide to the runtime system, which can then decide how to process and optimize the graph traversal in order to maximize the algorithm's efficiency for the targeted hardware platform. A comparative study of the performance of the PaStiX solver on top of its original internal scheduler, PaRSEC, and StarPU is performed. The analysis highlights that these generic task-based runtimes achieve results comparable to the application-optimized embedded scheduler on homogeneous platforms. Furthermore, they are able to significantly speed up the solver in heterogeneous environments by taking advantage of the accelerators while hiding the complexity of their efficient manipulation from the programmer. In this thesis, we also study the possibility of building a distributed sparse linear solver on top of task-based runtime systems to target heterogeneous clusters. To permit an efficient and easy use of these developments in parallel simulations, we also present an optimized distributed interface aiming at hiding the complexity of the construction of a distributed matrix from the user
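The key idea of delegating scheduling to a runtime, expressing the factorization as a graph of tasks with dependencies and letting the runtime choose the execution order, can be sketched in miniature. This is not the PaRSEC or StarPU API; the toy "runtime" below simply executes a DAG in one valid topological order, whereas real runtimes schedule ready tasks across CPUs and GPUs.

```python
from collections import defaultdict, deque

def run_task_graph(tasks, deps):
    """Minimal illustration of the task-graph idea: each task runs only
    once all its dependencies have completed, the execution order being
    chosen by the 'runtime' (here, a plain topological traversal).
    tasks: {name: callable}; deps: {name: [dependency names]}."""
    indegree = {t: len(deps.get(t, [])) for t in tasks}
    dependents = defaultdict(list)
    for t, ds in deps.items():
        for d in ds:
            dependents[d].append(t)
    ready = deque(t for t, n in indegree.items() if n == 0)
    order = []
    while ready:
        t = ready.popleft()
        tasks[t]()                 # a real runtime would dispatch to a worker
        order.append(t)
        for succ in dependents[t]:
            indegree[succ] -= 1
            if indegree[succ] == 0:
                ready.append(succ)
    return order

log = []
order = run_task_graph(
    {name: (lambda n=name: log.append(n)) for name in "ABCD"},
    {"B": ["A"], "C": ["A"], "D": ["B", "C"]},  # diamond-shaped DAG
)
```

The application only declares what depends on what; where and when each task runs is entirely the runtime's decision, which is precisely what makes the approach portable across heterogeneous platforms.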
Contribution à la modélisation et à la simulation numérique multi-échelle du transport cinétique électronique dans un plasma chaud by
Jessy Mallet(
)
1 edition published in 2012 in English and held by 1 WorldCat member library worldwide
In plasma physics, the transport of electrons can be described from a kinetic point of view or from a hydrodynamical point of view. Classically in kinetic theory, a Fokker-Planck equation coupled with the Maxwell equations is used to describe the evolution of electrons in a collisional plasma. More precisely, the solution of the kinetic equations is a non-negative distribution function f specifying the density of particles as a function of particle velocity, time and position in space. In order to approximate the solution of such problems, many computational methods have been developed. Here, a deterministic method is proposed in a planar geometry. This method is based on different high-order numerical schemes. Each deterministic scheme used presents fundamental properties such as conservation of particle flux, preservation of the positivity of the distribution function and conservation of energy. However, the kinetic computation of this accurate method is too expensive to be used in practical computations, especially in multidimensional space. To reduce the computational time, the plasma can be described by a hydrodynamic model. However, for the new high-energy target drivers, the kinetic effects are too important to be neglected, so kinetic calculations cannot simply be replaced by the usual macroscopic Euler models. An alternative approach is therefore proposed, considering an intermediate description between the fluid and the kinetic levels. To describe the transport of electrons, the new reduced kinetic model M1 proposed here is based on a moment approach for the Maxwell-Fokker-Planck equations. This moment model integrates the electron distribution function over the propagation direction and retains only the energy of particles as a kinetic variable. The velocity variable is written in spherical coordinates, and the model is obtained by considering the system of moments with respect to the angular variable.
The closure of the moment system is obtained under the assumption that the distribution function is a minimum-entropy function. This model is proved to satisfy fundamental properties such as the non-negativity of the distribution function, conservation laws for collision operators and entropy dissipation. An entropic discretization in the velocity variable is also proposed on the semi-discrete model. Moreover, the M1 model can be generalized to the MN model by considering N given moments. The N-moment model obtained also preserves fundamental properties such as conservation laws and entropy dissipation. The associated semi-discrete scheme is shown to preserve the conservation properties and entropy decay
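For readers unfamiliar with moment models, an M1-type construction over the angular variable can be written schematically as follows; the notation is a standard one assumed here for illustration, not taken verbatim from the thesis.

```latex
% Angular moments of the electron distribution f(\mu) over \mu \in [-1,1]
f_0 = \int_{-1}^{1} f(\mu)\,\mathrm{d}\mu, \qquad
f_1 = \int_{-1}^{1} \mu\, f(\mu)\,\mathrm{d}\mu, \qquad
f_2 = \int_{-1}^{1} \mu^2 f(\mu)\,\mathrm{d}\mu .

% The system on (f_0, f_1) is closed by choosing f as the minimizer of
% an angular entropy subject to its first two moments being (f_0, f_1);
% the closure is then expressed through the normalized flux
% \alpha = f_1 / f_0:
f_2 = \chi(\alpha)\, f_0, \qquad |\alpha| \le 1 ,
```

where the factor χ interpolates between the isotropic regime (small |α|) and the free-streaming regime (|α| → 1). The minimum-entropy choice is what guarantees the non-negativity and entropy-dissipation properties stated in the abstract.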
Modes de représentation pour l'éclairage en synthèse d'images by
Romain Pacanowski(
)
1 edition published in 2009 in French and held by 1 WorldCat member library worldwide
In image synthesis, the main computation involved in generating an image is characterized by an equation named the rendering equation [Kajiya 1986]. This equation represents the law of energy conservation. It stipulates that the light emanating from the scene objects is the sum of the emitted energy and the reflected energy. Moreover, the reflected energy at a surface point is defined as the convolution of the incoming lighting with a reflectance function. The reflectance function models the object's material and represents, in the rendering equation, a directional and energetic filter that describes the surface's behavior with regard to reflection. In this thesis, we introduce new representations for the reflectance function and the incoming lighting. In the first part of this thesis, we propose two new models for the reflectance function. The first model is targeted at artists, to help them create and edit highlights. Our main idea is to let the user paint and sketch highlight characteristics (shape, color, gradient and texture) in a plane parametrized by the incident lighting direction. The second model is designed to represent isotropic material data efficiently. To achieve this result, we introduce a new representation of the reflectance function that uses rational polynomials. Their coefficients are computed using a fitting process that guarantees an optimal solution with regard to convergence. In the second part of this thesis, we introduce a new volumetric structure for indirect illumination that is directionally represented with irradiance vectors. We show that our representation is compact and robust to geometric variations, that it can be used as a caching system for interactive and offline rendering, and that it can also be transmitted with streaming techniques. Finally, we introduce two modifications of the incoming lighting to improve the shape depiction of a surface.
The first modification consists in warping the incoming light directions whereas the second one consists in scaling the intensity of each light source
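To give a flavor of rational fitting (the thesis's fitting scheme with guaranteed convergence is not reproduced here), the sketch below fits a degree-(1,1) rational function by the common linearization trick and plain normal equations. The data, model degree and solver are all invented for illustration.

```python
def fit_rational(xs, ys):
    """Fit y ~ (a + b*x) / (1 + c*x) by linearizing the residual:
    a + b*x - c*x*y = y, then solving the 3x3 normal equations.
    A sketch of the idea of rational fitting only; it does not carry
    the convergence guarantees of the thesis's fitting process."""
    rows = [[1.0, x, -x * y] for x, y in zip(xs, ys)]
    # Normal equations: (A^T A) p = A^T y
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    # Gaussian elimination with partial pivoting on the augmented matrix
    m = [ata[i] + [aty[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    p = [0.0] * 3
    for i in reversed(range(3)):
        p[i] = (m[i][3] - sum(m[i][j] * p[j] for j in range(i + 1, 3))) / m[i][i]
    return p  # (a, b, c)

# Data generated from y = (1 + 2x) / (1 + 0.5x): the fit should recover it
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [(1 + 2 * x) / (1 + 0.5 * x) for x in xs]
a, b, c = fit_rational(xs, ys)
```

The appeal of rational forms for reflectance data is that a handful of coefficients can reproduce the sharp peaks of measured materials far better than a polynomial of comparable size.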
Contrôle de la dynamique de la leucémie myéloïde chronique par Imatinib by
Chahrazed Benosman(
)
1 edition published in 2010 in French and held by 1 WorldCat member library worldwide
Modelling hematopoiesis is a central theme of our research. Hematopoietic stem cells (HSC) are undifferentiated cells, located in bone marrow, with unique abilities of self-renewal and differentiation (production of white cells, red blood cells and platelets). The process of hematopoiesis often exhibits abnormalities causing hematological diseases. To model Chronic Myeloid Leukemia (CML), a frequent hematological disease, we represent the hematopoiesis of normal and leukemic cells by means of ordinary differential equations (ODE). The homeostasis of normal and leukemic cells is supposed to be different and to depend on some lines of normal and leukemic HSC. We analyze the global dynamics of the model to obtain conditions for the regeneration of hematopoiesis and the persistence of CML. We prove as well that normal and leukemic cells cannot coexist for a long time. Imatinib is the main treatment of CML, with dosage varying from 400 to 1000 mg per day. Affected individuals respond to therapy at various levels: hematologic, cytogenetic and molecular. Therapy fails in two cases: the patient takes a long time to react, so a suboptimal response occurs; or the patient develops resistance after an initial response. Determining the optimal dosage required to reduce leukemic cells is another challenge. We approach therapy effects as an optimal control problem to minimize the cost of treatment and the level of leukemic cells. Suboptimal response, resistance and recovery forms are obtained through the influence of imatinib on the division and mortality rates of leukemic cells. Hematopoiesis can also be investigated according to the age of cells. An age-structured system describing the evolution of normal and leukemic HSC shows that the division rate of leukemic HSC plays a crucial role in determining the optimal control. When controlling the growth of cells under interspecific competition between normal and leukemic HSC, we prove that the optimal dosage is related to the homeostasis of leukemic HSC
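A drastically simplified version of such ODE dynamics can be sketched with two competing populations, where imatinib raises the leukemic death rate. All rates, the competition term and the initial conditions below are invented for illustration and are not the thesis's model; they merely reproduce the qualitative behaviors discussed (competitive exclusion, and treatment tipping the balance).

```python
def simulate_cml(days, dose_effect, dt=0.1):
    """Toy two-population ODE: normal cells n and leukemic cells l
    compete through a shared logistic crowding term; imatinib is
    modelled only as an increase of the leukemic death rate.
    Integrated with the explicit Euler scheme."""
    n, l = 1.0, 0.1               # initial populations (arbitrary units)
    r_n, r_l = 0.10, 0.15         # division rates (per day)
    d_n = 0.05                    # normal death rate
    d_l = 0.05 + dose_effect      # leukemic death rate under treatment
    k = 2.0                       # shared carrying capacity
    for _ in range(int(days / dt)):
        crowd = (n + l) / k
        dn = (r_n * n * (1 - crowd) - d_n * n) * dt
        dl = (r_l * l * (1 - crowd) - d_l * l) * dt
        n, l = n + dn, l + dl
    return n, l

n_off, l_off = simulate_cml(365, dose_effect=0.0)   # untreated year
n_on, l_on = simulate_cml(365, dose_effect=0.2)     # treated year
```

Untreated, the leukemic population outcompetes the normal one (its net growth tolerates more crowding); with the dose effect applied, its death rate exceeds its division rate and it collapses, a toy analogue of the exclusion and treatment results stated in the abstract.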
Segmentation multi-agents en imagerie biologique et médicale : application aux IRM 3D by
Richard Moussa(
)
1 edition published in 2011 in French and held by 1 WorldCat member library worldwide
Image segmentation is a crucial operation in image processing. It is always the starting point for shape analysis, motion detection, visualization, and quantitative estimation of linear distances, surfaces and volumes. To these ends, segmentation consists in categorizing voxels into classes based on their local intensities, their spatial location and their shape or neighborhood characteristics. The difficulty in obtaining stable results from segmentation methods for medical images comes from the different types of noise present. In these images, noise takes two forms: physical noise due to the acquisition system, in our case MRI (Magnetic Resonance Imaging), and physiological noise due to the patient. Both kinds of noise must be taken into account by any image segmentation method. During this thesis, we focused on multi-agent models based on the biological behaviors of spiders and ants to perform the segmentation task. For the spiders, we proposed a semi-automatic approach using the image histogram to determine the number of objects to detect. For the ants, we proposed two approaches: the first, called classical, uses the image gradient, while the second, more original one uses an inter-voxel partition of the image. We also proposed a way to accelerate the segmentation process through the use of GPUs (Graphics Processing Units). Finally, these two methods were evaluated on brain MRI images and compared to classical segmentation methods: region growing and Otsu thresholding for the spider model, and the Sobel gradient for the ants
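One of the classical baselines cited for comparison, Otsu thresholding, is simple enough to sketch: it selects the gray-level threshold that maximizes the between-class variance of the histogram, which is why it suits the bimodal intensity distributions typical of object/background images. The toy "image" below is invented.

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: choose the threshold t maximizing the
    between-class variance w0*w1*(m0 - m1)^2 of the gray-level
    histogram, where w and m are class weights and means."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0          # background pixel count so far
    sum0 = 0.0      # background intensity sum so far
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (total_sum - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy image: dark background around 30, bright object around 200
img = [30] * 50 + [35] * 40 + [200] * 30 + [205] * 20
t = otsu_threshold(img)
```

Here the method settles on the top of the dark mode, cleanly separating the two intensity clusters; the multi-agent approaches of the thesis aim to outperform exactly this kind of global, histogram-only baseline.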
Related Identities
 Université de Bordeaux I (1970-2013) Degree grantor
 Laboratoire bordelais de recherche en informatique Degree grantor
 Charpentier, Éric (1963-...) Editor
 Université de Bordeaux (2014-...) Degree grantor
 Institut de Mathématiques de Bordeaux Degree grantor
 Nikolski, Nikolaï Kapitonovitch (1940-...) Opponent, Thesis advisor, Editor
 Bayart, Frédéric (1975-...) Thesis advisor
 Institut national de recherche en informatique et en automatique (France), Centre de recherche Bordeaux - Sud-Ouest
 Iollo, Angelo (1966-...) Opponent, Thesis advisor
 Namyst, Raymond (1969-...) Opponent, Thesis advisor
Alternative Names
École doctorale 39
École doctorale de mathématiques de Bordeaux
École doctorale de mathématiques et informatique (Bordeaux)
École doctorale de mathématiques et informatique de Bordeaux
École doctorale Mathématiques et informatique (Bordeaux)
École doctorale Mathématiques et informatique (Talence, Gironde)
ED 039
ED 39
ED039
ED39
EDMIB
Mathématiques et Informatique (Bordeaux)
Mathématiques et Informatique (Talence, Gironde)
Université Bordeaux I. UFR de Mathématiques et Informatique
Université de Bordeaux I, École doctorale de mathématiques et informatique
Languages