WorldCat Identities

École doctorale de Mathématiques et Informatique (Bordeaux)

Overview
Works: 366 works in 367 publications in 2 languages and 521 library holdings
Roles: 996, Editor, Degree grantor
Classifications: QA7, 510
Most widely held works by École doctorale de Mathématiques et Informatique (Bordeaux)
Leçons de mathématiques d'aujourd'hui by B Perthame( Book )

1 edition published in 2007 in French and held by 78 WorldCat member libraries worldwide

Leçons de mathématiques d'aujourd'hui( Book )

1 edition published in 2010 in French and held by 52 WorldCat member libraries worldwide

Leçons de mathématiques d'aujourd'hui( Book )

1 edition published in 2012 in French and held by 25 WorldCat member libraries worldwide

Ubiquité de la formule de Riemann-Hurwitz by Alexis Michel( Book )

1 edition published in 1992 in French and held by 2 WorldCat member libraries worldwide

An algebraic method is given for studying Riemann-Hurwitz-type translation formulas for arithmetic invariants. The Kuz'min-Kida formula for the lambda invariant, the Deuring-Shafarevich formula for the Hasse-Witt invariant and Wingberg's formula for the Selmer group fall within this framework. We unify their proofs and obtain results in other situations. This approach is extended to a non-Galois case
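For background, the classical Riemann-Hurwitz formula that these arithmetic translation formulas echo can be stated as follows (a standard statement added only for orientation, not taken from the thesis):

```latex
% Classical Riemann-Hurwitz formula for a finite separable degree-n morphism
% f : Y -> X of smooth projective curves, with tame ramification:
2g_Y - 2 \;=\; n\,(2g_X - 2) \;+\; \sum_{P \in Y} (e_P - 1)
% g_X, g_Y are the genera and e_P is the ramification index of f at P.
```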
Conception et analyse de schémas d'ordre très élevé distribuant le résidu : application à la mécanique des fluides by Adam Larat( )

1 edition published in 2009 in French and held by 2 WorldCat member libraries worldwide

Numerical simulations are nowadays a major tool in aerodynamic design in the aeronautic, automotive and naval industries, among others. One of the main challenges to push further the limits of the simulation codes is to increase their accuracy within a fixed set of resources (computational power and/or time). Two possible approaches to deal with this issue are either to construct discretizations yielding, on a given mesh, very high order accurate solutions, or to construct compact, massively parallelizable schemes to minimize the computational time by means of a high performance parallel implementation. In this thesis, we try to combine both approaches by investigating the construction and implementation of very high order Residual Distribution Schemes (RDS) with the most compact possible stencil. The manuscript starts with a review of the mathematical theory of hyperbolic Conservation Laws (CLs). The aim of this initial part is to highlight the properties of the analytical solutions we are trying to approximate, in order to be able to link these properties with the ones of the sought discrete solutions. Next, we describe the three main steps toward the construction of a very high order RDS: the definition of higher order polynomial representations of the solution over polygons and polyhedra; the design of low order compact conservative RD schemes consistent with a given (high degree) polynomial representation, among which particular emphasis is put on the simplest, given by a generalization of the Lax-Friedrichs (\LxF) scheme; and the design of a positivity preserving nonlinear transformation, mapping first-order linear schemes onto nonlinear very high order schemes. In the manuscript, we show formally that the schemes obtained following this procedure are consistent with the initial CL, that they are stable in the $L^{\infty}$ norm, and that they have the proper truncation error. Even though all the theoretical developments are carried out for scalar CLs, remarks on the extension to systems are given whenever possible. Unfortunately, when employing the first order \LxF scheme as a basis for the construction of the nonlinear discretization, the final nonlinear algebraic equation is not well-posed in general. In particular, for smoothly varying solutions one observes the appearance of high frequency spurious modes. In order to kill these modes, a streamline dissipation term is added to the scheme. The analytical implications of this modification, as well as its practical computation, are thoroughly studied. Lastly, we focus on a correct discretization of the boundary conditions for the very high order RDS proposed. The theory is then extensively verified on a variety of scalar two dimensional test cases. Both triangular and hybrid triangular-quadrilateral meshes are used to show the generality of the approach. The results obtained in these tests confirm all the theoretical expectations in terms of accuracy and stability and underline some advantages of the hybrid grids. Next, we consider solutions of the two dimensional Euler equations of gas dynamics. The results obtained are quite satisfactory and yet, we are not able to obtain the desired convergence rates on problems involving solid wall boundaries. Further investigation of this problem is under way. We then discuss the parallel implementation of the schemes, and analyze and illustrate the performance of this implementation on large three dimensional problems. Due to the preliminary character and the complexity of these three dimensional problems, a rather qualitative discussion is made for these test cases: the overall behavior seems to be the correct one, but more work is necessary to assess the properties of the schemes in three dimensions
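As background for the low-order building block mentioned above, the following is a minimal one-dimensional Lax-Friedrichs update for a scalar conservation law u_t + f(u)_x = 0. It is only an illustrative sketch of the kind of first-order monotone scheme that serves as a starting point for a nonlinear mapping to high order; it is not the residual distribution scheme of the manuscript, and the grid, time step and test problem are assumptions.

```python
import numpy as np

def lax_friedrichs_step(u, flux, dx, dt):
    """One explicit Lax-Friedrichs step for u_t + f(u)_x = 0 with periodic BCs.

    Illustrative sketch only: a first-order, monotone scheme of the type used
    as a low-order basis before any nonlinear high-order correction.
    """
    up = np.roll(u, -1)   # u_{i+1}
    um = np.roll(u, 1)    # u_{i-1}
    return 0.5 * (up + um) - 0.5 * (dt / dx) * (flux(up) - flux(um))

# Toy usage: Burgers' equation f(u) = u^2 / 2 on [0, 1) with a smooth initial datum.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.sin(2.0 * np.pi * x)
dx = x[1] - x[0]
dt = 0.4 * dx  # CFL-limited time step for |u| <= 1
for _ in range(100):
    u = lax_friedrichs_step(u, lambda v: 0.5 * v ** 2, dx, dt)
```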
Simulation numérique d'un écoulement diphasique multicomposant en milieu poreux by Nathalie Laquerie( Book )

2 editions published in 1997 in French and held by 2 WorldCat member libraries worldwide

The aim of this thesis is to simulate the flushing, by water injection, of an aquifer polluted by a hydrocarbon. The emphasis is on the modelling, in which the exchanges between the phases are taken into account explicitly, without assuming local equilibrium. This model led us to study a convection-diffusion-reaction system in which the characteristic times governing the various mechanisms involved are very different. We first carried out the one-dimensional study, in order to identify the main difficulties: nonlinearity, degeneracy, and the hyperbolic character of the equations. To treat these three phenomena as accurately as possible, a scheme based on the fractional-step method is set up. To solve the convection part, we used the Roe and Harten schemes, then a scheme based on the MUSCL method introduced by Van Leer. The diffusion part is treated semi-implicitly with a centered finite-difference scheme. The reaction part reduces to the study of a system of ordinary differential equations, which is treated semi-implicitly. In these last two parts, the nonlinearities are solved using Newton's method. The proposed scheme is then extended to two dimensions. Some numerical tests are presented to demonstrate the feasibility of the method
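A minimal sketch of the fractional-step (operator-splitting) idea described above, advancing convection, diffusion and reaction in successive sub-steps. The upwind and explicit choices and the model operators below are placeholders kept deliberately simple (the thesis uses Roe/Harten/MUSCL convection and semi-implicit treatments with Newton iterations), so this illustrates the splitting alone, not the two-phase flow model.

```python
import numpy as np

def fractional_step(u, dx, dt, velocity, diffusivity, reaction):
    """One fractional-step update for u_t + a u_x = d u_xx + r(u), periodic BCs.

    Sketch only: first-order upwind convection, explicit centered diffusion,
    and a pointwise explicit reaction sub-step, applied one after another.
    """
    u = u - velocity * (dt / dx) * (u - np.roll(u, 1))                               # convection
    u = u + diffusivity * (dt / dx ** 2) * (np.roll(u, -1) - 2 * u + np.roll(u, 1))  # diffusion
    u = u + dt * reaction(u)                                                         # reaction
    return u

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)
for _ in range(200):
    u = fractional_step(u, x[1] - x[0], 1e-4, velocity=1.0,
                        diffusivity=0.01, reaction=lambda v: -5.0 * v)
```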
Contributions à l'usage des détecteurs de clones pour des tâches de maintenance logicielle by Alan Charpentier( )

1 edition published in 2016 in French and held by 1 WorldCat member library worldwide

The existence of several copies of the same code fragment (called clones in the literature) in a piece of software can complicate its maintenance and evolution. Code duplication can cause consistency problems, in particular when propagating bug fixes. Clone detection is therefore an important issue for preserving and improving software quality, a property essential to a software product's success. The general objective of this thesis is to contribute to the use of clone detectors in software maintenance tasks. We focused our contributions on two research directions. First, the methodology for comparing and evaluating clone detectors, i.e. clone benchmarks. We empirically evaluated a clone benchmark and showed that the results derived from it were not reliable. We also identified recommendations for making the construction of clone benchmarks more dependable. Second, the specialization of clone detectors for software maintenance tasks. We developed an approach, specialized for one language and one task (reengineering), that allows developers to identify and remove code duplication from their software. We conducted case studies with domain experts to evaluate our approach
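To make the notion of a clone detector concrete, here is a toy detector that reports groups of identical, whitespace-normalized line windows. It is a naive illustration of what such tools look for, unrelated to the benchmarks and the reengineering approach evaluated in the thesis; the window size is an arbitrary assumption.

```python
from collections import defaultdict

def find_exact_clones(source_lines, window=5):
    """Toy clone detector: group identical whitespace-normalized windows of
    `window` consecutive lines and report every group occurring more than once.
    Returns lists of 1-based (start, end) line ranges. Purely illustrative."""
    normalized = [" ".join(line.split()) for line in source_lines]
    index = defaultdict(list)
    for i in range(len(normalized) - window + 1):
        key = "\n".join(normalized[i:i + window])
        if key.strip():
            index[key].append((i + 1, i + window))
    return [ranges for ranges in index.values() if len(ranges) > 1]

# Usage: find_exact_clones(open("some_file.py").read().splitlines())
```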
On the dynamics of some complex fluids by Francesco De Anna( )

1 edition published in 2016 in English and held by 1 WorldCat member library worldwide

In this thesis we are interested in the dynamics of some complex fluids. On the one hand, we study the dynamics of nematic liquid crystals, using the models proposed by Ericksen and Leslie, Beris and Edwards, and Qian and Sheng. On the other hand, we analyse a complex fluid whose dynamics depends on the temperature and is modelled by the Boussinesq system. Liquid crystals are materials whose phase of matter is intermediate between the better-known liquid and solid phases. In this thesis we study the Cauchy problem associated with each system modelling their hydrodynamics. We first obtain existence and uniqueness results for weak or classical solutions, solutions that are global in time. We then analyse how the regularity of the initial data propagates to these solutions. The functional framework adopted for the initial data is that of homogeneous Besov spaces, which generalise better-known classes of spaces: homogeneous Sobolev spaces and Hölder spaces. The Ericksen-Leslie system is considered in the simplified version proposed by F. Lin and C. Liu, a version that retains the main difficulties of the original system. We study this problem in dimension greater than or equal to two. We consider the system in the inhomogeneous case, that is, with a variable density. Moreover, we are interested in the case of a density of low regularity, which is allowed to exhibit discontinuities, so the result we prove can be related to the dynamics of mixtures of immiscible nematics. We prove the global-in-time existence of weak solutions with scaling-invariant regularity, assuming a smallness condition on the initial data in critical Besov spaces. We also prove the uniqueness of these solutions under an additional regularity assumption on the initial data. The Beris-Edwards system is analysed in the two-dimensional case. We obtain the existence and uniqueness of global-in-time weak solutions when the initial data lie in specific Sobolev spaces (without a smallness condition). The regularity level of these functional spaces is suited to giving a proper meaning to the weak solutions. Uniqueness is a delicate question and requires a doubly logarithmic estimate for a norm of the difference between two solutions in a suitable Banach space; Osgood's lemma then allows us to conclude that the solution is unique. We also obtain a result on the propagation of regularity of positive index. In order to take the inertia of the molecules into account, we also consider the model proposed by Qian and Sheng, and we study the case of dimension greater than or equal to two. This system exhibits a specific structural feature, namely the presence of an inertial term, which generates significant difficulties. We prove the existence of a Lyapunov functional as well as the existence and uniqueness of global-in-time classical solutions for small initial data. Finally, we analyse the Boussinesq system and show the existence and uniqueness of global-in-time solutions. We consider a viscosity depending on the temperature, assuming only that the initial temperature is bounded, while the initial velocity lies in Besov spaces with critical regularity index. The initial data have a large vertical component and satisfy a specific smallness condition on the horizontal components: these must be exponentially small with respect to the vertical component
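For reference, the simplified Ericksen-Leslie system of Lin and Liu mentioned above is commonly written as follows (a standard homogeneous, constant-coefficient formulation recalled only for orientation; the thesis treats the inhomogeneous, variable-density case):

```latex
% Simplified Ericksen-Leslie system (Lin-Liu): velocity u, pressure p,
% director d with |d| = 1, all constants normalized to 1.
\partial_t u + (u\cdot\nabla)u - \Delta u + \nabla p
   = -\,\operatorname{div}\bigl(\nabla d \odot \nabla d\bigr), \qquad
\operatorname{div} u = 0,
\\
\partial_t d + (u\cdot\nabla) d = \Delta d + |\nabla d|^2\, d,
% where (\nabla d \odot \nabla d)_{ij} = \partial_i d \cdot \partial_j d.
```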
Segmentation spatio-temporelle et indexation vidéo dans le domaine des représentations hiérarchiques by Claire Morand( )

1 edition published in 2009 in French and held by 1 WorldCat member library worldwide

This thesis aims at proposing a solution for scalable object-based indexing of HD video streams compressed with MJPEG2000. In this context, on the one hand, we work in the hierarchical transform domain of the 9/7 Daubechies wavelets and, on the other hand, the scalable representation calls for multiscale methods, working from low to high resolution. The first part of this manuscript is dedicated to the definition of a method for the automatic extraction of objects having their own motion. It is based on the combination of a robust global motion estimation with a morphological color segmentation at low resolution. The obtained result is then refined following the data order of the scalable stream. The second part is the definition of an object descriptor based on multiscale histograms of the wavelet coefficients. Finally, the performance of the proposed method is evaluated in the context of scalable content-based queries
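A minimal sketch of a multiscale wavelet-histogram descriptor in the spirit described above, using PyWavelets' bior4.4 filters as a stand-in for the 9/7 wavelets; the subband selection, bin count, histogram range and normalization are illustrative assumptions rather than the descriptor defined in the thesis.

```python
import numpy as np
import pywt

def multiscale_histogram_descriptor(image, levels=3, bins=16):
    """Concatenate per-subband histograms of 2-D wavelet detail coefficients.

    Sketch only: one normalized histogram per detail subband and per level of
    a multilevel decomposition (bior4.4, close to the CDF 9/7 filter pair).
    The fixed histogram range assumes an input normalized to [0, 1].
    """
    coeffs = pywt.wavedec2(image, "bior4.4", level=levels)
    parts = []
    for detail_level in coeffs[1:]:          # skip the approximation band
        for band in detail_level:            # horizontal, vertical, diagonal
            hist, _ = np.histogram(band, bins=bins, range=(-1.0, 1.0), density=True)
            parts.append(hist)
    return np.concatenate(parts)

# Usage: d = multiscale_histogram_descriptor(gray_frame.astype(float) / 255.0)
```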
Le problème inverse en l'électrocardiographie by Alejandro Lopez Rincon( )

1 edition published in 2013 in English and held by 1 WorldCat member library worldwide

In the inverse problem of electrocardiography, the goal is to reconstruct the electrophysiological activity of the heart without measuring directly on its surface (without catheter interventions). It is important to note that the numerical solution of the inverse problem is currently based on the quasi-static model. This model does not take the dynamics of the heart into account and can cause errors in the reconstruction of the solution on the surface of the heart. This thesis investigates different methodologies for solving the inverse problem of electrocardiography, such as artificial intelligence and dynamic models. The effects of using different operators with boundary element and finite element methods are also investigated
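To illustrate the kind of quasi-static reconstruction the thesis questions, here is a minimal Tikhonov-regularized least-squares inversion of a torso-to-heart transfer matrix; the matrix A, the measurements and the regularization weight are hypothetical placeholders, not data or code from the thesis.

```python
import numpy as np

def tikhonov_inverse(A, body_potentials, lam=1e-2):
    """Quasi-static inverse sketch: minimize ||A x - b||^2 + lam ||x||^2.

    A maps (hypothetical) epicardial potentials x to body-surface potentials b;
    the regularized normal equations give x = (A^T A + lam I)^{-1} A^T b.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ body_potentials)

# Toy usage with a random transfer matrix and noisy synthetic measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((120, 60))            # hypothetical transfer operator
x_true = np.sin(np.linspace(0.0, np.pi, 60))  # hypothetical epicardial potentials
b = A @ x_true + 0.01 * rng.standard_normal(120)
x_rec = tikhonov_inverse(A, b, lam=1e-1)
```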
Développement de compteurs à scintillation hautes performances et de très basse radioactivité pour le calorimètre du projet SuperNEMO by Emmanuel Chauveau( )

1 edition published in 2010 in French and held by 1 WorldCat member library worldwide

SuperNEMO is a next-generation double beta decay experiment which will extend the successful "tracko-calo" technique employed in NEMO 3. The main characteristic of this type of detector is its ability not only to identify double beta decays, but also to measure its own background components. The project aims to reach a sensitivity of up to 10^26 years on the half-life of 82Se. One of the main challenges of the research and development is to achieve an unprecedented energy resolution for the electron calorimeter, better than 8 % FWHM at 1 MeV. This thesis contributes to improving the performance of the scintillators and photomultipliers and to reducing their radioactivity, including in particular the development of a new photomultiplier in collaboration with Photonis
Etudes sur les équations de Ramanujan-Nagell et de Nagell-Ljunggren ou semblables by Benjamin Dupuy( )

1 edition published in 2009 in French and held by 1 WorldCat member library worldwide

In this thesis, we study two types of Diophantine equations. The first part of our study concerns the resolution of the Ramanujan-Nagell equations $Cx^2 + b^{2m}D = y^n$. The second part concerns the Nagell-Ljunggren equations $\frac{x^p + y^p}{x + y} = p^e z^q$, including the diagonal case $p = q$. Our new results are applied to Diophantine equations of the form $x^p + y^p = Bz^q$. The Fermat-Catalan equation (the case $B = 1$) is the subject of a special study
Contrôle de la dynamique de la leucémie myéloïde chronique par Imatinib by Chahrazed Benosman( )

1 edition published in 2010 in French and held by 1 WorldCat member library worldwide

Modelling hematopoiesis is the focus of our research. Hematopoietic stem cells (HSC) are undifferentiated cells, located in the bone marrow, with unique abilities of self-renewal and differentiation (production of white cells, red blood cells and platelets). The process of hematopoiesis often exhibits abnormalities causing hematological diseases. To model Chronic Myeloid Leukemia (CML), a frequent hematological disease, we represent the hematopoiesis of normal and leukemic cells by means of ordinary differential equations (ODE). The homeostasis of normal and leukemic cells is supposed to be different and to depend on some lines of normal and leukemic HSC. We analyze the global dynamics of the model to obtain conditions for the regeneration of hematopoiesis and the persistence of CML. We prove as well that normal and leukemic cells cannot coexist for a long time. Imatinib is the main treatment of CML, with a posology varying from 400 to 1000 mg per day. Patients respond to therapy at several levels: hematologic, cytogenetic and molecular. Therapy fails in two cases: the patient takes a long time to react and a suboptimal response occurs, or the patient becomes resistant after an initial response. Determining the optimal dosage required to reduce leukemic cells is another challenge. We approach therapy effects as an optimal control problem, minimizing the cost of treatment and the level of leukemic cells. Suboptimal response, resistance and recovery forms are obtained through the influence of imatinib on the division and mortality rates of leukemic cells. Hematopoiesis can also be investigated according to the age of cells. An age-structured system describing the evolution of normal and leukemic HSC shows that the division rate of leukemic HSC plays a crucial role in determining the optimal control. When controlling the growth of cells under interspecific competition between normal and leukemic HSC, we prove that the optimal dosage is related to the homeostasis of leukemic HSC
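A toy competition model between normal and leukemic stem-cell compartments, in the spirit of the description above; the equations, the shared carrying capacity and the constant treatment term are illustrative assumptions and do not reproduce the model of the thesis.

```python
from scipy.integrate import solve_ivp

def cml_toy_model(t, y, r_n=0.05, r_l=0.09, K=1.0, treatment=0.06):
    """Toy logistic competition between normal (n) and leukemic (l) HSC.

    A shared carrying capacity K couples the two populations; a constant extra
    death rate on leukemic cells stands in for the effect of imatinib. With any
    positive treatment, this toy model drives the leukemic clone to extinction,
    echoing the non-coexistence result mentioned in the abstract.
    """
    n, l = y
    dn = r_n * n * (1.0 - (n + l) / K)
    dl = r_l * l * (1.0 - (n + l) / K) - treatment * l
    return [dn, dl]

sol = solve_ivp(cml_toy_model, (0.0, 400.0), [0.5, 0.1], dense_output=True)
n_final, l_final = sol.y[:, -1]
```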
Segmentation multi-agents en imagerie biologique et médicale : application aux IRM 3D by Richard Moussa( )

1 edition published in 2011 in French and held by 1 WorldCat member library worldwide

Image segmentation is a crucial operation in image processing. It is always the starting point for processes of shape analysis, motion detection, visualisation, and quantitative estimation of linear distances, surfaces and volumes. To these ends, segmentation consists in categorising voxels into classes based on their local intensities, their spatial location and their shape or neighbourhood characteristics. The difficulty in obtaining stable results from segmentation methods on medical images comes from the different types of noise present. In these images, noise takes two forms: physical noise, due to the acquisition system, in our case MRI (Magnetic Resonance Imaging), and physiological noise, due to the patient. Both must be taken into account by any image segmentation method. During this thesis, we focused on multi-agent models based on the biological behaviour of spiders and ants to perform the segmentation task. For the spiders, we proposed a semi-automatic approach using the image histogram to determine the number of objects to detect. For the ants, we proposed two approaches: the first, a classical one, uses the image gradient, while the second, more original, uses an intervoxel partition of the image. We also proposed a way to speed up the segmentation process by using GPUs (Graphics Processing Units). Finally, these two methods were evaluated on brain MRI images and compared with classical segmentation methods: region growing and Otsu for the spider model, and the Sobel gradient for the ants
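As one of the baselines mentioned in the evaluation above, Otsu's thresholding can be written compactly with NumPy; this is the standard textbook formulation, shown only for comparison, not the multi-agent method of the thesis.

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Otsu's method: choose the threshold maximizing between-class variance.

    Works on the intensity histogram and returns a threshold in the same
    intensity units as the input image.
    """
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                       # probability of the low class
    w1 = 1.0 - w0                           # probability of the high class
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.clip(w0, 1e-12, None)
    mu1 = (cum_mean[-1] - cum_mean) / np.clip(w1, 1e-12, None)
    between_class_var = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between_class_var)]

# Usage: mask = mri_volume > otsu_threshold(mri_volume)
```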
Opérateurs de Toeplitz sur l'espace de Bergman harmonique et opérateurs de Toeplitz tronqués de rang fini by Fanilo Rajaofetra Randriamahaleo( )

1 edition published in 2015 in French and held by 1 WorldCat member library worldwide

In the first part of the thesis, we present the classical results concerning the Hardy space, model spaces, and the analytic and harmonic Bergman spaces. Basic notions such as projections and reproducing kernels are introduced there. We then present our results concerning, on the one hand, the stability of the product and the commutativity of two quasihomogeneous Toeplitz operators and, on the other hand, the matrix description of truncated Toeplitz operators of type "a" in the finite-dimensional case
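For reference, the Toeplitz and truncated Toeplitz operators mentioned above are defined as follows (standard definitions, recalled only as background):

```latex
% Toeplitz operator with symbol \varphi on the (analytic or harmonic) Bergman
% space, where P denotes the orthogonal projection of L^2 onto that space:
T_\varphi f \;=\; P(\varphi f).
% Truncated Toeplitz operator on a model space K_\Theta = H^2 \ominus \Theta H^2
% (\Theta an inner function), with P_\Theta the orthogonal projection onto K_\Theta:
A_\varphi f \;=\; P_\Theta(\varphi f), \qquad f \in K_\Theta .
```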
Etude et optimisation des transferts de chaleur en injection moulage : analyse de leur influence sur les propriétés finales by Hamdy Abo Ali Hassan( )

1 edition published in 2009 in English and held by 1 WorldCat member library worldwide

Plastics are typically polymers of high molecular weight, and may contain other substances to improve performance and/or reduce costs. The plastics industry is one of the world's fastest growing industries; almost every product used in daily life involves plastic. There are different methods for polymer processing (thermoforming, blow molding, compression molding, transfer molding, extrusion, injection molding, etc.), which differ in the fabrication method, the materials used, the quality of the product and the form of the final product. Demand for injection molded parts continues to increase every year because plastic injection molding is well known as one of the most efficient manufacturing techniques for economically producing precise plastic parts with complex geometry, at low cost and in large quantities. The plastic injection molding process is a cyclic process in which polymer is injected into a mold cavity and solidifies to form a plastic part. There are three significant stages in each cycle. The first stage is filling the cavity with hot polymer melt at high injection pressure and temperature (filling and post-filling stage). It is followed by cooling the injected polymer material until it is completely solidified (cooling stage); finally, the solidified part is ejected (ejection stage)
Algorithmique distribuée asynchrone avec une majorité de pannes by David Bonnin( )

1 edition published in 2015 in French and held by 1 WorldCat member library worldwide

In distributed computing, the asynchronous message-passing model with crashes is well known and considered in many articles, because of its realism and because it is simple enough to be used and complex enough to represent many real problems. In this model, n processes communicate by exchanging messages, but without any bound on communication delays, i.e. a message may take an arbitrarily long time to reach its destination. Moreover, up to f among the n processes may crash, and thus definitely stop working. Those crashes are undetectable because of the system asynchronism, and restrict the potential results in this model. In many cases, known results in those systems must verify the property of a strict minority of crashes. For example, this applies to the implementation of atomic registers and the solving of renaming. This barrier of a majority of crashes, explained by the CAP theorem, restricts numerous problems, and the asynchronous message-passing model with a majority of crashes is thus not well studied and rather unknown. Hence, studying what can be done in this case of a majority of crashes is interesting. This thesis tries to analyse this model through two main problems. The first part studies the implementation of shared objects, similar to usual registers, by defining x-colored register banks and α-registers. The second part extends the renaming problem into k-redundant renaming, for both one-shot and long-lived versions, and similarly extends the shared objects called splitters into k-splitters
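For background, the classical read/write splitter that the k-splitters mentioned above generalize can be sketched as follows; this is the textbook shared-memory object (in the style used for Moir-Anderson renaming), given only to fix ideas, not the message-passing construction of the thesis.

```python
class Splitter:
    """Classical splitter: when several processes enter concurrently, at most
    one returns 'stop', not all return 'right', and not all return 'down'.
    Textbook sketch using two shared registers; illustrative only."""

    def __init__(self):
        self.last = None        # shared register holding the last entrant
        self.door_open = True   # shared boolean register

    def enter(self, pid):
        self.last = pid
        if not self.door_open:
            return "right"
        self.door_open = False
        if self.last == pid:
            return "stop"
        return "down"

# Usage: each process (e.g. each thread) calls splitter.enter(its_id) once.
```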
Scheduling and memory optimizations for sparse direct solver on multi-core/multi-gpu cluster systems by Xavier Lacoste( )

1 edition published in 2015 in English and held by 1 WorldCat member library worldwide

The ongoing hardware evolution exhibits an escalation in the number, as well as in the heterogeneity, of computing resources. The pressure to maintain reasonable levels of performance and portability forces application developers to leave the traditional programming paradigms and explore alternative solutions. PaStiX is a parallel sparse direct solver, based on a dynamic scheduler for modern hierarchical manycore architectures. In this thesis, we study the benefits and the limits of replacing the highly specialized internal scheduler of the PaStiX solver by two generic runtime systems: PaRSEC and StarPU. Thus, we have to describe the factorization algorithm as a task graph that we provide to the runtime system. It can then decide how to process and optimize the graph traversal in order to maximize the algorithm's efficiency for the targeted hardware platform. A comparative study of the performance of the PaStiX solver on top of its original internal scheduler, PaRSEC, and StarPU frameworks is performed. The analysis highlights that these generic task-based runtimes achieve comparable results to the application-optimized embedded scheduler on homogeneous platforms. Furthermore, they are able to significantly speed up the solver on heterogeneous environments by taking advantage of the accelerators while hiding the complexity of their efficient manipulation from the programmer. In this thesis, we also study the possibilities of building a distributed sparse linear solver on top of task-based runtime systems to target heterogeneous clusters. To permit an efficient and easy usage of these developments in parallel simulations, we also present an optimized distributed interface aiming at hiding the complexity of the construction of a distributed matrix from the user
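A minimal illustration of the "factorization as a task graph handed to a runtime" idea described above, using only the Python standard library as a stand-in for PaRSEC or StarPU; the tasks, dependencies and worker count are placeholders.

```python
from collections import deque
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def run_task_graph(tasks, deps, workers=4):
    """Execute a DAG of tasks, releasing each task once its prerequisites finish.

    tasks: dict name -> callable with no arguments.
    deps:  dict name -> iterable of prerequisite task names.
    Toy stand-in for submitting a factorization task graph to a runtime system.
    """
    remaining = {name: set(deps.get(name, ())) for name in tasks}
    dependents = {name: [] for name in tasks}
    for name, prereqs in remaining.items():
        for p in prereqs:
            dependents[p].append(name)
    ready = deque(name for name, prereqs in remaining.items() if not prereqs)
    running = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while ready or running:
            while ready:
                name = ready.popleft()
                running[pool.submit(tasks[name])] = name
            done, _ = wait(running, return_when=FIRST_COMPLETED)
            for future in done:
                finished = running.pop(future)
                for child in dependents[finished]:
                    remaining[child].discard(finished)
                    if not remaining[child]:
                        ready.append(child)

# Usage: run_task_graph({"a": f, "b": g, "c": h}, {"c": ["a", "b"]})
```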
Modes de représentation pour l'éclairage en synthèse d'images by Romain Pacanowski( )

1 edition published in 2009 in French and held by 1 WorldCat member library worldwide

In image synthesis, the main computation involved in generating an image is characterized by an equation called the rendering equation [Kajiya1986]. This equation represents the law of energy conservation. It stipulates that the light emanating from the scene objects is the sum of the emitted energy and the reflected energy. Moreover, the reflected energy at a surface point is defined as the convolution of the incoming lighting with a reflectance function. The reflectance function models the object material and represents, in the rendering equation, a directional and energetic filter that describes the surface behavior regarding reflection. In this thesis, we introduce new representations for the reflectance function and the incoming lighting. In the first part of this thesis, we propose two new models for the reflectance function. The first model is targeted at artists, to help them create and edit highlights. Our main idea is to let the user paint and sketch highlight characteristics (shape, color, gradient and texture) in a plane parametrized by the incident lighting direction. The second model is designed to efficiently represent isotropic material data. To achieve this result, we introduce a new representation of the reflectance function that uses rational polynomials. Their coefficients are computed using a fitting process that guarantees an optimal solution regarding convergence. In the second part of this thesis, we introduce a new volumetric structure for indirect illumination that is directionally represented with irradiance vectors. We show that our representation is compact and robust to geometric variations, that it can be used as a caching system for interactive and offline rendering, and that it can also be transmitted with streaming techniques. Finally, we introduce two modifications of the incoming lighting to improve the shape depiction of a surface. The first modification consists in warping the incoming light directions whereas the second one consists in scaling the intensity of each light source
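For reference, the rendering equation referred to above ([Kajiya1986]) can be written as:

```latex
% Rendering equation: outgoing radiance = emitted + reflected radiance.
L_o(x, \omega_o) \;=\; L_e(x, \omega_o)
  \;+\; \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
        (\omega_i \cdot n)\, \mathrm{d}\omega_i ,
% where f_r is the reflectance function (BRDF), n the surface normal at x,
% and the integral runs over the hemisphere \Omega of incoming directions.
```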
Contribution à la modélisation et à la simulation numérique multi-échelle du transport cinétique électronique dans un plasma chaud by Jessy Mallet( )

1 edition published in 2012 in English and held by 1 WorldCat member library worldwide

In plasma physics, the transport of electrons can be described from a kinetic point of view or from a hydrodynamical point of view. Classically in kinetic theory, a Fokker-Planck equation coupled with the Maxwell equations is used to describe the evolution of electrons in a collisional plasma. More precisely, the solution of the kinetic equations is a non-negative distribution function f specifying the density of particles as a function of the velocity of particles, the time and the position in space. In order to approximate the solution of such problems, many computational methods have been developed. Here, a deterministic method is proposed in a planar geometry. This method is based on different high order numerical schemes. Each deterministic scheme used presents many fundamental properties such as conservation of the particle flux, preservation of the positivity of the distribution function and conservation of energy. However, the kinetic computation of this accurate method is too expensive to be used in practical computations, especially in multi-dimensional space. To reduce the computational time, the plasma can be described by a hydrodynamic model. However, for the new high energy target drivers, the kinetic effects are too important to neglect and to replace kinetic calculations by the usual macroscopic Euler models. That is why an alternative approach is proposed, considering an intermediate description between the fluid and the kinetic levels. To describe the transport of electrons, the new reduced kinetic model M1 proposed here is based on a moment approach for the Maxwell-Fokker-Planck equations. This moment model uses an integration of the electron distribution function over the propagation direction and retains only the energy of particles as a kinetic variable. The velocity variable is written in spherical coordinates and the model is obtained by considering the system of moments with respect to the angular variable. The closure of the moment system is obtained under the assumption that the distribution function is a minimum entropy function. This model is proved to satisfy fundamental properties such as the non-negativity of the distribution function, conservation laws for collision operators and entropy dissipation. Moreover, an entropic discretization in the velocity variable is proposed on the semi-discrete model. The M1 model can be generalized to the MN model by considering N given moments. The N-moment model obtained also preserves fundamental properties such as conservation laws and entropy dissipation. The associated semi-discrete scheme is shown to preserve the conservation properties and entropy decay
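To fix ideas, the angular moments underlying an M1-type model take the following form (a standard presentation of the moment construction, stated here for the angular variable only; the actual electron-transport model of the thesis couples this to Fokker-Planck collision and electromagnetic terms):

```latex
% First angular moments of the distribution function f over the propagation
% direction \Omega \in S^2 (standard M1 construction):
f_0 = \int_{S^2} f \,\mathrm{d}\Omega, \qquad
f_1 = \int_{S^2} \Omega\, f \,\mathrm{d}\Omega, \qquad
f_2 = \int_{S^2} \Omega \otimes \Omega\, f \,\mathrm{d}\Omega .
% The M1 closure expresses f_2 in terms of (f_0, f_1) by selecting, among all
% nonnegative distributions with these first two moments, the one minimizing
% the entropy; the anisotropy parameter |f_1|/f_0 \le 1 controls the closure.
```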
 
Audience level: 0.89 (from 0.88 for Leçons de ... to 0.97 for On the dyn ...)

Alternative Names
École doctorale 39

École doctorale de mathématiques de Bordeaux

École doctorale de mathématiques et informatique (Bordeaux)

École doctorale de mathématiques et informatique de Bordeaux

École doctorale Mathématiques et informatique (Bordeaux)

École doctorale Mathématiques et informatique (Talence, Gironde)

ED 039

ED 39

ED039

ED39

EDMIB

Mathématiques et Informatique (Bordeaux)

Mathématiques et Informatique (Talence, Gironde)

Université Bordeaux I. UFR de Mathématiques et Informatique

Université de Bordeaux I, École doctorale de mathématiques et informatique

Languages
French (16)

English (5)