Ecole Doctorale Mathématiques et Informatique de Marseille (Marseille)
Overview
Works: 395 works in 535 publications in 2 languages and 533 library holdings

Roles: Other, Degree grantor
Most widely held works by Ecole Doctorale Mathématiques et Informatique de Marseille (Marseille)
Traitement de surfaces triangulées pour la construction de modèles géologiques structuraux by
Nam-Van Tran(
)
3 editions published between 2008 and 2017 in French and held by 4 WorldCat member libraries worldwide
Our work originates in the study of triangulated meshes derived from seismic data, which are widely used as the base elements of geometric reservoir modeling. Our input data, obtained from physical measurements, are typically inhomogeneous, sparse, noisy, and voluminous. We are therefore interested in treatments that improve surface quality and that detect and reconstruct missing elements: surfaces representing the rupture of horizons and the relative displacement of the two separated parts (the faults)
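Treatments that improve surface quality typically begin by scoring each triangle of the mesh. As a hypothetical illustration only (not the thesis's actual pipeline), the sketch below flags sliver triangles using the classical radius-ratio quality measure:

```python
import math

def triangle_quality(a, b, c):
    """Radius-ratio quality of a 3D triangle: 1.0 for an equilateral
    triangle, approaching 0.0 as the triangle degenerates to a sliver."""
    def dist(p, q):
        return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))
    la, lb, lc = dist(b, c), dist(a, c), dist(a, b)
    s = (la + lb + lc) / 2.0                     # semi-perimeter
    area2 = s * (s - la) * (s - lb) * (s - lc)   # Heron's formula (area squared)
    if area2 <= 0.0:
        return 0.0                               # degenerate (collinear) triangle
    area = math.sqrt(area2)
    inradius = area / s
    circumradius = la * lb * lc / (4.0 * area)
    return 2.0 * inradius / circumradius         # normalized: equilateral -> 1

def flag_poor_triangles(vertices, triangles, threshold=0.2):
    """Return the indices of triangles whose quality falls below `threshold`."""
    return [i for i, (u, v, w) in enumerate(triangles)
            if triangle_quality(vertices[u], vertices[v], vertices[w]) < threshold]
```

The flagged triangles would then be candidates for remeshing or removal before any reconstruction step.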
Circuits de rétroaction dans les réseaux génétiques de régulation intercellulaires by
Anne Crumière(
Book
)
3 editions published between 2008 and 2017 in French and held by 3 WorldCat member libraries worldwide
Biologists often represent genetic interactions by directed graphs, called gene regulatory graphs. The vertices denote the genes of the system and the edges the regulatory effect of one gene on another; an edge carries a positive sign for an activation and a negative sign for an inhibition. This thesis studies the relations between the structure of these graphs and their dynamical properties. In the 1980s, the biologist R. Thomas stated the following rule: a necessary condition for multistability is the presence of a positive circuit in the regulatory graph, the sign of a circuit being the product of the signs of its edges. This rule has been proved in a differential formalism and, more recently, in a discrete setting, but always in the case where the genes belong to a single cell. One may ask whether the rule remains valid for a system of intracellular and intercellular genetic interactions. We first consider the simplified case where the cells are arranged on an infinite one-dimensional grid, in a Boolean setting. Intercellular communication is assumed to be local: a gene interacts with genes of its own cell and of the neighboring cells (left or right). This assumption, which is biologically reasonable, is standard in the definition of cellular automata. We then generalize this model by assuming that the cells are located on a lattice, i.e. a discrete subgroup of R^d. Moreover, we work in the setting where gene expression levels are multivalued and intercellular communication extends to an arbitrary neighborhood. In this general setting, we obtain Thomas's first rule with a spatial condition on the stable states. The model is illustrated by two applications related to the development of Drosophila: the segmentation of the embryo and the formation of the sensory organ
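Thomas's rule hinges on the sign of a circuit being the product of its edge signs. As a minimal illustration of that notion (not the thesis's formalism), the sketch below brute-forces the search for a positive circuit in a small signed regulatory graph:

```python
def circuit_sign(circuit, edges):
    """Sign of a circuit = product of the signs (+1/-1) of its edges.
    `circuit` is a vertex list [v0, ..., vk-1], with the edge vk-1 -> v0 implied."""
    sign = 1
    for i in range(len(circuit)):
        sign *= edges[(circuit[i], circuit[(i + 1) % len(circuit)])]
    return sign

def has_positive_circuit(edges):
    """Brute-force search for a positive circuit in a small signed digraph.
    `edges` maps (u, v) -> +1 (activation) or -1 (inhibition)."""
    vertices = {u for u, _ in edges} | {v for _, v in edges}

    def dfs(start, node, path, sign):
        for (u, v), s in edges.items():
            if u != node:
                continue
            if v == start and sign * s == 1:
                return True                       # closed a positive circuit
            if v not in path and dfs(start, v, path | {v}, sign * s):
                return True
        return False

    return any(dfs(v, v, {v}, 1) for v in vertices)
```

For example, a two-gene mutual-inhibition loop (two negative edges) forms a positive circuit, which is the classical motif permitting bistability.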
Intégration de données ultrasonores peropératoires dans le geste de chirurgie orthopédique assisté par ordinateur by
Agung Alfiansyah(
Book
)
3 editions published between 2008 and 2018 in French and held by 3 WorldCat member libraries worldwide
This work addresses the integration of ultrasound imaging for intraoperative data acquisition in computer-assisted orthopaedic surgery, in particular for hip surgery applications. The goal is to improve the quality of the surgery using a minimally invasive, real-time, and highly available imaging device. The method we propose uses a feature-based registration between ultrasound images and a preoperative CT scan volume. We present an ultrasound image segmentation method based on a deformable model that integrates a regional energy term to detect the local characteristics of ultrasound images. The feature-based registration is a variant of the ICP algorithm that uses a precalculated distance map with a Levenberg-Marquardt optimization. We also propose a protocol for the pre- and intraoperative data acquisition; the constraints of a real operating room are taken into account in its design, while trying to preserve the ergonomics required by the surgeon. An extensive validation, conducted on phantoms and a cadaver, is presented in this thesis. From this validation we assess the performance of the data acquisition protocol, the precision of the segmentation, and the robustness and precision of the registration, measured both quantitatively and qualitatively. Finally, we propose some possible improvements to the segmentation and registration
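The thesis uses an ICP variant with a precomputed distance map and Levenberg-Marquardt optimization; as a rough sketch of the underlying idea only, here is the classical point-to-point ICP with brute-force nearest neighbors and an SVD-based (Kabsch) rigid transform:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch/SVD solution for paired point sets of shape (n, d))."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, iters=20):
    """Classical point-to-point ICP: alternate nearest-neighbor matching
    and optimal rigid alignment until the point sets agree."""
    cur = src.copy()
    for _ in range(iters):
        # pair each source point with its nearest destination point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

A distance-map variant replaces the explicit nearest-neighbor search with lookups in a precomputed distance field, which is what makes the registration fast enough for intraoperative use.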
Localisation et parcellisation corticale pour la mise en correspondance intersujets de données cérébrales by
Cédric Clouchoux(
Book
)
3 editions published between 2008 and 2018 in French and held by 3 WorldCat member libraries worldwide
This thesis deals with intersubject cortical surface matching, a central point of anatomical and functional MR data analysis. The problem is tackled by defining an anatomically invariant surface parameterization process. The goal is to build a reproducible surface-based referential able to locate any point relative to a number of anatomically invariant features. Those features, defined via a model of cortical organisation, are detected and identified automatically. From these features, the parameterization is extrapolated over the whole cortex by solving a PDE. We also propose a cortical parcellation technique based on the coordinate system built beforehand. The processes are tested on a large set of real data
Tomographie depuis plusieurs sources vers de multiples destinations dans les réseaux de grilles informatiques hautes performances by
Laurent Bobelin(
Book
)
3 editions published between 2008 and 2017 in French and held by 3 WorldCat member libraries worldwide
Nowadays, grids connect up to thousands of communicating resources that may interact in a partially or totally coordinated way. Consequently, applications running on this kind of platform often involve massively concurrent bulk data transfers. In order to optimize overall completion times, those transfers have to be scheduled based on knowledge of network performance and topology. Identifying a network topology and inferring its performance is a classic problem; doing so using only end-to-end measurements at the application level is a method known as network tomography. When the topology reflects the capacities of sets of links with respect to a metric, the resulting representation is called a Metric-Induced Network Topology (MINT). This type of representation, obtained using statistical methods, has been widely used to represent the performance of client/server communication protocols; however, it is no longer accurate when dealing with grids. In this thesis, we introduce a novel representation of the knowledge inferred from multiple-source, multiple-destination measurements, algorithms to reconstruct such representations, and methods to probe the network in order to obtain an initial data set from which the topology can be reconstructed. We also describe the tool we have designed and implemented to achieve these goals, and the experiments we have conducted to validate our methods and algorithms
Contribution à l'accélération de la simulation stochastique sur des modèles AltaRica Data Flow by
Minh-Thang Khuu(
Book
)
3 editions published between 2008 and 2017 in French and held by 3 WorldCat member libraries worldwide
This thesis studies the acceleration of stochastic simulation applied to states/transitions models. In system dependability studies, stochastic simulation is practically the only method applicable to large states/transitions models; however, simulation runs are likely to be very long before statistically stable results are obtained. To reduce simulation execution time, we examine the representation of the studied system by instructions of a programming language. The AltaRica Data Flow (ADF) language, which generalizes the formalisms most used in system dependability studies, is the starting point of this thesis. We implement a transformation of an ADF description into C code, together with the automated generation of a stochastic simulator for the studied system. The experiments carried out justify the use of the generated simulators compared with traditional simulators
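The speed-up comes from compiling the model into straight-line code, but the underlying computation is ordinary Monte Carlo simulation of a states/transitions model. As an illustrative stand-in (not the ADF semantics), the sketch below estimates the unavailability of a single repairable component with exponential failure and repair times:

```python
import random

def simulate_unavailability(lam, mu, horizon, runs, seed=42):
    """Monte Carlo estimate of the average unavailability of one repairable
    component (failure rate `lam`, repair rate `mu`) over [0, horizon].
    A tiny two-state states/transitions dependability model."""
    rng = random.Random(seed)
    total_down = 0.0
    for _ in range(runs):
        t, up = 0.0, True
        while t < horizon:
            rate = lam if up else mu
            dwell = rng.expovariate(rate)        # exponential sojourn time
            if not up:
                total_down += min(dwell, horizon - t)
            t += dwell
            up = not up                          # fire the failure/repair transition
    return total_down / (runs * horizon)
```

For a long horizon the estimate approaches the steady-state value lam / (lam + mu); compiling such a loop to C, as the thesis does for ADF models, mainly removes the interpretation overhead of each transition firing.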
Détermination et stabilité du type métrique des singularités by
Guillaume Valette(
)
2 editions published in 2003 in French and held by 3 WorldCat member libraries worldwide
2 editions published in 2003 in French and held by 3 WorldCat member libraries worldwide
Dissimilarités de Robinson : algorithmes de reconnaissance et d'approximation by
Morgan Seston(
Book
)
3 editions published between 2008 and 2017 in French and held by 3 WorldCat member libraries worldwide
A distance, or more generally a dissimilarity d defined on a set X of n elements, is called Robinsonian if there exists a total order < on X such that for all x < y < z, d(x,z) >= max{d(x,y), d(y,z)}. Such an order is said to be compatible with d. In the first part, we present two recognition algorithms for Robinson dissimilarities, of complexity O(n^3) and O(n^2 log n). These algorithms compactly encode the set of compatible orders via PQ-trees. The second part concerns the approximation, in the l_infinity norm, of a dissimilarity by a Robinson one. More formally, we seek a Robinson dissimilarity d_R minimizing the error ||d - d_R||_infinity = max_{x,y in X} |d(x,y) - d_R(x,y)|. We show that this problem is NP-hard. We also present a factor-16 approximation algorithm for this problem, which constitutes the main result of this thesis
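The Robinson condition can be checked directly from the definition. The sketch below is a naive illustration for small examples only (the thesis achieves O(n^2 log n) recognition with a PQ-tree encoding of all compatible orders; this brute force is exponential):

```python
from itertools import permutations

def is_robinson(d, order):
    """Check that dissimilarity matrix d is Robinsonian under `order`:
    for x < y < z in the order, d[x][z] >= max(d[x][y], d[y][z])."""
    n = len(order)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                x, y, z = order[i], order[j], order[k]
                if d[x][z] < max(d[x][y], d[y][z]):
                    return False
    return True

def compatible_orders(d):
    """Naive enumeration of all compatible orders (O(n! * n^3));
    usable only for tiny n."""
    n = len(d)
    return [p for p in permutations(range(n)) if is_robinson(d, p)]
```

For instance, the line metric d(i,j) = |i - j| is Robinsonian, with the natural order and its reversal among the compatible orders.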
Plans projectifs, cliques et enveloppes convexes by
Roumen Nedev(
Book
)
3 editions published between 2008 and 2018 in French and held by 3 WorldCat member libraries worldwide
In this work we study different types of convex hulls of subsets of the vertices of the unit cube. We characterize the convex hull of the projective planes of order 2, considered as a subset of the set of the 35 triples of a 7-element set. In a second part, we study the neighbourliness of the k-clique polytope of the complete graph. We show that this polytope is 3-neighbourly, and we conjecture that the analogous polytope defined on complete r-uniform hypergraphs is (2r - 1)-neighbourly. We describe an integer programming model which allows us to verify this hypothesis in some particular cases
Sur une méthode numérique ondelettes / domaines fictifs lisses pour l'approximation de problèmes de Stefan by
Ping Yin(
Book
)
2 editions published in 2011 in French and held by 2 WorldCat member libraries worldwide
Our work is devoted to the definition, analysis, and implementation of new algorithms for the numerical approximation of the solution of the two-dimensional Stefan problem. In this type of problem, a parabolic partial differential equation defined on an open set omega is coupled with another equation which governs the boundary gamma of the domain itself. The difficulties traditionally associated with such problems are: the particular formulation of the equation on the boundary of the domain; the approximation of a solution defined on a general domain; the difficulties associated with trace operations (approximation, conditioning); and the difficulties associated with the regularity of the domain. In addition, many situations of physical interest require approximations of high degree. Our work is based on a level-set formulation for the equation on the domain, and on a fictitious-domain formulation (on a larger domain Omega) for the initial equation. Boundary conditions are enforced through Lagrange multipliers on a control boundary Gamma, which differs from the boundary gamma of the domain omega. The approximation uses a finite difference scheme for the time derivative, a two-dimensional wavelet discretization for the initial equation, and one-dimensional wavelets for the Lagrange multipliers. The extension operators from omega to Omega are also constructed from multiresolution analyses on the interval. We obtain: a formulation for which the existence of a solution is demonstrated; a convergent algorithm with a global error estimate (on Omega); interior error estimates on the domain omega; estimates on the conditioning related to the trace operator; and smooth extension algorithms. Various numerical experiments in 1D and 2D are implemented.
The work is organized as follows. The first chapter recalls the construction of multiresolution analyses, important properties of wavelets, and numerical algorithms. The second chapter gives an outline of the classical fictitious domain method, using the Galerkin or Petrov-Galerkin method; we also describe the limitations of this method and point out the direction of our work. The third chapter presents a smooth fictitious domain method, coupled with a Petrov-Galerkin wavelet method for elliptic equations; this section contains the theoretical analysis and the numerical implementation that demonstrate the advantages of this new method. The fourth chapter introduces a smooth extension technique, which we apply to elliptic problems with the smooth fictitious domain method in 1D and 2D. The fifth chapter is the numerical simulation of the Stefan problem. The properties of B-splines allow us to calculate the curvature on the moving boundary exactly. We use two examples to test the efficiency of our new method, which is then used to solve the two-phase Stefan problem with a Gibbs-Thomson boundary condition as an experimental case
Sur le nombre de points rationnels des variétés abéliennes sur les corps finis by
Safia-Christine Haloui(
Book
)
2 editions published in 2011 in French and held by 2 WorldCat member libraries worldwide
The characteristic polynomial of an abelian variety over a finite field is defined to be the characteristic polynomial of its Frobenius endomorphism. The first part of this thesis is devoted to the study of the characteristic polynomials of abelian varieties of small dimension. We describe the set of polynomials which occur in dimension 3 and 4; the analogous problem for elliptic curves and abelian surfaces has been solved by Deuring, Waterhouse, and Rück. In the second part, we give upper and lower bounds on the number of points on abelian varieties over finite fields. Next, we give lower bounds specific to Jacobian varieties. We also determine exact formulas for the maximum and minimum number of points on Jacobian surfaces
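The Weil bounds confine the number of points of a g-dimensional abelian variety over F_q to the interval [(sqrt(q) - 1)^(2g), (sqrt(q) + 1)^(2g)]. As an elementary illustration in the one-dimensional case (an elliptic curve, g = 1, where the bound is Hasse's), the sketch below counts points by brute force:

```python
import math

def count_affine_points(a, b, p):
    """Count points (x, y) in F_p^2 on y^2 = x^3 + a*x + b by brute force."""
    squares = {}
    for y in range(p):
        squares.setdefault(y * y % p, []).append(y)   # square roots mod p
    return sum(len(squares.get((x ** 3 + a * x + b) % p, []))
               for x in range(p))

def curve_order(a, b, p):
    """#E(F_p), including the point at infinity."""
    return count_affine_points(a, b, p) + 1

def weil_interval(p, g=1):
    """Weil bounds for an abelian variety of dimension g over F_p:
    (sqrt(p) - 1)^(2g) <= #A(F_p) <= (sqrt(p) + 1)^(2g)."""
    return (math.sqrt(p) - 1) ** (2 * g), (math.sqrt(p) + 1) ** (2 * g)
```

For example, y^2 = x^3 + 1 over F_5 has 6 points, which indeed lies in the Hasse interval for p = 5; the thesis sharpens such general bounds for Jacobians.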
Etude arithmétique et algorithmique de courbes de petit genre by
Florent Ulpat Rovetta(
Book
)
2 editions published in 2015 in French and held by 2 WorldCat member libraries worldwide
This thesis addresses several algorithmic aspects of algebraic curves. The first part describes, and implements in Magma, an algorithm for computing the twists of curves over finite fields, and studies its complexity; in the hyperelliptic case, it is the first complete algorithm to do this in arbitrary genus. The second part builds representative families for the non-hyperelliptic curves of genus 3, enabling their effective enumeration in connection with the Serre obstruction problem; this part has been published in ANTS, and an annex of this thesis consists of a preprint studying a statistical model for interpreting the data obtained. The last part of the thesis studies the invariants and covariants of binary forms in connection with the description of the moduli space of curves of genus 2. In particular, a new operation is described to generate covariants in small characteristic. We study the implementation of a new strategy (called Geyer-Sturmfels) to obtain the algebras of separants, and we apply it to the cases of degree 4 and 6. Finally, the last chapter establishes the validity of a reconstruction algorithm for genus 2 curves from their invariants in any characteristic different from 2, and implements it in SAGE
Méthodologie pour la détection de défaillance des procédés de fabrication par ACP : application à la production de dispositifs semi-conducteurs by
Alexis Thieullen(
Book
)
2 editions published in 2014 in French and held by 2 WorldCat member libraries worldwide
This thesis focuses on developing a fault detection methodology for semiconductor manufacturing equipment. The proposed approach is based on Principal Component Analysis (PCA) to build a representative model of equipment in normal operating conditions. Our method exploits the measurements collected from equipment sensors for each processed wafer. However, given the industrial context and processes, we have to address additional problems. First, the signals collected from the sensors have different lengths, or durations, which is a limitation for PCA; synchronization and alignment problems must also be considered. Second, semiconductor manufacturing equipment is largely dynamic, with strong temporal correlations between sensor measurements throughout the processes. To solve the first point, we developed a data preprocessing module that transforms raw sensor data into a dataset suitable for PCA; the aim is to identify outlier data and products that could affect the PCA modelling. This step is based on expert knowledge, statistical analysis, and Dynamic Time Warping, a well-known algorithm from signal processing. To solve the second point, we propose combining multiway PCA with an EWMA filter to account for process dynamics. A recursive approach is employed to adapt the PCA model to specific events that can occur on the equipment, e.g. maintenance or restarts. All the steps of our methodology are illustrated with data from a chemical vapor deposition tool operated in the STMicroelectronics Rousset fab. Finally, the efficiency and industrial interest of the proposed methodologies are verified on multiple equipment types over longer operating periods
Automatic diagnosis of melanoma from dermoscopic images of melanocytic tumors : Analytical and comparative approaches by
Yanal Wazaefi(
Book
)
2 editions published in 2013 in English and held by 2 WorldCat member libraries worldwide
Melanoma is the most serious type of skin cancer. This thesis focused on the development of two different approaches for computer-aided diagnosis of melanoma: an analytical approach and a comparative approach. The analytical approach mimics the dermatologist's behavior by first detecting malignancy features based on popular analytical methods, and in a second step combining these features. We investigated to what extent melanoma diagnosis can be improved by an automatic system using dermoscopic images of pigmented skin lesions. The comparative approach, based on the Ugly Duckling (UD) concept, assumes that nevi in the same patient tend to share morphological features, so that dermatologists identify a few similarity clusters. The UD is the nevus that does not fit into any of those clusters, and is thus likely to be suspicious. The goal was to model the ability of dermatologists to build consistent clusters of pigmented skin lesions in patients
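A crude one-cluster proxy for the Ugly Duckling idea is to flag the lesion whose feature vector is, on average, farthest from the patient's other lesions. The function name and the feature encoding below are illustrative assumptions, not the thesis's pipeline:

```python
import math

def ugly_duckling(features):
    """Return the index of the lesion whose feature vector is, on
    average, farthest from the patient's other lesions (each lesion is
    a tuple of numeric features)."""
    n = len(features)
    avg = [sum(math.dist(features[i], features[j])
               for j in range(n) if j != i) / (n - 1)
           for i in range(n)]
    return max(range(n), key=avg.__getitem__)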
Sélection bayésienne de variables et méthodes de type Parallel Tempering avec et sans vraisemblance by
Meïli Baragatti(
Book
)
2 editions published in 2011 in French and held by 2 WorldCat member libraries worldwide
This thesis is divided into two main parts. In the first part, we propose a Bayesian variable selection method for probit mixed models. The objective is to select a few relevant variables among tens of thousands while taking into account the design of a study, and in particular the fact that several datasets are merged together. The probit mixed model is considered as part of a larger hierarchical Bayesian model, and the dataset is introduced as a random effect. The proposed method extends the work of Lee et al. (2003). The first step is to specify the model and the prior distributions. In particular, we use the g-prior of Zellner (1986) for the fixed regression coefficients. In a second step, we use a Metropolis-within-Gibbs algorithm combined with the grouping (or blocking) technique of Liu (1994). This choice has both theoretical and practical advantages. The method developed is applied to merged microarray datasets of patients with breast cancer. However, this method has a limitation: the covariance matrix involved in the g-prior must not be singular. There are two standard cases in which it is singular: when the number of observations is lower than the number of variables, or when some variables are linear combinations of others. In such situations we propose to modify the g-prior by introducing a ridge parameter, together with a simple way to choose the associated hyperparameters. The prior obtained is a compromise between the conditionally independent case of the regression coefficients and the automatic scaling advantage offered by the g-prior, and can be linked to the work of Gupta and Ibrahim (2007). In the second part, we develop two new population-based MCMC methods. For complex models with several parameters whose likelihood can nevertheless be computed, the Equi-Energy Sampler (EES) of Kou et al. (2006) appears to be more efficient than the Parallel Tempering (PT) algorithm introduced by Geyer (1991). However, it is difficult to use in combination with a Gibbs sampler, and it requires increased storage. We propose an algorithm combining PT with the principle of exchange moves between chains at the same energy levels, in the spirit of the EES. This adaptation, which we call Parallel Tempering with Equi-Energy Moves (PTEEM), keeps the original idea of the EES method while ensuring good theoretical properties and practical use in combination with a Gibbs sampler. Then, for some complex models whose likelihood is analytically or computationally intractable, inference can be difficult. Several likelihood-free methods (or Approximate Bayesian Computation methods) have been developed. We propose a new algorithm, Likelihood-Free Parallel Tempering, based on MCMC theory and on a population of chains, by analogy with the Parallel Tempering algorithm
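For readers unfamiliar with the baseline that PTEEM modifies, plain Parallel Tempering runs one chain per temperature and lets adjacent chains swap states with a Metropolis acceptance rule. A minimal 1-D random-walk sketch, deliberately generic (it swaps adjacent temperatures, not equal-energy chains as PTEEM does):

```python
import math, random

def parallel_tempering(log_target, temps, n_iter, x0, step=1.0, seed=0):
    """Minimal random-walk parallel tempering on a 1-D target: chain k
    samples the density proportional to exp(log_target(x) / temps[k]);
    after each sweep a random adjacent pair attempts a state swap.
    Returns the samples of the cold chain (temps[0])."""
    rng = random.Random(seed)
    xs = [x0] * len(temps)
    cold = []
    for _ in range(n_iter):
        # within-chain Metropolis update at each temperature
        for k, T in enumerate(temps):
            prop = xs[k] + rng.gauss(0.0, step)
            if math.log(rng.random() + 1e-300) < (log_target(prop) - log_target(xs[k])) / T:
                xs[k] = prop
        # swap attempt between a random adjacent pair of chains
        k = rng.randrange(len(temps) - 1)
        delta = (log_target(xs[k + 1]) - log_target(xs[k])) * (1.0 / temps[k] - 1.0 / temps[k + 1])
        if math.log(rng.random() + 1e-300) < delta:
            xs[k], xs[k + 1] = xs[k + 1], xs[k]
        cold.append(xs[0])
    return cold
```

The hot chains explore widely and feed good states down to the cold chain through the swaps; PTEEM replaces the adjacent-temperature swap by exchanges between chains at the same energy level.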
La localisation en logique : géométrie de l'interaction et sémantique dénotationnelle by
Etienne Duchesne(
)
2 editions published between 2009 and 2019 in French and held by 2 WorldCat member libraries worldwide
This thesis deals with the links between two localized semantics of classical linear logic: the indexed linear logic (LL(I)) of Bucciarelli and Ehrhard, and the geometry of interaction (GoI) of Girard. First we introduce the localized relational semantics (RelLoc), in which exponentials are interpreted by finite families. We establish a correspondence between families of elements of RelLoc and formulas of a variant of LL(I). The sequent calculus of this variant then represents the experiments of RelLoc. Next we define the geometry of interaction for classical linear logic. Proofs are interpreted by sums of pairs made of a partial permutation interpreting an additive slice and a Boolean identifying the slice. An operation of plunging makes it possible to interpret promotion. We detail the properties of this semantics, which is not an invariant of reduction. We can then establish a link between RelLoc and GoI, and make the partial permutations of GoI act on the elements of RelLoc. We prove that the elements invariant under the action of the GoI interpretation of a proof belong to its interpretation in RelLoc
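The basic objects here, partial permutations, can be played with concretely: an injective partial map on a set of addresses, with composition and inversion as the elementary operations. A toy encoding as Python dicts, purely illustrative of the data structure, not of the thesis's construction:

```python
def compose(f, g):
    """f after g, for partial permutations encoded as injective dicts:
    defined at x only when g is defined at x and f is defined at g(x)."""
    return {x: f[g[x]] for x in g if g[x] in f}

def inverse(f):
    """A partial permutation is injective, so it inverts pointwise."""
    return {v: k for k, v in f.items()}
```

Composing a permutation with its inverse yields the identity restricted to its image, which is the kind of partiality GoI interpretations manipulate.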
Optimisation combinée des approvisionnements et du transport dans une chaine logistique by
Mouna Rahmouni(
Book
)
2 editions published in 2015 in French and held by 2 WorldCat member libraries worldwide
The proposed joint delivery problem (JDP) is a delivery tour planning problem over a time horizon decomposed into elementary periods, or rounds, the time horizon being the common delivery period for all products. With these parameters as data, the problem admits a linear formulation with binary decision variables. The model also incorporates the constraints of meeting demand from stock and from the quantities supplied, as well as storage and transport capacity constraints. In order to also solve the problem of choosing delivery rounds, several constraints and variables related to the sites visited during each round must be introduced in the model. We propose to solve the problem in two steps. The first step is the offline computation of the minimum cost of the tour associated with each subset of sites. Observe that for any given subset of sites, the optimal Hamiltonian cycle linking those sites to the warehouse can be computed in advance by a traveling salesman problem (TSP) algorithm. The goal here is not to fully analyze the TSP, but rather to integrate its solution into the formulation of the JDP. In the second step, binary variables are associated with each subset and each period, to determine the subset of sites selected in each period and its associated fixed cost
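The first step above, precomputing the optimal tour cost for every subset of sites, can be sketched by brute force for small instances (a real implementation would call a TSP solver; the function names are ours):

```python
import itertools

def tour_cost(order, dist, depot=0):
    """Cost of the cycle depot -> order -> depot, given a distance matrix."""
    path = [depot, *order, depot]
    return sum(dist[path[i]][path[i + 1]] for i in range(len(path) - 1))

def best_tours(sites, dist, depot=0):
    """Step 1 of the two-step scheme: precompute, for every non-empty
    subset of sites, the optimal Hamiltonian cycle cost through the
    depot, by brute force over visiting orders."""
    costs = {}
    for r in range(1, len(sites) + 1):
        for subset in itertools.combinations(sites, r):
            costs[frozenset(subset)] = min(
                tour_cost(p, dist, depot)
                for p in itertools.permutations(subset))
    return costs
```

In the second step each `frozenset` key becomes a binary selection variable per period, with `costs[...]` as the associated fixed cost.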
Distribution de la nonlinéarité des fonctions booléennes by
Stephanie Dib(
Book
)
2 editions published in 2013 in English and held by 2 WorldCat member libraries worldwide
Among the various criteria a Boolean function must satisfy for cryptographic use, we focus on nonlinearity. For a given Boolean function, this notion measures the Hamming distance separating it from the functions of degree at most 1. It is a natural criterion for evaluating the complexity of a cryptographic function, which must not admit a simple approximation, such as by a function of degree 1 or, more generally, by a function of low degree. It is therefore important to consider, more generally, the higher-order nonlinearity, which for a given order r measures the distance from a given function to the set of functions of degree at most r. This notion is also important for vectorial functions, i.e., those with several outputs. When the number of variables is large, almost all functions have a (first-order) nonlinearity close to a certain, rather high, value. In a first work, we extend this result to order 2. This method, which consists in observing how Hamming balls cover the hypercube of Boolean functions, leads us naturally to a theoretical decoding bound for first-order Reed-Muller codes, coinciding with the value around which the nonlinearity of almost all functions concentrates; a new approach to a result that is not entirely new. We also study the nonlinearity of vectorial functions. We show, with a different approach, that the asymptotic behavior is the same as for Boolean functions: a concentration of the nonlinearity around a rather high value
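First-order nonlinearity is computable directly from the standard Walsh-Hadamard identity nl(f) = 2^(n-1) - max_a |W_f(a)| / 2. A small self-contained sketch (our own helper, not code from the thesis):

```python
def nonlinearity(truth_table):
    """First-order nonlinearity of an n-variable Boolean function,
    given as its truth table of length 2^n, via an in-place
    fast Walsh-Hadamard transform of the sign vector (-1)^f(x)."""
    w = [1 - 2 * b for b in truth_table]   # 0 -> +1, 1 -> -1
    size = len(w)
    h = 1
    while h < size:
        for i in range(0, size, 2 * h):
            for j in range(i, i + h):
                x, y = w[j], w[j + h]
                w[j], w[j + h] = x + y, x - y
        h *= 2
    return size // 2 - max(abs(v) for v in w) // 2
```

Affine functions reach nonlinearity 0, while bent functions attain the maximum 2^(n-1) - 2^(n/2 - 1); the concentration results in the thesis say almost all functions sit near that upper end.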
Modélisation dynamique et suivi de tumeur dans le volume rénal by
Valentin Leonardi(
Book
)
2 editions published in 2014 in French and held by 2 WorldCat member libraries worldwide
This Ph.D. thesis deals with the 3D dynamic modeling of the kidney and the tracking of a tumor of this organ. It is part of the KiTT project (Kidney Tumor Tracking), which gathers researchers from different fields: geometric modeling, radiology and urology. This work arose from the tendency of present-day surgical procedures to be less and less invasive (HIFU, coelioscopy). Its goal is a totally non-invasive protocol for eradicating kidney tumors by transmitting ultrasound waves through the skin without breaking it. As the kidney moves and deforms during breathing, the main issue is to know the kidney and tumor positions at any time in order to adjust the waves accordingly
Indexation et interrogation de pages web décomposées en blocs visuels by
Nicolas Faessel(
Book
)
2 editions published in 2011 in French and held by 2 WorldCat member libraries worldwide
This thesis is about indexing and querying Web pages. We propose a new model called BlockWeb, based on the decomposition of Web pages into a hierarchy of visual blocks. This model takes into account the visual importance of each block as well as the permeability of each block's content to its neighboring blocks on the page. Splitting a page into blocks has several advantages in terms of indexing and querying: it makes it possible to query the system at a finer granularity than the whole page, returning the blocks most similar to the query instead of the whole page. A page is modeled as a directed acyclic graph, the IP graph, where each node is associated with a block and labeled by that block's coefficient of importance, and each arc is labeled by the coefficient of permeability of the target node's content to the source node's content. In order to build this graph from the block-tree representation of a page, we propose a new language, XIML (XML Indexing Management Language), a rule-based language similar to XSLT. The model has been assessed on two distinct datasets: finding the best entry point in a dataset of electronic newspaper articles, and image indexing and querying in a dataset drawn from Web pages of the ImagEval 2006 campaign. We present the results of these experiments
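A toy version of scoring on such an IP graph: each block has an importance coefficient and some term weight of its own, and content permeates along arcs. The aggregation below (importance times permeability-weighted content, computed recursively on a DAG) is an assumed, illustrative semantics; the exact definitions in the thesis may differ:

```python
def block_scores(blocks, arcs):
    """Toy scoring on a BlockWeb-style IP graph. `blocks` maps a block
    id to (importance, own_content_weight); each arc (u, v, p) lets
    block v's content permeate into block u with coefficient p.
    Assumes the graph is acyclic."""
    out = {}
    for u, v, p in arcs:
        out.setdefault(u, []).append((v, p))
    memo = {}
    def content(u):
        # own content plus permeated content of target blocks, memoized
        if u not in memo:
            _, own = blocks[u]
            memo[u] = own + sum(p * content(v) for v, p in out.get(u, ()))
        return memo[u]
    return {u: blocks[u][0] * content(u) for u in blocks}
```

With such per-block scores, a query engine can rank individual blocks rather than whole pages, which is the finer granularity the model aims at.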
Related Identities
 Aix-Marseille Université Degree grantor
 Université Aix-Marseille II (1969-2011) Degree grantor
 Laboratoire d'Informatique et Systèmes (LIS) (Marseille, Toulon) Other
 Université de Provence (1970-2011) Degree grantor
 Laboratoire des sciences de l'information et des systèmes (Marseille) Other
 Institut de mathématiques de Marseille (I2M) Other
 Daniel, Marc (1958-....; informaticien) Other Opponent Thesis advisor
 Institut de mathématiques de Luminy (Marseille) Other Degree grantor
 Ouladsine, Mustapha Opponent Thesis advisor
 Laboratoire d'informatique fondamentale (Marseille) Other