WorldCat Identities

Ecole doctorale Sciences et Technologies de l'Information, des Télécommunications et des Systèmes (Orsay, Essonne / 2000-2015)

Overview
Works: 372 works in 437 publications in 2 languages and 385 library holdings
Roles: Other, Degree grantor
Most widely held works by Ecole doctorale Sciences et Technologies de l'Information, des Télécommunications et des Systèmes (Orsay, Essonne / 2000-2015)
Caractérisation aveugle de la courbe de charge électrique : détection, classification et estimation des usages dans les secteurs résidentiel et tertiaire by Mabrouka El Guedri( Book )

3 editions published between 2009 and 2010 in French and held by 3 WorldCat member libraries worldwide

In a context of growing sensitivity to the environmental issues surrounding energy and of increasing tension between demand and consumption, EDF is interested in developing energy services for residential and business customers. Against this background, the thesis addresses the blind characterization of the main electrical appliances from the load curve, in an entirely non-intrusive way. More precisely, we treat the following underlying problems: detection, classification and estimation of appliance usage. This problem belongs to the generic framework of automatically extracting the information content of the sources of a single-channel mixture for (supervised or unsupervised) decision-making. The main methodological contributions are: the extraction of prior knowledge about electrical appliances and the definition of transformed domains dedicated to them; a new time-frequency representation methodology; and a single-sensor source separation approach based on two stochastic models of the load curve and of its events. On the application side, the work answers short- and medium-term industrial needs: segmentation of the daily residential load curve, mapping of the energy consumed daily by appliance family, reconstruction of appliance operating scenarios in hypermarkets, and so on. These results feed into energy-service offerings such as itemized billing in the residential sector and optimized appliance control in the business/service sector
Exploitation de corrélations spatiales et temporelles en tomographie par émission de positrons by Florent Sureau( Book )

3 editions published between 2008 and 2010 in French and held by 3 WorldCat member libraries worldwide

In this thesis we propose, implement and evaluate algorithms that improve spatial resolution in images and denoise data in positron emission tomography. These algorithms were used for reconstructions on a high-resolution camera (HRRT) and in brain studies, but the methodology developed can be applied to other cameras and other situations. First, we developed an iterative reconstruction method incorporating an isotropic, stationary spatial resolution model in image space, measured experimentally. We evaluated the benefits of this method on Monte-Carlo simulations, physical phantom scans and clinical protocols, comparing it to a reference reconstruction algorithm. This study suggests a reduction of quantification biases, particularly in the clinical study, and better spatial and temporal correlations at the voxel level. However, other methods must be used to reduce the noise level in the data. Second, a maximum a posteriori denoising approach suited to dynamic data, capable of temporally denoising either the acquisition data (sinograms) or the reconstructed images, was proposed. The prior was introduced by modeling the wavelet coefficients of the signals of interest (images or sinograms). We compared this method to a reference denoising method on replicated simulations, which illustrates the benefit of denoising the sinograms
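The MAP denoising step above, with a sparsity prior on wavelet coefficients, reduces in its simplest form to soft-thresholding of those coefficients. A minimal one-level Haar sketch (the signal, threshold and noise level are illustrative; the thesis works on full dynamic PET sinograms and images):

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level orthonormal Haar transform + soft threshold on the detail
    band: the closed form of a MAP estimate under a Laplacian prior on the
    wavelet coefficients with Gaussian noise."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)    # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)    # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2.0)          # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2.0)
    return y

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 256)
clean = np.exp(-3.0 * t)                      # smooth time-activity-like curve
noisy = clean + 0.1 * rng.standard_normal(t.size)
den = haar_denoise(noisy, thresh=0.15)
```

With an orthonormal transform, thresholding the detail band suppresses most of the noise energy there while leaving a smooth signal almost untouched.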
Neural networks as cellular computing models for temporal sequence processing. by Bassem Khouzam( Book )

3 editions published in 2014 in English and held by 3 WorldCat member libraries worldwide

The thesis proposes a sequence learning approach that uses fine-grained self-organization. The manuscript starts by situating this effort as a contribution to the promotion of the cellular computing paradigm in computer science. Computation within this paradigm is divided into a large number of elementary calculations carried out in parallel by computing cells that exchange information with each other. Beyond their fine-grained nature, the cellular character of such architectures lies in the spatial topology of the connections between cells, which complies with the constraints of the technological evolution of hardware in the future. The manuscript examines most of the distributed architectures known in computer science from this perspective and finds that very few of them fall within the cellular paradigm. We are interested in the learning capacity of these architectures, given the importance of this notion in the related field of neural networks, without forgetting that cellular systems are, by construction, complex dynamical systems. This inevitable dynamical component motivated our focus on the learning of temporal sequences, for which we reviewed the different models in the domains of neural networks and self-organizing maps. Finally, we propose an architecture that contributes to the promotion of cellular computing in the sense that it exhibits self-organization properties used to extract a representation of the states of the dynamical system that feeds the architecture, even when those inputs are ambiguous and only partially reflect the system state. We took advantage of an existing supercomputer to simulate this complex architecture, which indeed exhibited a new emergent behavior. Based on these results, we conducted a critical study that sets the perspective for future work
Analyse de la dynamique neuronale pour les Interfaces Cerveau-Machines : un retour aux sources by Michel Besserve( Book )

2 editions published in 2007 in French and held by 2 WorldCat member libraries worldwide

Brain-Computer Interfaces are devices that establish a communication channel between the human brain and the outside world without using the usual nerve and muscle pathways. Developing such systems lies at the interface of signal processing, statistical learning and neurophysiology. In this thesis, we built and studied a non-invasive asynchronous Brain-Computer Interface, i.e., one capable of identifying mental actions associated with imagined motor or cognitive tasks without synchronization to an event controlled by an external system. It is based on the real-time analysis of electroencephalographic (EEG) signals from electrodes placed on the surface of a human subject's head. Methodologically, we implemented several preprocessing methods for these signals and compared their influence on system performance. These methods include: 1) the direct use of the signals from the EEG sensors, 2) source separation methods that summarize the EEG signals with a small number of spatial components, and 3) the reconstruction of cortical current source activity by solving the EEG inverse problem. In addition, several measures quantifying brain activity are used and compared: spectral power, coherence and phase synchrony. Our results show that prior reconstruction of cortical activity via the inverse problem, as well as the use of long-range interaction measures, improves the performance of the system
Contribution à la conception de systèmes mécatroniques automobiles : méthodologie de pré-dimensionnement multi-niveau multi-physique de convertisseurs statiques by Kamal Ejjabraoui( Book )

2 editions published in 2010 in French and held by 2 WorldCat member libraries worldwide

This thesis was carried out within the O2M project (Outil de Modélisation Mécatronique), certified by the Mov'eo and System@tic competitiveness clusters, whose goal is to develop a new generation of tools dedicated to the various design phases of automotive mechatronic systems. We showed that no software platform allows all elements of a mechatronic actuation chain to be designed with the same level of detail, and that no global methodology exists to formalize the choice of architecture while considering several multi-physics constraints, including 3D integration constraints. In this context, within the "Pre-sizing" subproject on which this work mainly focuses, we developed a three-level pre-sizing approach for mechatronic systems: choice of architecture and component technologies, optimization under multi-physics constraints, and optimization incorporating 3D numerical simulation. An evaluation of the most widespread simulation and design tools against various criteria led to the conclusion that a mechatronic software platform can be built by combining tools such as MATLAB-SIMULINK, DYMOLA and AMESim for pre-sizing levels 1 and 2, and COMSOL for level 3. The proposed approach was adapted to an essential element of the mechatronic chain, the DC-DC converter. Technology databases of active and passive components were set up to feed the pre-sizing process, and the models required at each level of the approach were developed. 
At the first level, these models support the choice of architecture, a fast estimate of component volume, and the technological choice of components according to a dominant constraint (volume in our case). At the second level, they support optimization under multi-physics constraints (volume, efficiency, temperature, electromagnetic spectrum and control). Finally, at the third level, two software packages were coupled (COMSOL for finite element simulation and MATLAB as the optimization environment) to optimize the placement of power components under thermal constraints using a finer finite element thermal model. The approach was applied to three sets of specifications: a buck converter, a boost converter and a three-phase inverter. Single-objective (volume) and multi-objective (volume and efficiency, volume and response time) optimizations under multi-physics constraints were carried out. These optimizations show the direct impact of control-related constraints (response time, stability) alongside those classically used in the design of static converters. Moreover, we showed that 3D integration risks can be cleared very early in the design phase of these converters. The proposed multi-level, multi-physics pre-sizing approach thus meets the needs expressed by the industrial partners of the O2M project in terms of design methodology for automotive mechatronic systems
Circuits de lecture innovants pour capteur infrarouge bolométrique by Benoît Dupont( Book )

2 editions published in 2008 in French and held by 2 WorldCat member libraries worldwide

This PhD work deals with improving the image quality of microbolometer infrared detectors through fixed pattern noise reduction. The work first addresses the problem of thermal image acquisition with an uncooled technology. Based on the state of the art in bolometric readout circuit design, we show why fixed pattern noise is becoming a predominant factor in image quality evaluation. An algebraic model is then discussed to identify the dominant technological factor in the detector signal dispersion. We show that this critical factor is the resistance prefactor of the bolometer, a statement verified through measurement campaigns on existing devices. An algorithm is presented to correct the signal spread introduced by the prefactor; its performance is evaluated and its limits are explained. To overcome these limitations, a new mixed-mode architecture is developed and validated by simulation. Finally, two circuits aimed at lowering the second-order factors are presented and tested: functionality is demonstrated and limitations are identified. Five circuits were designed during this work and are described in this manuscript
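For context, the standard baseline against fixed pattern noise is a per-pixel two-point (gain/offset) non-uniformity correction. The sketch below is this generic textbook correction, not the thesis's prefactor algorithm or mixed-mode architecture, and the detector model is a toy:

```python
import numpy as np

# Generic two-point (gain/offset) non-uniformity correction: per-pixel gain
# and offset are estimated from two uniform calibration scenes.
rng = np.random.default_rng(0)
gain = 1.0 + 0.05 * rng.standard_normal((4, 4))   # per-pixel gain dispersion
offset = 0.1 * rng.standard_normal((4, 4))        # per-pixel offset dispersion
raw = lambda flux: gain * flux + offset           # toy detector response

low, high = raw(10.0), raw(50.0)                  # two flat-field references
g_est = (high - low) / (50.0 - 10.0)              # recovered per-pixel gain
o_est = low - g_est * 10.0                        # recovered per-pixel offset

corrected = (raw(30.0) - o_est) / g_est           # every pixel maps back to 30.0
```

With exact gain/offset estimates the dispersion cancels identically; in practice drift and nonlinearity leave the residual fixed pattern noise that the thesis targets.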
Contribution to quantitative microwave imaging techniques for biomedical applications by Tommy Henriksson( Book )

2 editions published in 2009 in English and held by 2 WorldCat member libraries worldwide

This dissertation presents a contribution to quantitative microwave imaging for breast tumor detection. The study, carried out as a jointly supervised Ph.D. thesis between University Paris-Sud 11 (France) and Mälardalen University (Sweden), was conducted on two experimental microwave imaging setups: the existing 2.45 GHz planar camera (France) and the multi-frequency flexible robotic system (Sweden), under development. In this context a flexible 2D scalar numerical tool based on a Newton-Kantorovich (NK) scheme has been developed. Quantitative microwave imaging is a three-dimensional vectorial nonlinear inverse scattering problem, in which the complex permittivity of an object is reconstructed from the measured scattered field it produces. The NK scheme is used to deal with the nonlinearity and the ill-posed nature of this problem. A TM polarization and a two-dimensional medium configuration are considered in order to avoid its vectorial aspect. The solution is found iteratively by minimizing the squared norm of the error with respect to the scattered field data. The convergence of this iterative process requires at least two conditions. First, an efficient calibration of the experimental system has to be associated with the minimization of model errors. Second, the mean square difference in the scattered field introduced by the presence of the tumor has to be large enough with respect to the sensitivity of the imaging system. The existing planar camera, associated with the flexible 2D scalar NK code, is considered as an experimental platform for quantitative breast imaging. A preliminary numerical study shows that the multi-view planar system is quite efficient for realistic breast tumor phantoms, given its characteristics (frequency, planar geometry and water as a coupling medium), as long as realistic noisy data are considered. 
Furthermore, a multi-incidence planar system, more appropriate in terms of antenna-array arrangement, is proposed and its concept numerically validated. On the experimental side, a new fluid mixture for the realization of a narrow-band cylindrical breast phantom is presented, together with a deep investigation of the calibration process and of model error minimization. This leads to the first quantitative reconstruction of a realistic breast phantom using the planar camera. Next, both qualitative and quantitative reconstructions of 3D inclusions in the cylindrical breast phantom, using data from the whole retina, are shown and discussed. Finally, the work extending towards the flexible robotic system is presented
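The Newton-Kantorovich scheme linearizes the forward (scattering) operator at each iterate and solves a least-squares update, the same principle as a damped Gauss-Newton iteration. A toy sketch on a two-parameter exponential model (the model, data and damping are illustrative assumptions, not the thesis's forward operator):

```python
import numpy as np

def gauss_newton(f, jac, x0, data, iters=50):
    """Damped Gauss-Newton for min ||f(x) - data||^2: linearize, solve, backtrack."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        r = f(x) - data
        J = jac(x)
        step = np.linalg.solve(J.T @ J + 1e-9 * np.eye(x.size), J.T @ r)
        s = 1.0
        while np.sum((f(x - s * step) - data) ** 2) > np.sum(r ** 2) and s > 1e-4:
            s /= 2.0                           # backtrack until the residual drops
        x = x - s * step
    return x

t = np.linspace(0.0, 1.0, 50)
f = lambda p: p[0] * np.exp(-p[1] * t)         # toy "forward model"
jac = lambda p: np.stack([np.exp(-p[1] * t),
                          -p[0] * t * np.exp(-p[1] * t)], axis=1)
data = f([2.0, 3.0])                           # noiseless synthetic data
p = gauss_newton(f, jac, [1.0, 1.0], data)     # recovers [2.0, 3.0]
```

In the real inverse scattering problem the Jacobian comes from linearizing the integral scattering equations, and regularization is essential because the problem is ill-posed.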
Algorithmes de différentiation numérique pour l'estimation de systèmes non linéaires by Mohamed Braci( Book )

2 editions published in 2006 in French and held by 2 WorldCat member libraries worldwide

The main motivation of this PhD dissertation is the study of numerical differentiation algorithms that are simple and efficient for signals available only through their samples and corrupted by noise. Such algorithms are building blocks of an observer structure combining observability conditions derived from the differential-algebraic approach to observability, a Kalman-like synthesis that incorporates a measurement error (between the true measurement and the predicted one) in a loop, and a prediction device that compensates for the delay created by the differentiation operators. The need for these algorithms to be simple (in terms of computational burden) comes from the fact that they may be invoked many times in a single observer. After proposing a slight improvement of the observer structure mentioned above, we review candidate simple differentiation algorithms. As is well known, numerical differentiation is an ill-posed inverse problem, and like all operators of this type its practical implementation necessarily goes through regularization: a numerical differentiation scheme is precisely an operator that regularizes differentiation. The first scheme we examine is the very popular linear filter that approximates the Laplace transform of the differentiation operator by a proper transfer function, often of first order. We show that we cannot content ourselves with saying that the filter bandwidth, which is the regularization parameter, should be kept small: we obtain optimal values of the filter bandwidth as a compromise between a narrow bandwidth, needed to filter out the noise efficiently, and a large bandwidth, needed to reproduce the differentiation operator precisely. Another equally popular method of numerical differentiation is the finite-difference method. 
Here, too, we show how to choose the sampling period in an optimal way. The so-called Savitzky-Golay differentiation scheme, much used in the experimental sciences, is also revisited: we show how it can be regularized. The results are applied to two academic examples: the estimation of the substrate in a bioreactor, and the estimation of the lateral speed of a car
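The Savitzky-Golay scheme mentioned above fits a low-order polynomial over a sliding window and reads the derivative off the fitted linear coefficient, so the window length and polynomial order play the role of the regularization parameters. A self-contained sketch (window, order and noise level are illustrative choices):

```python
import numpy as np

def savgol_derivative(y, window, polyorder, dt):
    """Savitzky-Golay differentiation: least-squares polynomial fit over a
    sliding window; the linear coefficient of the fit is the derivative
    estimate at the window center."""
    half = window // 2
    offsets = np.arange(-half, half + 1)
    A = np.vander(offsets, polyorder + 1, increasing=True)  # columns 1, k, k^2, ...
    deriv_row = np.linalg.pinv(A)[1]          # row extracting the linear coefficient
    ypad = np.pad(y, half, mode="edge")
    d = np.array([deriv_row @ ypad[i:i + window] for i in range(len(y))])
    return d / dt

t = np.linspace(0.0, 2.0 * np.pi, 400)
dt = t[1] - t[0]
rng = np.random.default_rng(0)
y = np.sin(t) + 0.01 * rng.standard_normal(t.size)     # sampled, noisy signal
est = savgol_derivative(y, window=21, polyorder=3, dt=dt)
err = np.max(np.abs(est[30:-30] - np.cos(t)[30:-30]))  # error away from edges
```

Widening the window suppresses more noise but biases the estimate on curved segments, the same bandwidth compromise the dissertation optimizes.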
Détermination de lois de comportement couplé par des techniques d'homogénéisation : application aux matériaux du génie électrique by Romain Corcolle( Book )

2 editions published in 2009 in French and held by 2 WorldCat member libraries worldwide

This study focuses on the development of accurate homogenization models for coupled behavior (such as piezoelectricity or magnetostriction). Its main development is the adaptation of classical uncoupled methods based on a careful decomposition of the fields into different terms according to their physical origin. Nonlinear behavior is taken into account through a linearization process, and an improvement is obtained by including the second-order moments of the fields in the models. The developed models have been validated by comparing their results with those obtained from a finite element model. The results show good agreement at a much lower computational cost for homogenization (a ratio above 1000 when dealing with linear constitutive laws). The homogenization model is also able to capture extrinsic effects, such as the magnetoelectric effect. The balance between estimation quality and computation time shows the advantage of homogenization methods, which have been successfully adapted to coupled behavior
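For orientation only: the simplest uncoupled homogenization estimates are the classical Voigt and Reuss bounds, which bracket the effective property of a composite. The thesis's coupled, second-order-moment models go well beyond this, but the bounds show the basic idea (phase values and fractions below are illustrative):

```python
import numpy as np

def voigt_reuss_bounds(props, fracs):
    """Voigt (arithmetic mean) and Reuss (harmonic mean) bounds on the
    effective property of a multiphase composite: the simplest uncoupled
    homogenization estimates."""
    props, fracs = np.asarray(props, float), np.asarray(fracs, float)
    voigt = np.sum(fracs * props)            # uniform-field assumption
    reuss = 1.0 / np.sum(fracs / props)      # uniform-flux assumption
    return reuss, voigt

lo, hi = voigt_reuss_bounds([2.0, 10.0], [0.5, 0.5])  # 50/50 two-phase mix
```

Any admissible effective property of the 50/50 mixture must lie between these two values; refined schemes narrow the interval by modeling the field distribution inside each phase.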
Optimisation de fonctions coûteuses ; Modèles gaussiens pour une utilisation efficace du budget d'évaluations : théorie et pratique industrielle by Julien Villemonteix( Book )

2 editions published in 2008 in French and held by 2 WorldCat member libraries worldwide

This dissertation is driven by a question central to many industrial optimization problems: how should a function be optimized when the budget of evaluations is severely limited by either time or cost? For example, when optimization relies on computationally expensive computer simulations taking several hours, the dimension and complexity of the optimization problem may seem irreconcilable with the evaluation budget. This work discusses optimization algorithms dedicated to this context, which is out of range for most classical methods. The common principle of the methods discussed is to use Gaussian processes and Kriging to build a cheap proxy for the function to be optimized; this approximation is used to choose the evaluations iteratively. Most of the techniques proposed over the years sample where the optimum is most likely to appear. By contrast, we propose an algorithm, named IAGO for Informational Approach to Global Optimization, which samples where the information gain on the optimizer location is deemed highest. The organisation of this dissertation follows directly from the industrial concerns that drove this work. We hope it can be of use to the optimization community, but most of all to practitioners confronted with expensive-to-evaluate functions. This is why we insist on industrial applications and on the practical use of IAGO for the optimization of a real function, but also when other industrial concerns have to be considered. In particular, we discuss how to handle constraints, noisy evaluation results, multi-objective problems, derivative evaluation results, and significant manufacturing uncertainties
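The skeleton shared by such Kriging-based methods is: fit a Gaussian-process surrogate to the evaluations made so far, then let an acquisition rule choose the next point. The sketch below uses a simple lower-confidence-bound rule instead of IAGO's entropy criterion, and the kernel, gain and test function are illustrative assumptions:

```python
import numpy as np

def gp_posterior(X, y, Xs, length=0.3, sig=1.0, noise=1e-6):
    """Posterior mean/std of a zero-mean GP with an RBF kernel: the cheap
    Kriging proxy for the expensive function."""
    k = lambda A, B: sig**2 * np.exp(-0.5 * ((A[:, None] - B[None, :]) / length) ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = sig**2 - np.einsum("ij,ji->i", Ks.T, np.linalg.solve(K, Ks))
    return mu, np.sqrt(np.maximum(var, 1e-12))

f = lambda x: np.sin(3.0 * x) + 0.6 * x       # stand-in "expensive" function
X = np.array([0.1, 1.0, 1.9])                 # initial design
y = f(X)
grid = np.linspace(0.0, 2.0, 200)
for _ in range(10):                           # tiny evaluation budget
    mu, sd = gp_posterior(X, y, grid)
    xn = grid[np.argmin(mu - 2.0 * sd)]       # lower-confidence-bound rule
    X, y = np.append(X, xn), np.append(y, f(xn))
best = X[np.argmin(y)]
```

The surrogate concentrates the small evaluation budget where the predicted value minus the uncertainty bound is lowest, mixing exploitation with exploration.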
Métamatériaux tout diélectrique micro-ondes by Thomas Lepetit( Book )

2 editions published in 2010 in French and held by 2 WorldCat member libraries worldwide

Metamaterials are periodic structures with a negative permeability and/or permittivity. The unprecedented control of electromagnetic properties afforded by these materials paves the way towards new applications. In this thesis, the study of dielectric metamaterials aims at reducing a major drawback: losses. A thorough study of dielectric resonators, the key components of dielectric metamaterials, was carried out. It led to experimental proof, in the microwave domain, of a negative permeability, permittivity and refractive index around the resonance frequencies of these resonators. Finally, an alternative to the two-resonator paradigm for obtaining a negative index, a bimodal resonator, was proposed and experimentally demonstrated
Design of wideband arrays of spiral antennas. by Israel Hinostroza( )

2 editions published in 2013 in English and held by 1 WorldCat member library worldwide

This work focuses on the design of wideband dual-polarized arrays using spiral antennas, which are known for their wideband properties. However, because of grating lobes, the bandwidth of an array is smaller than that of a single antenna. To obtain a dual-polarized array, elements of opposite polarization must be used, which increases the distance between elements of the same polarization and hence brings the grating lobes in at lower frequencies. In this work, an analytic method was developed to estimate the bandwidth of spiral arrays. This method showed that the maximum bandwidth of uniform spiral arrays is about an octave in the mono-polarized case and nonexistent in the dual-polarized case. While validating the method, some resonances were observed; explanations are presented, as well as possible solutions. In seeking to extend the bandwidth of the array, it was found possible and worthwhile to combine the two current design paradigms for wideband arrays. Using this idea, a 6:1 bandwidth concentric-ring array using connected spirals was achieved. Perspectives are also presented
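The bandwidth ceiling discussed above follows from the textbook grating-lobe condition for a uniform array: lobes stay out of visible space only while the element spacing d satisfies d <= lambda / (1 + sin(theta_max)). A small sketch (the spacing and scan angle are illustrative, not the thesis's array):

```python
import numpy as np

c = 299_792_458.0  # speed of light, m/s

def max_grating_free_freq(d, scan_deg):
    """Highest frequency (Hz) with no grating lobe, for element spacing d (m)
    and maximum scan angle scan_deg (degrees)."""
    return c / (d * (1.0 + np.sin(np.radians(scan_deg))))

f_hi = max_grating_free_freq(0.06, 30.0)  # 60 mm spacing, 30 degree scan
```

Doubling the same-polarization spacing, as interleaving two polarizations on one lattice does, halves this ceiling, which is the bandwidth penalty the analytic method quantifies.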
Croissance et caractérisation des nanofils de silicium et de germanium obtenus par dépôt chimique en phase vapeur sous ultravide by Rym Boukhicha( )

1 edition published in 2011 in French and held by 1 WorldCat member library worldwide

Silicon and germanium nanowires have a high technological potential, all the more so when their position and size are controlled. In this thesis, growth was achieved by chemical vapor deposition using a gold catalyst through the vapor-liquid-solid mechanism. Initially, various techniques such as dewetting, electron-beam evaporation and molecular beam epitaxy were used to obtain the metal catalyst for nanowire growth. In a second step, the growth kinetics of silicon nanowires was studied as a function of pressure, temperature and catalyst diameter, with silane as the precursor gas. A critical diameter of about 80 nm was estimated, above which the nanowires grow without crystal defects and preferentially in the <111> direction. The pressure dependence of the growth kinetics could be explained by the Gibbs-Thomson effect, which allowed the determination of the adsorption coefficient of silane molecules on the gold surface and of the saturated vapor pressure of silicon P∞. The morphological change of the nanowire cross-section and the distribution of gold nanoclusters on the sidewalls were also analyzed in detail using transmission electron microscopy. Integrating nanowires into devices requires connecting them: a process based on local oxidation of silicon is proposed to form Si(111) seeds from a Si(001) substrate; gold droplets are then located in these seeds and used to grow nanowires oriented along one of the [111] directions. Finally, the growth kinetics of germanium nanowires was studied. Restrictions on the use of 10% germane diluted in hydrogen in our UHV-CVD epitaxy system were demonstrated. Given our experimental setup, the precursor gas was changed to digermane diluted to 10% in hydrogen to promote vertical growth of Ge nanowires, which enabled growth rates of up to 100 nm/min. 
Structural analysis showed a tapering of the nanowires, caused by lateral growth that increases with temperature. As in the case of the Si nanowires, gold nanoclusters were observed on the sidewalls, although here the gold was confined to the top of the nanowires; this diffusion of gold nanoclusters onto the walls can be reduced by increasing the growth pressure. In addition, the variation of the Ge nanowire growth rate with the radius of the gold droplets revealed a critical radius of 6 nm below which nanowire growth cannot occur. This result was interpreted using a model based on the Gibbs-Thomson effect, assuming that the limiting step in vapor-liquid-solid growth is the adsorption and evaporation of germanium
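The critical-radius result follows from the Gibbs-Thomson effect: droplet curvature raises the chemical potential by 2*gamma*omega/r, so the growth driving force vanishes at r_c = 2*gamma*omega/dmu. The numbers below are illustrative order-of-magnitude choices, not the thesis's fitted values:

```python
# Gibbs-Thomson estimate of the critical catalyst radius (illustrative values).
gamma = 1.0        # J/m^2, surface energy of the catalyst droplet (assumed)
omega = 2.27e-29   # m^3, atomic volume of Ge
dmu = 7.6e-21      # J, supersaturation for a flat interface (assumed)

# The effective driving force dmu - 2*gamma*omega/r changes sign at r_c,
# so droplets smaller than r_c cannot sustain wire growth.
r_c = 2.0 * gamma * omega / dmu   # on the order of a few nanometres
```

With these assumed values r_c comes out near 6 nm, the order of magnitude reported in the abstract.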
Transformation de programmes logiques : application à la personnalisation et à la personnification d'agents. by Georges Dubus( )

1 edition published in 2014 in French and held by 1 WorldCat member library worldwide

This thesis deals with the personalization and personification of rational agents within the framework of web applications. Personalization and personification techniques are increasingly used to answer the needs of users. Most of these techniques are based on reasoning tools from the field of artificial intelligence; however, they are usually applied in an ad-hoc way for each application. The approach of this thesis is to consider personalization and personification as two instances of behaviour alteration, and to study the alteration of the behaviours of rational agents. The main contributions are WAIG, a formalism for expressing web applications based on the agent programming language Golog, and PAGE, a formal framework for manipulating and altering Golog agent programs, which allows an agent to be transformed automatically according to a given criterion. These contributions are illustrated by concrete scenarios from the fields of personalization and personification
Elastographie-IRM pour le diagnostic et la caractérisation des lésions du sein by Corinne Balleyguier( )

1 edition published in 2013 in French and held by 1 WorldCat member library worldwide

MR elastography (MRE) is a non-invasive functional imaging technique that uses the visco-elastic mechanical properties of tissue to evaluate tissue stiffness. MRE differs from ultrasound elasticity imaging in that it can also evaluate tumour viscosity. Combining viscosity and elasticity may improve MRI accuracy compared with classical morphological and kinetic criteria. Very few studies have focused on breast MRE, because of the low availability of dedicated breast coils with MRE devices. First, we developed and optimized a breast MRE sequence on a population of 10 volunteers. This sequence, based on a 3D spin-echo EPI-MRE acquisition, can acquire 50 slices of one breast in 10 minutes, which is feasible in routine clinical breast MRI. Second, a multi-frequency approach (37.5 Hz, 75 Hz and 112.5 Hz) was evaluated on the last three volunteers and then transferred to our patient population; this multi-frequency sequence made continuous diffusion of waves within the breast possible. 50 patients presenting undetermined or suspicious breast lesions (37 cancers, 13 benign lesions) were included in this study and examined with a standard breast MRI plus the MRE sequence. Some patients were also examined with shear-wave ultrasound elastography (ARFI mode, Siemens®). Morphological, kinetic and visco-elastic MR parameters were correlated with pathology. We demonstrated that the MR visco-elastic properties were strongly correlated with the BI-RADS ACR malignancy score of a breast lesion and with malignant versus benign status. The best single parameter was Gd (dynamic modulus), corresponding to lesion stiffness; Gd was lower in BI-RADS 5 lesions. 
The Gl parameter (loss modulus) was higher in malignant lesions than in benign lesions, with a statistically higher viscosity level in malignant lesions. The best criterion was the ratio y (Gl/Gd), which was significantly higher in malignant lesions than in benign ones; the ratio y was statistically an independent factor. In practice, adding an MRE sequence to a standard breast MRI significantly improved breast MRI sensitivity (from 78 to 91%) without reducing specificity, which was in any case high in our study. Nevertheless, we did not demonstrate a statistical correlation of the MRE parameters with fibrosis, vascular grading or necrosis that would explain the visco-elastic properties of breast tumours. In conclusion, MR elastography may be useful to improve breast MRI accuracy. In future studies, the MRE sequence could be optimized to allow bilateral acquisition of both breasts, which would be useful in clinical practice, and larger patient cohorts could confirm our results
Pathological synchronization in neuronal populations : a control theoretic perspective by Alessio Franci( )

1 edition published in 2012 in English and held by 1 WorldCat member library worldwide

In the first part of this thesis, motivated by the development of deep brain stimulation for Parkinson's disease, we consider the problem of reducing the synchrony of a neuronal population via closed-loop electrical stimulation, under the constraints that only the mean membrane voltage of the ensemble is measured and that only one stimulation signal is available (mean-field feedback). The neuronal population is modeled as a network of interconnected Landau-Stuart oscillators controlled by a linear single-input single-output feedback device. Based on the associated phase dynamics, we analyze the existence and robustness of phase-locked solutions, modeling the pathological state, and derive necessary conditions for effective desynchronization via mean-field feedback. Sufficient conditions are then derived for two control objectives: neuronal inhibition and desynchronization. Our analysis suggests that, depending on the strength of the feedback gain, a proportional mean-field feedback can either block the collective oscillation (neuronal inhibition) or desynchronize the ensemble. In the second part, we explore two possible ways to analyze related problems on more biologically sound models. In the first, the neuronal population is modeled as the interconnection of nonlinear input-output operators and neuronal synchronization is analyzed within a recently developed input-output approach. In the second, the excitability and synchronizability properties of neurons are analyzed via the underlying bifurcations. Based on the theory of normal forms, a novel reduced model is derived that captures the behavior of a large class of neurons left unexplained by other existing reduced models
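Synchrony in such phase-reduced populations is typically quantified by the Kuramoto order parameter, which makes the phase-locked versus incoherent distinction concrete. A toy simulation (standard Kuramoto model with all-to-all coupling, an idealization rather than the thesis's Landau-Stuart network with feedback; all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, steps = 200, 0.01, 4000
omega = rng.normal(1.0, 0.1, N)   # heterogeneous natural frequencies

def order_parameter(theta):
    """|mean(e^{i theta})|: ~1 when phase-locked, ~0 when incoherent."""
    return np.abs(np.exp(1j * theta).mean())

def simulate(K):
    theta = rng.uniform(0.0, 2.0 * np.pi, N)
    for _ in range(steps):
        z = np.exp(1j * theta).mean()
        r, psi = np.abs(z), np.angle(z)
        theta += dt * (omega + K * r * np.sin(psi - theta))  # mean-field coupling
    return order_parameter(theta)

r_sync = simulate(K=2.0)   # strong coupling: pathological phase locking
r_async = simulate(K=0.0)  # no coupling: phases drift incoherently
```

A desynchronizing feedback, in this picture, is a control input that drives the order parameter from near 1 back toward the incoherent regime without silencing the individual oscillators.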
Analyse et commande sans modèle de quadrotors avec comparaisons by Jing Wang( )

1 edition published in 2013 in English and held by 1 WorldCat member library worldwide

Motivated by the limitations of traditional PID controllers and by the gap between performance in ideal and realistic settings, this thesis makes an extensive study of existing quadrotors, their applications and their control methods. It brings out numerous challenges: embedded systems have limited computational and energy resources; the dynamics are quite complex and often poorly known; the environment carries many disturbances and uncertainties; and many control methods in the literature were proposed for ideal scenarios, without comparison against other methods. This thesis therefore addresses these key points in quadrotor control. First, kinematic and dynamic models are proposed, including all the significant aerodynamic forces and torques; a simplified dynamic model is also proposed for certain applications. The quadrotor dynamics are then analyzed. Using normal form theory, the quadrotor model is reduced to a simpler form, the normal form, which exhibits all the possible dynamic properties of the original system. The bifurcations of this normal form are studied, and the system is simplified at its bifurcation point using center manifold theory. Based on a study of quadrotor applications, five realistic scenarios are proposed: an ideal case and cases with wind disturbance, parameter uncertainties, sensor noise and motor faults. These realistic cases reveal the performance of control methods more comprehensively than the ideal case alone. An event-triggered scheme is also proposed alongside the time-triggered scheme. Model-free control is then presented: a simple yet effective technique for nonlinear, unknown or partially known dynamics. Backstepping and sliding-mode control are also implemented for comparison. All the control methods are implemented under both the time-triggered and event-triggered schemes in the five scenarios. Based on the study of quadrotor applications, ten criteria are chosen to evaluate the performance of the control methods, such as the maximum absolute tracking error, the error variance, the number of actuations, energy consumption, etc.
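The model-free control idea evaluated in this thesis can be sketched on a toy system. The following is a hedged illustration of the Fliess-Join ultra-local model y'' ≈ F + α·u, not the thesis's quadrotor implementation: the plant (a 1-D altitude model with gravity and quadratic drag), its input gain, and all controller and filter gains are arbitrary assumptions; the unknown lumped term F is re-estimated online through a simple low-pass filter.

```python
def altitude_run(T=20.0, dt=1e-3, alpha=1.0, beta=0.05,
                 kp=25.0, kd=10.0, y_ref=1.0):
    """Model-free control on the ultra-local model y'' ~ F + alpha*u.
    F lumps all unknown dynamics and is tracked online from the measured
    acceleration; the control law is the 'intelligent PD'
        u = (-F_hat - kp*e - kd*e_dot) / alpha."""
    y, v, u, F_hat = 0.0, 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        # True plant, hidden from the controller: gravity + quadratic drag,
        # and an input gain (2.0) that deliberately differs from alpha.
        acc = -9.81 - 0.5 * v * abs(v) + 2.0 * u
        # Low-pass estimate of the unknown part F from the last acceleration.
        F_hat += beta * ((acc - alpha * u) - F_hat)
        e, e_dot = y - y_ref, v
        u = (-F_hat - kp * e - kd * e_dot) / alpha
        v += dt * acc                 # Euler integration of the plant
        y += dt * v
    return y

final_altitude = altitude_run()       # settles near the 1.0 m reference
```

The point of the sketch is that the controller never sees the plant equation: gravity, drag, and the mismatched input gain are all absorbed into the online estimate of F, which is the sense in which the technique handles "nonlinear, unknown or partially known dynamics".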
Thérapie cellulaire de l'angiogenèse tumorale : évaluation par imagerie morphologique et fonctionnelle en IRM et vidéomicroscopie de fluorescence by Nathalie Faye( )

1 edition published in 2011 in French and held by 1 WorldCat member library worldwide

Introduction: Tumor angiogenesis leads to the development of new vessels enabling the growth of the tumor. Tumor vessels are characterized by abnormalities, including abnormalities of mural cells (perivascular muscular cells), responsible for abnormal vessel function and maturation. In this thesis, we studied cellular therapy in a tumor model by injection of mural cells, using MRI and fluorescence videomicroscopy. Materials and methods: Nude mice were injected with TC1 squamous cell tumors and the animals were divided into three groups: control (n=17), sham control (n=16) and treated by local injection of human mural cells (n=17). Animals underwent MRI and videomicroscopy before (D7) and after (D14) treatment. Measured parameters included tumor size (caliper and MRI), microvessel density (MVD, using MRI, videomicroscopy and pathology), ADC, f, Dr, D* (diffusion MRI), R2* variations under air, oxygen and carbogen (BOLD MRI), and an 'index leakage' reflecting capillary permeability (videomicroscopy). Results: During tumor growth, the control group showed a decrease in circulating (functional) vessels, reflected by a decrease in D* and in R2* under air, a loss of the vessels' ability to respond to carbogen, reflected by an increase of delta R2* under carbogen, and increased capillary permeability, resulting in a higher 'index leakage'. In the group treated by injection of mural cells, we observed a slowing of tumor growth and a stabilization of these parameters of microcirculation and vessel maturation. Conclusion: Therapy by local injection of mural cells was effective, resulting in slower tumor growth, stabilization of microcirculatory hemodynamics and maturation, and decreased capillary permeability, consistent with the alleged 'stabilizing' and 'normalizing' effects of mural cells on microvessels
Optimisation du procédé de création de voix en synthèse par sélection by Didier Cadic( )

1 edition published in 2011 in French and held by 1 WorldCat member library worldwide

This work falls within the scope of text-to-speech (TTS) technology; more precisely, it focuses on the voice creation process for unit-selection synthesis. In the standard approach, a textual script of several thousand words is read by a speaker to generate approximately 5 to 10 hours of usable speech. The recording time is spread over one or two weeks and is followed by the considerable task of manually revising the phonetic segmentation of all the speech. Such a costly and time-consuming process is a major obstacle to diversifying synthesized voices. To make the process more efficient, we introduce a new unit, called a "vocalic sandwich", to optimize the coverage of the recording texts. Phonetically, this unit addresses the segmental limitations of unit-selection TTS better than state-of-the-art units (diphones, triphones, syllables, words...). Linguistically, a new set of contextual symbols focuses the coverage, allowing for more control and for prosody to be taken into account. Practically, automating the segmentation process requires better anticipation of the phonetic and prosodic content desired in the final database; this is achieved here by increasing the readability and consistency of each sentence included in the script, properties that also make the reading stage easier. Furthermore, as an alternative to classic corpus condensation, a semi-automatic sentence-building algorithm is developed, in which sentences are built rather than selected from a reference corpus. Sentence building ultimately gives access to much denser scripts, with density gains of between 30 and 40%. Incorporating these new approaches and tools makes the voice creation process very efficient, as validated in this work through the preparation and evaluation of numerous synthesized voices. Perceptual scores comparable to those of the traditional process are achieved with 40 minutes of speech (a half-day recording) and without any manual post-processing. Finally, we take advantage of these results to enhance our synthesized voices with various expressive, multi-expressive and paralinguistic features
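The classic corpus condensation that the sentence-building algorithm replaces is typically a greedy covering of coverage units. The sketch below is an illustrative assumption, not the author's algorithm: it greedily selects sentences until no candidate contributes a new unit, with a toy unit extractor (adjacent character pairs) standing in for real units such as diphones or vocalic sandwiches.

```python
def greedy_condense(sentences, extract_units):
    """Greedy set-cover style corpus condensation: repeatedly keep the
    sentence that contributes the most not-yet-covered units."""
    remaining = list(sentences)
    covered, script = set(), []
    while remaining:
        best = max(remaining, key=lambda s: len(extract_units(s) - covered))
        if not extract_units(best) - covered:
            break                     # no remaining sentence adds a new unit
        script.append(best)
        covered |= extract_units(best)
        remaining.remove(best)
    return script, covered

# Toy stand-in for a phonetic unit inventory: adjacent character pairs.
pairs = lambda s: {s[i:i + 2] for i in range(len(s) - 1)}
script, covered = greedy_condense(["banana", "bandana", "nab"], pairs)
# script == ["bandana", "nab"]: "banana" adds no unit not already covered
```

The abstract's point is that building sentences to order, instead of selecting them from a fixed corpus as above, yields scripts 30 to 40% denser in covered units.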
Robust target detection for Hyperspectral Imaging by Joana Maria Frontera Pons( )

1 edition published in 2014 in English and held by 1 WorldCat member library worldwide

Hyperspectral imaging (HSI) exploits the fact that, for a given material, the amount of emitted radiation varies with wavelength. HSI sensors therefore measure the radiation of the materials within each pixel over a very large number of contiguous spectral bands, producing images that carry both spatial and spectral information. Classical adaptive detection methods generally assume a Gaussian background with a zero or known mean vector. When the mean vector is unknown, however, as is the case in hyperspectral imagery, it must be included in the detection process. In this work, we extend the classical detection methods to the case where both the covariance matrix and the mean vector are unknown. Moreover, the multivariate statistical distribution of the background pixels may depart from the classically used Gaussian hypothesis. The class of elliptical distributions has already been popularized for background characterization in HSI. Although these non-Gaussian models have been exploited in background modeling and detector design, the estimation of the parameters (covariance matrix, mean vector) is still generally performed with conventional Gaussian estimators. In this context, we analyze robust estimation methods better suited to these non-Gaussian distributions: the M-estimators. Detection methods coupled with these new estimators improve detection performance in a non-Gaussian environment while retaining the performance of conventional detectors in a Gaussian environment. They thus provide a unified framework for target detection and anomaly detection in HSI
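One concrete example of the robust M-estimators referred to above is Tyler's fixed-point estimator of the scatter matrix, a standard choice under elliptical background models because it down-weights heavy-tailed samples. The sketch below is a generic implementation on centered data (it assumes the mean vector has already been removed), not the detectors developed in the thesis; the tolerance, iteration cap and trace normalization are arbitrary choices.

```python
import numpy as np

def tyler_estimator(X, tol=1e-6, max_iter=100):
    """Fixed-point iteration for Tyler's M-estimator of scatter.
    X: (n, p) data matrix, assumed centered (mean already removed).
    Returns a scatter matrix normalized so that its trace equals p."""
    n, p = X.shape
    sigma = np.eye(p)
    for _ in range(max_iter):
        inv = np.linalg.inv(sigma)
        # Mahalanobis-type weights x^T Sigma^{-1} x: heavy-tailed
        # samples get small 1/d weights and are down-weighted.
        d = np.einsum('ij,jk,ik->i', X, inv, X)
        new = (p / n) * (X.T / d) @ X
        new *= p / np.trace(new)          # fix the scale ambiguity
        if np.linalg.norm(new - sigma, 'fro') < tol:
            sigma = new
            break
        sigma = new
    return sigma

# On a Gaussian background the estimate stays close to the true scatter:
X = np.random.default_rng(0).standard_normal((4000, 3))
S = tyler_estimator(X)
```

Plugging such an estimate (together with a robust mean estimate) into an adaptive detector in place of the sample covariance is the kind of substitution the abstract describes: little is lost in the Gaussian case, while heavy-tailed backgrounds no longer corrupt the detection threshold.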
 

Alternative Names
Ecole Doctorale STITS. Orsay, Essonne

Ecole supérieure d'électricité (Gif-sur-Yvette, Essonne). Ecole Doctorale Sciences et Technologies de l'Information des Télécommunications et des Systèmes

ED 422

ED STITS. Orsay, Essonne

ED422

STITS. Orsay, Essonne

Université Paris 11. Ecole Doctorale Sciences et Technologies de l'Information, des Télécommunications et des Systèmes

Université Paris-Sud 11. Ecole Doctorale Sciences et Technologies de l'Information, des Télécommunications et des Systèmes

Université Paris-Sud. Ecole Doctorale Sciences et Technologies de l'Information, des Télécommunications et des Systèmes

Languages
French (25)

English (10)