WorldCat Identities

Institut Pascal (Aubière, Puy-de-Dôme)

Overview
Works: 276 works in 277 publications in 2 languages and 277 library holdings
Roles: Other, Degree grantor
Publication Timeline
Most widely held works by Institut Pascal (Aubière, Puy-de-Dôme)
Analyse radiative des photobioréacteurs by Jérémi Dauchet( Book )

2 editions published in 2012 in French and held by 2 WorldCat member libraries worldwide

Photosynthesis engineering is a promising means of producing both energy carriers and fine chemicals in order to remedy the growing scarcity of fossil fuels. This is a challenging task, since it implies designing processes for solar biomass production with short time constants (a few days), whereas oil formation took hundreds of millions of years. This aim could be achieved by cultivating photosynthetic microorganisms in photobioreactors with optimal surface and volume kinetic performances. Above all, such an optimization necessitates a careful radiative study of the process. A radiative analysis of photobioreactors is proposed here that starts with the determination of the absorption and scattering properties of suspensions of photosynthetic microorganisms, from the knowledge of their morphological, metabolic and structural features. A model is constructed, implemented and validated for microorganisms with simple shapes; the extension of this approach to the treatment of complex shapes should eventually be straightforward. Then, multiple-scattering radiative transfer analysis is introduced and illustrated through different approximations that are relevant for the conceptualization of photobioreactors, leading to the construction of the physical pictures that are necessary for the optimization of the process. Finally, the Monte Carlo method is implemented in order to rigorously solve multiple scattering in complex geometries (geometries that correspond to an optimized design of the process) and to calculate the kinetic performances of the reactor. In this context, we introduce a novel methodological development that simplifies the treatment of the non-linear coupling between radiative transfer and the local kinetics of photosynthesis.
These simulation tools also benefit from the most recent developments in the field of the Monte Carlo method: integral formulation, zero-variance algorithms and sensitivity evaluation (a specific approach for the evaluation of sensitivities to geometrical parameters is developed here and shown to correspond to a simple implementation in a set of academic test configurations). Perspectives of this work are to take advantage of the developed analysis tools to stimulate reflection on photobioreactor intensification, and to extend the proposed approach to the study of photoreactive systems engineering in general
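The Monte Carlo treatment of multiple scattering summarized above can be illustrated with a minimal photon-tracking sketch: a slab-transmission estimator with exponential free paths and isotropic scattering. This is a generic textbook scheme, not the thesis' algorithms, and all coefficients below are illustrative.

```python
import math
import random

def mc_transmission(kappa, sigma, thickness, n_photons=20000, seed=0):
    """Estimate slab transmission by Monte Carlo photon tracking.

    kappa: absorption coefficient [1/m], sigma: scattering coefficient [1/m],
    thickness: slab depth [m].  Photons enter at x=0 travelling along +x;
    scattering is taken isotropic (a simplifying assumption).
    """
    rng = random.Random(seed)
    beta = kappa + sigma                          # extinction coefficient
    albedo = sigma / beta if beta > 0 else 0.0    # scattering probability
    transmitted = 0
    for _ in range(n_photons):
        x, mu = 0.0, 1.0                          # position, direction cosine
        while True:
            step = -math.log(rng.random()) / beta   # free path ~ Exp(beta)
            x += mu * step
            if x >= thickness:                    # photon exits the back face
                transmitted += 1
                break
            if x < 0:                             # backscattered out the front
                break
            if rng.random() >= albedo:            # absorption event
                break
            mu = 2.0 * rng.random() - 1.0         # isotropic re-emission
    return transmitted / n_photons
```

With no scattering the estimate converges to the Beer-Lambert transmission exp(-kappa * thickness), which gives a quick sanity check of the sampler.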
Hémorragies du post-partum immédiat : Estimations visuelles des pertes sanguines par les sages-femmes et les étudiants sages-femmes et prévalence des troubles psychologiques en cas d'hémorragie du post-partum immédiat : Etude PSYCHE by Marine Pranal( )

1 edition published in 2019 in French and held by 1 WorldCat member library worldwide

Objectives: to assess the accuracy of visual estimates of blood loss (EVPS) by midwives and midwifery students (Part 1), and to assess the psychological consequences of postpartum haemorrhage (PPH) (Part 2). Part 1: We performed a multicenter cross-sectional study (n = 16,656). Practicing French midwives and midwifery students were asked, via an online survey, to estimate the volume of blood loss shown in eight photographs. Each photograph was duplicated and randomly ordered in the questionnaire alongside a 50 mL reference. The overall percentage of exact estimates of the proposed loss volumes was low in both groups of respondents (34.1%). The PPH threshold was always successfully diagnosed, but severe PPH was identified in less than half of the cases. Intra-observer agreement was better for the extreme values (100 mL and 1500 mL), with higher agreement (weighted kappa ≥ 0.8) for the highest values (1000 mL and 1500 mL). Midwives tended to underestimate the amount of blood loss, but to a lesser extent than students. Regardless of respondent category or diagnosis (PPH or severe PPH), the specificity of the EVPS as a diagnostic test was greater than its sensitivity. Part 2: The second part was a monocentric cross-sectional, descriptive and etiologically oriented study of a cohort of women who gave birth at Clermont-Ferrand University Hospital [n = 1298; 528 women with PPH = exposed (GE) and 770 women without PPH = unexposed (GNE)]. The prevalence of depression in women after immediate PPH (<24 hours) was assessed at M2, M6 and M12 postpartum using the EPDS questionnaire. Anxiety was assessed at the same times with the STAI-YA and GAD-7 questionnaires, and post-traumatic stress disorder (PTSD) via the IES-R. All questionnaires were self-reported. The overall participation of women at M2 was 63.7% (GE: 63%; GNE: 64.1%). In exposed patients we found prevalences of 24.1% for postpartum depression (vs. GNE: 18.3%), 20.4% for anxiety (vs. GNE: 13.4%) and 12.9% for PTSD (vs. GNE: 7.8%). After adjustment, only the risk of PTSD at M2 remained significantly increased in women who had had PPH (adjusted OR = 2.11, 95% CI: 1.14-4.00). Analyses at M6 and M12 will be carried out when the follow-up is completed. Conclusion: Part 1: Student midwives tended to underestimate the amount of blood loss more often than midwives, despite the standard reference. PPH (≥ 500 mL) was always identified, but severe PPH (≥ 1000 mL) was identified in less than half of the cases. The difficulty of visual estimation must be emphasized during the initial training of students and in continuing professional training. Part 2: Postpartum depression, anxiety and PTSD are common in the postpartum period, including in women who have not had PPH. The occurrence of PTSD should be monitored at M2 in women with PPH. It is important to identify these disorders in all postpartum women in order to implement adapted, individualized follow-up and thus promote the mother-child bond
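The evaluation of visual estimates as a diagnostic test, with its sensitivity and specificity against the 500 mL PPH threshold, can be sketched as follows. The paired volumes in the usage example are hypothetical, not the study's data.

```python
def diagnostic_stats(estimates_ml, actual_ml, threshold_ml=500):
    """Sensitivity/specificity of visual estimates used as a PPH 'test'.

    A case is PPH-positive when the actual blood loss reaches the threshold;
    the visual estimate 'diagnoses' PPH when it also reaches the threshold.
    """
    tp = fp = tn = fn = 0
    for est, act in zip(estimates_ml, actual_ml):
        positive = act >= threshold_ml        # true disease status
        predicted = est >= threshold_ml       # visual-estimate verdict
        if positive and predicted:
            tp += 1
        elif positive:
            fn += 1
        elif predicted:
            fp += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity
```

Underestimation shows up here as missed positives (lower sensitivity), which matches the study's finding that specificity exceeded sensitivity.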
Localisation d'une flotte de véhicules communicants par approche de type SLAM visuel décentralisé by Guillaume Bresson( )

1 edition published in 2014 in French and held by 1 WorldCat member library worldwide

The localization of a vehicle using SLAM techniques (Simultaneous Localization And Mapping) has been extensively studied over the last 20 years. However, few approaches have tried to extend these algorithms to a fleet of vehicles, despite the many potential applications. That is the objective of this thesis. First, a monocular SLAM for a single vehicle was developed. It pairs an Extended Kalman Filter with a Cartesian landmark representation so as to produce accurate low-density maps. Indeed, the extension of SLAM to several vehicles requires permanent communication within the fleet; with only a few landmarks mapped, our approach scales nicely with the number of vehicles. Cheap sensors were favored (a single camera and an odometer) so as to ease the spread of multi-vehicle applications, and corrections were proposed to avoid the divergence problems induced by such a scheme. The experiments showed that our SLAM furnishes good localization results while remaining light and fast. The drift affecting every SLAM algorithm was also studied. Its integration into the SLAM process, thanks to a dedicated architecture and a dynamic model, ensures consistency even without an estimate of the drift. Loop closures and the integration of geo-referenced information become straightforward: they naturally correct all past positions while still maintaining consistency. In a multi-vehicle scenario this is a key aspect, as each vehicle drifts differently from the others, so it is important to take drift into account. Our SLAM algorithm was then extended to several vehicles. A generic structure allows any SLAM algorithm to replace our monocular SLAM. The multi-vehicle architecture avoids data incest (double-counting information) and handles network failures, be they communication breakdowns or latencies in receiving data.
The static part of the drift model takes into account the fact that the initial positions of the different vehicles composing the fleet might be unknown. Consistency is thus permanently preserved. Our approach was successfully tested in simulations and in real experiments with various settings (row or column convoys of 2 or 3 vehicles), in a fully decentralized way
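The Kalman-filter-plus-landmark machinery underlying such a monocular SLAM can be illustrated with a deliberately tiny one-dimensional analogue: one vehicle and one static landmark in the state vector, odometry for prediction, and a relative landmark observation for the update. The noise values are illustrative, and this toy linear filter only stands in for the thesis' EKF.

```python
import numpy as np

def kf_slam_step(x, P, u, z, q=0.1, r=0.05):
    """One predict/update cycle of a 1-D SLAM Kalman filter.

    State x = [vehicle position, landmark position]; u is the odometry
    motion, z measures the landmark relative to the vehicle (z = x_l - x_v).
    """
    F = np.eye(2)                                # landmark is static
    x = F @ x + np.array([u, 0.0])               # predict: move the vehicle
    P = F @ P @ F.T + np.diag([q, 0.0])          # only vehicle motion is noisy
    H = np.array([[-1.0, 1.0]])                  # z = x_l - x_v
    S = H @ P @ H.T + r                          # innovation covariance
    K = P @ H.T / S                              # Kalman gain
    x = x + (K * (z - (x[1] - x[0]))).ravel()    # correct with the innovation
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Each update tightens the landmark covariance, which is the mechanism that lets a sparse landmark map stay accurate while remaining cheap to communicate.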
End-to-End Ego-Vehicle Localization using Multi-sensor and a Low Cost Map by Abderrahim Kasmi( )

1 edition published in 2021 in English and held by 1 WorldCat member library worldwide

Nowadays, Autonomous Vehicles (AVs) are capable of accomplishing extraordinarily complicated tasks. Notwithstanding these achievements, several challenges remain; one of them is the ability of the autonomous car to perceive its environment in order to properly evaluate the situation with regard to the road. Part of this situation evaluation is knowledge of ego-localization. In the broadest sense, ego-localization is a concept that can relate to different problems. One interpretation consists of the knowledge of three key components: the road on which the vehicle is traveling (Road Level Localization, RLL), the position within the ego-lane (Ego-Lane Level Localization, ELL), and the lane on which the vehicle is traveling (Lane Level Localization, LLL). A reliable ego-localization system therefore has to fulfill the localization requirements of each of these components. The objective of this Ph.D. work is to propose a unified, generalized and modular localization system architecture that tackles every aspect of the localization problem. In addition, a focus is placed on the open-source map OpenStreetMap (OSM) to demonstrate that even a low-cost map can be used to obtain accurate localization. To this end, an end-to-end framework composed of several interconnected components is presented. The framework provides a localization solution on a digital map through a robust map-matching algorithm. Furthermore, it localizes the ego-vehicle with respect to the ego-lane through a top-down approach that exploits map priors to detect lane markings. Finally, it determines the lane on which the vehicle is traveling through a modular framework that handles ambiguities in lane-level localization.
The reliability and flexibility of the overall proposed architecture and of its elementary components have been intensively validated, first individually using different datasets, and then as a whole using a dataset collected in the region of Clermont-Ferrand
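The map-matching step that snaps the vehicle's position onto a digital map can be sketched as a nearest-segment projection. This is a simplified geometric stand-in for the thesis' robust map-matching algorithm: coordinates are assumed planar, and the road edges could come from OpenStreetMap ways.

```python
import math

def map_match(point, segments):
    """Snap a position fix to the closest road segment (toy map-matching).

    `segments` is a list of ((x1, y1), (x2, y2)) road edges; returns the
    snapped point and the index of the best segment.
    """
    best, best_d, best_i = None, float("inf"), -1
    px, py = point
    for i, ((x1, y1), (x2, y2)) in enumerate(segments):
        dx, dy = x2 - x1, y2 - y1
        L2 = dx * dx + dy * dy
        # parameter of the orthogonal projection, clamped to the segment
        t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / L2))
        qx, qy = x1 + t * dx, y1 + t * dy
        d = math.hypot(px - qx, py - qy)
        if d < best_d:
            best, best_d, best_i = (qx, qy), d, i
    return best, best_i
```

A real matcher would also weigh heading and connectivity to resolve ambiguous junctions, which is exactly where the thesis' robustness work comes in.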
Détection d'objets stationnaires par une paire de caméras PTZ by Constant Guillot( )

1 edition published in 2012 in French and held by 1 WorldCat member library worldwide

Video analysis for video surveillance needs good resolution in order to analyse video streams with maximum robustness. In the context of stationary object detection in wide areas, a good compromise between a limited number of cameras and high coverage of the area is hard to achieve. Here we use a pair of Pan-Tilt-Zoom (PTZ) cameras whose parameters (pan, tilt and zoom) can change. The cameras cycle through a predefined set of parameters chosen such that the entire scene is covered at an adequate resolution. For each triplet of parameters, a camera can be assimilated to a stationary camera with a very low frame rate, and is referred to as a view. First, each view is considered independently. A background subtraction algorithm, robust to changes in illumination and based on a grid of SURF descriptors, is proposed in order to separate background from foreground. The detection and segmentation of stationary objects is then done by re-identifying foreground descriptors against a foreground model. Then, in order to filter out false alarms and to localise the objects in the 3D world, the detected stationary silhouettes are matched between the two cameras. To remain robust to segmentation errors, instead of matching one silhouette to another, groups of silhouettes from the two cameras that mutually explain each other are matched. Each group then corresponds to a stationary object. Finally, triangulation of the top and bottom points of the silhouettes gives an estimate of the position and size of the object
Flexible and Smooth Trajectory Generation based on Parametric Clothoids for Nonholonomic Car-like Vehicles by Suhyeon Gim( )

1 edition published in 2017 in English and held by 1 WorldCat member library worldwide

Smooth path generation for car-like vehicles is one of the most important requisites for facilitating the broad use of autonomous navigation. This thesis proposes a smooth path generation method for nonholonomic vehicles which has inherent continuity of curvature and significant flexibility with respect to boundary conditions. The continuous-curvature path is constructed by composing multiple clothoids, including line and/or arc segments, where each clothoid is obtained by parameter regulation. From these properties the path is named pCCP (parametric Continuous Curvature Path); it provides a curvature diagram which facilitates smooth steering control for the path-following problem. The local pCCP problem is defined by initial and final tuple configurations (vehicle posture and steering angle). The problem is made as general as possible by including several cases. Local pCCP generation for a steady target pose is described in detail, where the problem is divided into three problems and each is further decomposed into several sub-cases. To give more flexibility to the proposed pCCP, a dynamic target is considered to obtain the dynamic pCCP (d-CCP). A simple but efficient framework to analyze the future status of obstacle avoidance is applied in a 4D configuration space (3D with the addition of a time axis); two avoidance maneuvers, front and rear avoidance, are applied and validated with several examples. Following a similar methodology for the performance criteria of pCCP generation, the human-CCP (h-CCP) is derived from experimental patterns of human driver samples. From several sub-experiments, human driving patterns for obstacle avoidance, lane changes and cornering were extracted, and these patterns were used to build the h-CCP (obtained in a similar way to the pCCP but with different optimization criteria) so as to considerably enhance passenger comfort
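The clothoid building block used throughout can be sampled numerically from its defining property: a curvature that varies linearly with arc length, k(s) = k0 + c*s, so that the heading is theta(s) = k0*s + c*s^2/2. This is a generic construction, not the thesis' pCCP parameter-regulation solver.

```python
import math

def clothoid_points(k0, c, s_max, n=1000):
    """Sample a clothoid with curvature k(s) = k0 + c*s from the origin.

    Positions follow by integrating (cos theta, sin theta) along arc length
    with the midpoint rule; returns a list of (x, y) points.
    """
    ds = s_max / n
    x, y = 0.0, 0.0
    pts = [(0.0, 0.0)]
    for i in range(n):
        s_mid = (i + 0.5) * ds                      # midpoint of the sub-arc
        theta = k0 * s_mid + 0.5 * c * s_mid ** 2   # heading at the midpoint
        x += math.cos(theta) * ds
        y += math.sin(theta) * ds
        pts.append((x, y))
    return pts
```

Setting c = 0 degenerates to a line (k0 = 0) or a circular arc (k0 ≠ 0), which is why lines and arcs compose naturally with clothoids in a continuous-curvature path.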
Etude des microcavités planaires ZnO dans le régime de couplage fort by Laurent Orosz( )

1 edition published in 2013 in French and held by 1 WorldCat member library worldwide

This thesis reports a spectroscopic study of the light-matter interaction in ZnO-based microcavities. We have examined several planar microcavities which differ from previous ones in their structures and epitaxial processes. The theoretical advantages that motivated these realizations are discussed and verified through experimental measurements of reflectivity and photoluminescence as a function of temperature and excitation intensity. Thanks to the optical characteristics of these new cavities, we have studied coherent light emission based on the condensation of polaritons at high temperature, up to 300 K. A high optical quality factor and a large Rabi splitting allow a deep analysis of the relationship between the photonic fraction of the polaritons and the threshold excitation value corresponding to the onset of the polariton laser effect. This work highlights two identified physical processes which contribute to the laser effect: the thermodynamic and kinetic regimes. Moreover, it appears that the exciton-phonon interaction constitutes a specific phenomenon which allows the polariton laser threshold to be reduced
Modélisation hydromécanique du bois : application au sapin blanc du Massif Central by Sung Lam Nguyen( )

1 edition published in 2016 in French and held by 1 WorldCat member library worldwide

This work concerns 3D modeling of the hydro-mechanical behavior of wood in general and of silver fir (Abies alba Mill.) in particular, taking into account the couplings between orthotropic, hydric, elastic, viscoelastic and mechano-sorptive effects, including the hydro-lock effect, i.e. a temporary locking of the mechanical strain during a period of drying under stress. The dissertation is divided into three parts comprising seven chapters. The first part examines the background and the problem of the hydro-mechanical behavior of wood, from its structure, the hygroscopic phenomenon and the swelling/shrinkage effect, to the various aspects of the hydro-mechanical behavior of wood under constant or variable moisture: orthotropy, viscoelasticity and mechano-sorptive effects, i.e. the complex interaction between mechanical loading and moisture variations. The bases for modeling are presented in the second chapter, such as the incremental formulation on a finite time step used to model 3D orthotropic viscoelastic behavior, and relevant mechano-sorptive models from the literature. From this literature review, we propose to model the mechano-sorptive effect as the sum of three elementary effects: irrecoverable mechano-sorption, mechano-sorptive creep and the hydro-lock effect. The first two effects are modeled by existing models, while the modeling of the hydro-lock effect is an original contribution of this work. The second part, divided into two chapters, is dedicated to building the 3D behavior model. The first chapter presents the mathematical developments leading to an analytical model. This model is based on the assumption that the total strain is partitioned into a sum of six elementary strains: hydric, instantaneous elastic, pure viscoelastic, hydro-lock, irrecoverable mechano-sorptive and mechano-sorptive creep. The variations of these elementary strains are established separately.
In particular, the evolution law of the hydro-lock strain, constructed on the basis of experimental observations, differs between the drying and moistening phases. An auxiliary stress, introduced in accordance with thermodynamic principles, solves the problem of recovering the hydro-lock strain in the moistening phase under zero or low stress. In parallel, a new rheological model is proposed to describe viscoelastic behavior at variable humidity. This model, equivalent to a generalized Maxwell model and/or a generalized Kelvin-Voigt model, is able to describe creep as well as relaxation. The second chapter of this part is devoted to transforming the analytical model into an incremental form on a finite time step. The contribution of each elementary part is established by exact resolution of the differential equations or Boltzmann integrals. The sum of the elementary forms thus obtained leads to the complete behavior law, which is similar to that of an equivalent thermo-elastic behavior. Because of the integration process, the calculation time step is finite but not necessarily small. This property is very important because it significantly reduces the computation time while maintaining very good accuracy. The last part is divided into three chapters. It presents the numerical implementation of the hydro-mechanical model in the finite element code Cast3m, followed by validation and applications to various classes of problems. The numerical algorithm is organized into independent modules: elementary procedures are built to perform specific functions and are called in a specific order by the main program. Model validation is made by comparison between simulated results and experimental data available in tension and bending. The last chapter of the thesis presents applications to reconstituted solid silver fir wood.
These applications show the ability of the model to predict the states of stress and strain in timber structures under mechanical loading and variable humidity
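The exact incremental integration over a finite time step described above can be illustrated on a single Kelvin-Voigt branch under constant stress: because the step update uses the closed-form exponential solution, the step size can be large without losing accuracy. This is a one-dimensional sketch; the thesis' 3D orthotropic, moisture-coupled formulation is far richer.

```python
import math

def kelvin_voigt_step(eps, sigma, E, tau, dt):
    """Advance one Kelvin-Voigt branch by an exact exponential update.

    Solves d(eps)/dt = (sigma/E - eps)/tau with the stress held constant
    over the step, so dt may be finite but not necessarily small.
    """
    a = math.exp(-dt / tau)
    return eps * a + (sigma / E) * (1.0 - a)

def creep(sigma, branches, t_end, dt):
    """Total creep strain of a generalized Kelvin-Voigt chain at t_end.

    `branches` is a list of (stiffness E, retardation time tau) pairs.
    """
    n = round(t_end / dt)
    eps = [0.0] * len(branches)
    for _ in range(n):
        eps = [kelvin_voigt_step(e, sigma, E, tau, dt)
               for e, (E, tau) in zip(eps, branches)]
    return sum(eps)
```

Running the same simulation with a coarse and a fine time step gives identical results under constant stress, which is the property that cuts computation time in the incremental formulation.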
A preliminary theoretical and experimental study of a photo-electrochemical cell for solar hydrogen production by Azin Eskandari( )

1 edition published in 2019 in English and held by 1 WorldCat member library worldwide

In order to meet the energy and climate challenges of the 21st century, one solution consists of developing processes that produce storable energy carriers by artificial photosynthesis, synthesizing solar fuels, in particular hydrogen, so as to valorize the solar resource. Understanding these processes and achieving high kinetic and energetic performance require the development of generic, robust and predictive knowledge models that treat radiative transfer as the physical process controlling the system at several scales, while also including the various other phenomena involved in the structure of the model. In this PhD work, the photo-reactive process at the heart of the study is the photo-electrochemical cell. More complex than a simple photoreactor, with a photo-anode and a (photo)cathode, the photo-electrochemical cell spatially dissociates the oxidation and reduction steps. Based both on the existing literature (mainly in the field of electrochemistry) and on the tools developed by the research team for radiative transfer and thermokinetic coupling formulation, it was possible to establish performance indicators for photo-electrochemical cells. In parallel with the establishment of this model, an experimental approach was undertaken, based first on a commercial Grätzel-type cell (DS-PEC), indicating the general trends of such photon energy converters, in particular a drop in energy efficiency with increasing incident photon flux density. A modular experimental device (Minucell) was also developed and validated in order to characterize photo-anodes of different compositions, such as chromophore-impregnated TiO2 electrodes for operation in Grätzel cells, or Fe2O3 hematite electrodes (SC-PEC) in which the semiconductor performs both photon absorption and charge-carrier conduction.
Above all, the Minucell device made it possible to test, characterize and model the behavior of a bio-inspired photo-electrochemical cell for H2 production using, at the photo-anode, a Ru-RuCat molecular catalyst (developed by ICMMO Orsay/CEA Saclay) and, at the cathode, a CoTAA catalyst (developed by LCEMCA Brest). Minucell was used to characterize each constituent element of a photo-electrochemical cell and then the cell as a whole, confirming the trends and observations obtained on energy efficiencies. This preliminary work opens up a wide range of research prospects, lays common ground between electrochemistry and photo-reactive systems engineering, and provides insights into the design and the kinetic and energetic optimization of photo-electrochemical cells for the production of hydrogen and solar fuels
Calcul de probabilités d'événements rares liés aux maxima en horizon fini de processus stochastiques by Jun Shao( )

1 edition published in 2016 in French and held by 1 WorldCat member library worldwide

Initiated within the framework of an ANR project (the MODNAT project) devoted to the stochastic modelling of natural phenomena and the probabilistic quantification of their dynamic effects on mechanical and structural systems, this thesis addresses the computation of probabilities of rare events related to the finite-horizon maxima of stochastic processes, under the following four constraints: (1) the set of processes considered must contain the four broad categories of processes encountered in random dynamics, namely stationary Gaussian, non-stationary Gaussian, stationary non-Gaussian and non-stationary non-Gaussian processes; (2) these processes may be described by their distributions, be functions of processes described by their distributions, be solutions of stochastic differential equations, or even be solutions of stochastic differential inclusions; (3) the events in question are exceedances of very high thresholds by the finite-horizon maxima of the processes considered, and these events are of very low occurrence, hence of very low probability (of the order of 10^-4 to 10^-8), owing to the high threshold values; and finally (4) the use of a Monte Carlo approach for this type of computation is ruled out, being too time-consuming given the preceding constraints. To solve such a problem, whose domain of interest extends well beyond probabilistic mechanics and structural reliability (it arises in all scientific fields connected with extreme value statistics, such as financial mathematics or economics), an innovative method is proposed, whose key idea arose from the analysis of the results of a large-scale statistical study carried out within the MODNAT project.
This study, which analysed the behaviour of the extreme values of the elements of a vast set of processes, brought to light two germ functions depending explicitly on the target probability (the first depending on it directly, the second indirectly via an auxiliary conditional probability that is itself a function of the target probability) and possessing remarkable regularity properties recurring across all the processes of the database; the method is built on the joint exploitation of these properties and of a low-level approximation / high-level extrapolation principle. Two versions of the method are first proposed, differing in the choice of the germ function; in each of them this function is approximated by a polynomial. A third version is also developed, based on the formalism of the second version but approximating the germ function by a Pareto-type survival function. The numerous numerical results presented attest to the remarkable efficiency of the first two versions and show that they are of comparable accuracy. The third version, slightly less efficient than the first two, has the advantage of establishing a direct link with extreme value theory. In each of its three versions, the proposed method clearly constitutes an advance over current methods dedicated to this type of problem. By its structure, it also offers the advantage of remaining operational in an industrial context
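The low-level approximation / high-level extrapolation principle can be sketched as follows: exceedance probabilities of the maxima are estimated empirically at moderate thresholds, a polynomial is fitted on a log scale, and the fit is extrapolated to the rare level. This is a loose illustration of the germ-function idea, not the thesis' method; in the usage example a synthetic sample with an exponential tail stands in for the finite-horizon maxima.

```python
import numpy as np

def tail_extrapolation(maxima, low_levels, u_high):
    """Extrapolate P(max > u_high) from moderate-threshold estimates.

    Empirical exceedance probabilities at the (easily estimated) levels in
    `low_levels` are fitted by a quadratic in u on a log scale; the fit is
    then evaluated at the rare level u_high.
    """
    maxima = np.asarray(maxima)
    levels = np.asarray(low_levels, dtype=float)
    probs = np.array([(maxima > u).mean() for u in levels])
    keep = probs > 0                     # drop empty levels before the log
    coeffs = np.polyfit(levels[keep], np.log(probs[keep]), 2)
    return float(np.exp(np.polyval(coeffs, u_high)))
```

On a sample whose log-survival is exactly linear, the extrapolation reaches probabilities several orders of magnitude below anything resolvable by direct counting at the same sample size, which is the computational point of the approach.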
Amélioration par la gestion de redondance du comportement des robots à structure hybride sous sollicitations d'usinage by Richard Cousturier( )

1 edition published in 2017 in French and held by 1 WorldCat member library worldwide

Industrial robots have evolved fundamentally in recent years to meet industrial demands for ever more capable machines and mechanisms. This has led to new, better-adapted anthropomorphic robots that open the way to more complex tasks under heavy loads, such as machining. The study of the behaviour of anthropomorphic, parallel and hybrid robot structures shows both kinematic and dynamic anisotropy, which affects the achievable accuracy. This thesis studies the integration of kinematic redundancies, which partly overcome this problem by optimally placing the task to be performed in a region of the workspace compatible with the required capabilities. This work improved our optimization tool and tested it both on a finite-element model of the robot and on the real robot. The thesis thus contributes to: the definition of criteria suited to the execution of complex, highly loaded tasks for the management of kinematic redundancies; the identification of the behaviour of the structures under load by metrological means (laser tracker); the optimization of the behaviour, improving the quality of machining operations; and finite-element modelling of the robots accounting for the identified link and joint stiffnesses
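Kinematic redundancy management of the kind described can be sketched with the classical null-space projection: the first term realizes the task motion, and the second spends the extra degrees of freedom on a secondary objective (e.g. a stiffer posture) without perturbing the task. This is a textbook scheme offered only as an illustration of the principle, not the thesis' optimization criteria.

```python
import numpy as np

def redundant_ik_step(J, dx, z):
    """Joint velocities for a redundant robot via null-space projection.

    q_dot = pinv(J) @ dx + (I - pinv(J) @ J) @ z, where J is the task
    Jacobian, dx the desired task-space velocity, and z a secondary
    objective direction projected into the null space of the task.
    """
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J      # null-space projector of J
    return J_pinv @ dx + N @ z
```

Whatever z is chosen, J @ q_dot still equals dx, which is what makes the redundant degrees of freedom freely available for criteria such as stiffness or workspace placement.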
Modélisation de l'influence des défauts de surface sur le comportement en fatigue de nuances d'acier innovantes by Wichian Niamchaona( )

1 edition published in 2019 in French and held by 1 WorldCat member library worldwide

Steel manufacturers nowadays develop high-strength steels, such as the CP800 grade, for automotive applications with the aim of lightening vehicles. The fatigue behaviour of such steels is strongly sensitive to the surface defects generated by the forming or cutting of steel sheets. Surface defects of different types and sizes were machined by electro-erosion on CP800 specimens so as to be similar to the surface defects observed on steel sheets after stamping or cutting. The present study deals with the numerical and experimental simulation of the fatigue behaviour of these specimens. The modelling of the influence of defects on fatigue behaviour uses either the critical plane approach or the integral approach of multiaxial fatigue. The influence of the stress gradient also contributes to the fatigue life prediction of the defective samples. The numerical simulation aims to assess the stress states and stress gradient fields within the tested specimens in the vicinity of their surface defects. Accounting for stress gradients strongly improves the ability of multiaxial fatigue criteria to accurately predict the actual fatigue resistance of defective specimens. It also shows that multiaxial criteria have to be calibrated on fatigue test results with high stress gradients to properly predict the fatigue behaviour of high-strength steels with small surface defects
Spectroscopie de condensats polaritoniques dans des microcavités et guides d'onde à base de GaN et ZnO by Omar Jamadi( )

1 edition published in 2018 in French and held by 1 WorldCat member library worldwide

This manuscript is devoted to polariton condensates in two wide-band-gap semiconductors: GaN and ZnO. The first part of this work focuses on the optical spectroscopy of two planar microcavities (one GaN, the other ZnO) sharing the same structure and the same photonic properties. The strong-coupling and polariton-lasing regimes were observed from 5 K to 300 K in both microcavities. Phase diagrams revealed the varying impact of resonances with LO phonons on the lowering of the laser threshold. The study of the GaN microcavity was pushed to 350 K, and we demonstrated, for the first time at this temperature, the persistence of the strong-coupling regime and of polariton lasing under optimal excitation conditions. The second part of this work focuses on ZnO waveguides. Besides the observation of the strong-coupling regime from 5 K to 300 K, our study highlighted a new lasing effect in this geometry: the horizontal polariton laser
Optimisation des structures nanophotoniques pour le photovoltaïque by Mamadou Aliou Barry( )

1 edition published in 2018 in French and held by 1 WorldCat member library worldwide

The present manuscript deals with the problem of design in photonics, i.e. determining the best way to assemble nanometric elements to reach a desired optical response. Different algorithms are tested. One algorithm in particular seems well adapted to this kind of problem, and allows the recovery of the most emblematic photonic structures present in nature on the integument of insects or on the wings of butterflies. Applied to the case of an anti-reflective coating for a photovoltaic device, the algorithm produced a particularly counter-intuitive but efficient structure. This clearly demonstrates the potential of such an approach
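Global optimizers of the kind tested in such design problems can be illustrated with a compact differential-evolution sketch (rand/1/bin scheme). The objective here is a toy function standing in for an optical figure of merit such as averaged reflectance; the thesis' actual algorithm and merit function may differ.

```python
import numpy as np

def differential_evolution(f, bounds, pop=20, gens=100, F=0.8, CR=0.9, seed=0):
    """Minimal differential-evolution minimizer (rand/1/bin).

    `bounds` is a list of (low, high) per parameter, e.g. layer thicknesses
    of a candidate coating; `f` scores a design (lower is better).
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, len(bounds)))     # initial population
    fit = np.apply_along_axis(f, 1, X)
    for _ in range(gens):
        for i in range(pop):
            # mutate three distinct other members: a + F * (b - c)
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3,
                                   replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(len(bounds)) < CR         # binomial crossover
            cross[rng.integers(len(bounds))] = True      # keep >= 1 mutant gene
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft <= fit[i]:                             # greedy selection
                X[i], fit[i] = trial, ft
    best = int(fit.argmin())
    return X[best], fit[best]
```

Because it only ever evaluates the black-box score, the same loop can drive a rigorous electromagnetic solver instead of the toy objective, which is what makes such population methods attractive for photonic structure design.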
Optimisation du coût du cycle de vie des structures en béton armé by Lara Saad( )

1 edition published in 2016 in English and held by 1 WorldCat member library worldwide

Civil engineering structures, particularly reinforced concrete bridges, must be designed and managed to meet society's transport and communication needs. It is essential to guarantee the proper and safe operation of these structures, since failures can lead to transport disruptions, catastrophic losses for operators and loss of human life, with serious short- and long-term economic, societal and environmental impacts. Managers undertake various activities to maintain adequate long-term performance and operation while satisfying financial and safety constraints. Ideally, they can resort to optimization techniques to establish trade-offs between reducing the life-cycle cost (LCC) and maximizing the service life. This requires the development of life-cycle analysis, reliability analysis and structural optimization. Current approaches to the design and management of structures based on life-cycle cost analysis show the following needs: (1) an integrated and systematic approach to consistently model degradation processes, traffic loads, aging, and the direct and indirect consequences of failure; (2) a full consideration of the economic, structural and stochastic dependencies between the different elements of the structure; (3) an approach for efficiently modeling a structural system composed of several interdependent elements; (4) an assessment of the consequences of degradation and of load redistribution between elements, taking into account the redundancy of the system and the configuration of the structure; (5) a design and maintenance optimization method that preserves the reliability requirement while considering the robustness of the decision.
The overall objective of this thesis is to provide improved procedures that can be applied to the reliability-based and robust design and management of reinforced concrete structures, reducing the costs borne by managers and users while taking into account the dependencies between elements. In the first part of this thesis, a literature review of reliability-based design and maintenance procedures is presented, and the various components of the LCC are developed. An approach is then proposed for the design of structures that takes user costs into account by integrating them into the life-cycle cost function. The coupled corrosion-fatigue model is also considered in the design optimization. Maintenance planning is then developed, considering the different types of interaction between elements, in particular economic, structural and stochastic dependencies. This model uses fault-tree analysis and conditional probabilities to account for dependencies in maintenance planning. The consequences of degradation and load redistribution are taken into account in the proposed approach. In addition, a practical method for computing the reliability of a system composed of several interdependent components is proposed, through a redundancy factor computed by mechanical modeling. Finally, a new optimization procedure is proposed that accounts for uncertainties in the system and for the structural capacity to adapt to intrinsic variabilities. The proposed procedure handles uncertainty and variability in a consistent formulation, validated by means of numerical applications. (...)
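The trade-off between up-front design cost and expected failure cost can be illustrated with a toy life-cycle cost function. This is a minimal sketch: the function name, the cost breakdown and all numeric values below are illustrative assumptions, not figures from the thesis.

```python
def life_cycle_cost(c_design, c_maintenance, p_failure,
                    c_failure_direct, c_failure_indirect):
    """Toy life-cycle cost: construction plus maintenance plus the
    expected (probability-weighted) cost of failure, including the
    indirect costs borne by users (traffic disruption, detours)."""
    return c_design + c_maintenance + p_failure * (c_failure_direct
                                                   + c_failure_indirect)

# A stronger design costs more up front but lowers the failure probability;
# the optimizer's job is to find the cheaper total over the service life.
lcc_light = life_cycle_cost(100.0, 20.0, 1e-3, 5000.0, 20000.0)   # 145.0
lcc_robust = life_cycle_cost(130.0, 20.0, 1e-4, 5000.0, 20000.0)  # 152.5
```

With these made-up numbers the lighter design wins; with a higher indirect failure cost (heavier traffic) the ranking flips, which is exactly the kind of trade-off the LCC optimization explores.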
Commande locale décentralisée de robots mobiles en formation en milieu naturel by Audrey Guillet( )

1 edition published in 2015 in French and held by 1 WorldCat member library worldwide

This thesis focuses on the control of a formation of wheeled mobile robots travelling in off-road conditions. The goal is to follow a reference trajectory, known beforehand entirely or partially. Each robot of the fleet has to track this trajectory while coordinating its motion with the other robots in order to maintain a formation described as a set of desired distances between vehicles. The off-road context has to be considered thoroughly, as it creates perturbations in the motion of the robots: the contact of the tire on an irregular and slippery ground induces significant slipping and skidding. These phenomena are hardly measurable with direct sensors, so an observer is set up to estimate their value. The skidding effect is included in the evolution of each robot as a side-slip angle, thus creating an extended kinematic model of evolution. From this model, adaptive control laws on steering angle and velocity are designed independently for each robot. These respectively control the lateral distance to the trajectory and the curvilinear distance of the robot to a target. Predictive control techniques then extend these control laws to account for the actuators' behavior, so that positioning errors due to the delay of the robot's response to commands are cancelled. The elementary velocity control law ensures an accurate longitudinal positioning of a robot with respect to a target. It serves as a base for a global fleet control strategy, which breaks down the overall formation-keeping goal into a local positioning objective for each robot. A bidirectional control strategy is designed, in which each robot defines two targets: the immediately preceding and following robots in the fleet. The velocity control of a robot is finally defined as a linear combination of the two velocity commands obtained by applying the elementary control law to each target.
The linear combination parameters are investigated: first, constant parameters for which the stability of the formation is proved through Lyapunov techniques; then variable coefficients, in order to adapt the overall behavior of the formation in real time. The formation configuration can indeed evolve, for application purposes and to guarantee the safety of the robots. To fulfill this latter requirement, each robot of the fleet estimates in real time a minimal stopping distance in case of emergency, and two avoidance trajectories to get around the preceding vehicle if it suddenly stops. Given the initial configuration of the formation and the computed emergency behaviors, the desired distances between the robots can be adapted so that the new configuration ensures the safety of every robot in the formation against potential collisions
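The bidirectional blending described above can be sketched in a few lines. This is an illustrative skeleton only: the function name and the weighting scheme with a single scalar `alpha` are assumptions, a simplification of the variable coefficients studied in the thesis.

```python
def formation_velocity(v_to_leader, v_to_follower, alpha):
    """Bidirectional formation control sketch: a robot's velocity command
    is a linear combination of the two elementary commands computed with
    respect to its two targets (the immediately preceding and following
    robots). alpha in [0, 1] weights the preceding robot, (1 - alpha)
    the following one."""
    assert 0.0 <= alpha <= 1.0
    return alpha * v_to_leader + (1.0 - alpha) * v_to_follower

# alpha = 1 reduces to pure leader-following; alpha = 0.5 balances
# the positioning errors toward both neighbours.
v = formation_velocity(1.2, 0.8, 0.5)   # 1.0 m/s
```

Making `alpha` time-varying is what lets the fleet reshape its behavior online, subject to the Lyapunov stability conditions established for the constant-parameter case.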
Identification modale opérationnelle des robots d'usinage en service by Asia Maamar( )

1 edition published in 2019 in French and held by 1 WorldCat member library worldwide

The variation of the modal parameters of machining robots in service has a significant adverse influence on machining stability, which degrades the quality of the workpiece and reduces tool life. However, in the presence of strong harmonic excitation, the application of Operational Modal Analysis (OMA) is not straightforward. Firstly, the issue of choosing the most appropriate OMA method in the presence of harmonic components is addressed. For comparison purposes, the modified Enhanced Frequency Domain Decomposition (EFDD) method, the Stochastic Subspace Identification (SSI) method, the PolyMAX method and the Transmissibility Function Based (TFB) method are investigated. The results obtained lead to the adoption of the TFB method for the OMA of machining robots. For an accurate modal identification procedure, the OMA of a machine tool is conducted first. This is a preparatory step to verify the performance of the chosen method under machining conditions: a machine tool is a rigid structure and therefore shows less variation in its dynamic behavior than a machining robot. Results demonstrate the efficiency of the TFB method in identifying the machine tool's modal parameters even in the presence of preponderant harmonic components. Finally, the OMA of the machining robot ABB IRB 6660, whose structure is flexible compared with a machine tool, is carried out along a machining trajectory. The results allow the identification of a modal basis of the machining robot, illustrating the evolution of its modal behavior in service. The main novelty of this thesis lies in the development of a robust procedure for the operational modal identification of machining robots in service, which makes it possible to continuously follow the variations of their modal parameters
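A basic building block of the TFB family of methods is the transmissibility between two measured responses. The sketch below shows a generic Welch-based estimate of that quantity, not the thesis's exact formulation; the function name and parameter values are illustrative.

```python
import numpy as np
from scipy.signal import csd, welch

def transmissibility(x_ref, x_out, fs, nperseg=256):
    """Generic estimate of the transmissibility between two measured
    responses: cross-spectrum of (reference, output) divided by the
    auto-spectrum of the reference. TFB methods exploit the fact that
    transmissibilities estimated under different loading conditions
    intersect at the system poles, which harmonics do not corrupt."""
    f, p_cross = csd(x_ref, x_out, fs=fs, nperseg=nperseg)
    _, p_auto = welch(x_ref, fs=fs, nperseg=nperseg)
    return f, p_cross / p_auto

# Sanity check on synthetic data: if the output is a scaled copy of the
# reference, the transmissibility magnitude is flat at the scale factor.
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
f, t = transmissibility(x, 2.0 * x, fs=1000.0)
```

In practice the two signals would be accelerometer channels recorded on the robot during machining, and the poles would be picked from several such functions estimated along different portions of the trajectory.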
Gestion de données manquantes dans des cascades de boosting : application à la détection de visages by Pierre Bouges( )

1 edition published in 2012 in French and held by 1 WorldCat member library worldwide

This thesis was carried out in the ISPR group (ImageS, Perception Systems and Robotics) of the Institut Pascal, within the ComSee team (Computers that See). My research is part of a project called Bio Rafale, created by the company Vesalis in 2008 and funded by OSEO. Its goal is to improve security in stadiums by identifying dangerous fans. This work deals with face detection, the first step in the project's processing chain. The most efficient detectors use a cascade of boosted classifiers. The term cascade refers to a sequential succession of several classifiers; the term boosting refers to a family of learning algorithms that linearly combine several weak classifiers. The detector selected for this thesis also uses a cascade of boosted classifiers. Training such a cascade requires a training database and an image feature; here, covariance matrices are used as the image feature. The limits of an object detector are fixed by its training stage. One of our contributions is to adapt an object detector to overcome some of these limits. The proposed adaptations lead to a problem of classification with missing data. A probabilistic formulation of the cascade is then used to incorporate the uncertainty introduced by the missing data. This formulation involves estimating posterior probabilities and computing new rejection thresholds at each level of the modified cascade. For these two problems, several solutions are proposed and extensive tests are performed to find the best configuration. Finally, our solution is applied to the detection of turned or occluded faces using only an upright-face detector. Detecting turned faces requires a 3D geometric model to adjust the position of the subwindow associated with each weak classifier
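The sequential rejection logic of a boosted cascade can be sketched as follows. This is a minimal illustration: the stage structure and thresholds are hypothetical, and treating a missing feature as a neutral zero contribution is a crude stand-in for the posterior-probability formulation actually developed in the thesis.

```python
def cascade_predict(x, stages):
    """Evaluate a cascade of boosted stages. Each stage is a list of
    (weak_classifier, weight) pairs plus a rejection threshold; a window
    is rejected as soon as one stage score falls below its threshold.
    Weak classifiers returning None (feature unavailable, e.g. in an
    occluded region) contribute nothing to the stage score."""
    for weak_learners, threshold in stages:
        score = 0.0
        for clf, weight in weak_learners:
            response = clf(x)        # +1 / -1, or None if feature missing
            if response is not None:
                score += weight * response
        if score < threshold:
            return False             # rejected at this stage
    return True                      # accepted by every stage

# Two-stage toy cascade on a scalar "feature".
stage1 = ([(lambda x: 1 if x > 0 else -1, 1.0)], 0.0)
stage2 = ([(lambda x: 1 if x > 5 else -1, 0.5),
           (lambda x: None, 0.5)], 0.0)   # second feature always missing
cascade_predict(7.0, [stage1, stage2])    # True
cascade_predict(-1.0, [stage1, stage2])   # False
```

The thesis goes further: instead of zeroing out missing responses, it re-estimates posterior probabilities and recomputes the rejection thresholds of each stage so the cascade stays calibrated.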
Comportement des assemblages mixtes bois-métal avec trous oblongs by Edouard Cavène( )

1 edition published in 2019 in French and held by 1 WorldCat member library worldwide

Nowadays, hybrid structures are common for architectural and environmental reasons. In that context, timber-steel structures are very relevant because they combine lightness with large slenderness. However, combining timber, which is hygroscopic, with steel, which is not, raises a problem of cracking in the connection zone. Indeed, the large number of connectors in this part of the structure restrains the swelling and shrinkage deformations of the timber and thus creates cracks. In order to limit the effect of the connection on timber cracking, the present work proposes to release degrees of freedom by using slotted holes in the steel plates of bolted timber-steel connections. Given the lack of studies on bolted cover plates with slotted holes, a large part of this work analyzes the behavior of such connections through an experimental study using a full-field measurement technique. The first part provides a better understanding of the behavior of bolted cover plates with slotted holes through load-displacement curves, failure modes and strain analyses obtained with the Digital Image Correlation (DIC) technique. This analysis highlights two types of behavior: the first is mostly due to bending of the end-distance area, whereas the second is due to bearing. An analytical model predicting the initial stiffness of such connections is then proposed, based on numerical results and on experimental results obtained by DIC. In the last part of the study, an experimental campaign evaluates the effect of slotted holes in the steel plates of bolted timber-steel connections under bending moment. The study shows that using slotted holes has no negative impact on the short-term behavior of bolted timber-steel connections and that, in certain circumstances, an increase in resistance is observed
Modélisations et stratégie de prise pour la manipulation d'objets déformables by Lazher Zaidi( )

1 edition published in 2016 in French and held by 1 WorldCat member library worldwide

Dexterous manipulation is an important topic in robotics research, yet few works have addressed the manipulation of deformable objects. New applications in surgery, in the food industry and in personal-assistance services require mastering the grasping and manipulation of deformable objects. This thesis addresses the manipulation of deformable objects by anthropomorphic mechanical grippers such as articulated multi-fingered hands. This task requires considerable expertise in mechanical modeling and control: modeling of the interactions, tactile and visual perception, and position and force control of the finger motions to ensure grasp stability. The work presented in this thesis focuses on modeling the grasping of deformable objects. To this end, we used a discretization based on nonlinear mass-spring systems to model deformable bodies undergoing large displacements and deformations at a low computational cost. To predict the interaction forces between a robotic hand and a deformable object, we proposed an original approach based on a visco-elasto-plastic rheological model to evaluate the tangential contact forces and describe the transition between sticking and sliding modes. The contact forces are evaluated at the nodal points as a function of the relative motion between the fingertips and the facets of the mesh of the manipulated object's surface. Another contribution of this thesis is the use of this modeling in the planning of 3D deformable-object manipulation tasks. This planning consists in determining the optimal hand configuration for grasping the object, as well as the trajectories to follow and the forces to apply with the fingers in order to control the object's deformation while ensuring the stability of the operation.
The experimental validation of this work was carried out on two robotic platforms: a Barrett hand mounted on an Adept S1700D manipulator arm, and a Shadow hand mounted on a Kuka LWR4+ manipulator arm
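The core of a nonlinear mass-spring discretization is the force each spring exerts on its end nodes. The sketch below is illustrative only: the linear-plus-cubic stiffness law and the coefficient names are assumptions standing in for whatever nonlinear law the thesis actually uses.

```python
import numpy as np

def spring_force(p_i, p_j, rest_length, k, k_nl):
    """Force on node i from a nonlinear spring linking nodes i and j:
    a linear term plus a cubic term in the elongation, so the apparent
    stiffness grows with deformation (k and k_nl are illustrative)."""
    d = np.asarray(p_j, float) - np.asarray(p_i, float)
    length = np.linalg.norm(d)
    stretch = length - rest_length
    magnitude = k * stretch + k_nl * stretch**3
    return magnitude * d / length   # pulls i toward j when stretched

# Spring at twice its rest length: stretch = 1, force = (k + k_nl) along +x.
f = spring_force([0.0, 0.0], [2.0, 0.0], rest_length=1.0, k=10.0, k_nl=2.0)
# f -> array([12., 0.])
```

Summing such forces over all springs incident to a node, then integrating the node dynamics in time, gives the deformable-body simulation; the contact forces from the rheological model enter the same nodal balance.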
 

Languages
French (17)

English (4)