WorldCat Identities

Roustant, Olivier (1973-....).

Overview
Works: 16 works in 18 publications in 2 languages and 23 library holdings
Roles: Opponent, Other, Author, Thesis advisor, Contributor
Most widely held works by Olivier Roustant
Produits dérivés climatiques : aspects économétriques et financiers by Olivier Roustant( Book )

3 editions published between 2003 and 2013 in French and held by 4 WorldCat member libraries worldwide

This thesis is one of the first studies of weather derivatives. It first examines the modelling of temperature, which is the most common underlying climate variable (Chapter 1). A univariate autoregressive model with periodic volatility is generally appropriate to describe its dynamics, in particular for French data. The thesis then addresses some financial aspects of climate risks. In particular, the quasi-independence of the market from temperature seems to justify the practice of pricing with an actuarial approach (Chapter 2). Finally, it quantifies the model risk associated with this practice (Chapter 3). While futures prices are robust to modelling errors, significant uncertainties are highlighted around option prices. The source of the errors is identified: the deterministic trend and seasonality components of the mean of the temperature process
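The kind of temperature dynamics described above (a deterministic trend and seasonality plus an autoregressive residual with periodic volatility) can be sketched as follows. This is an illustrative toy simulation, not the model fitted in the thesis; all coefficients are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n_days = 3 * 365
t = np.arange(n_days)

# Deterministic part: linear trend plus annual seasonality of the mean
mean = 12.0 + 0.001 * t + 8.0 * np.sin(2 * np.pi * t / 365)

# Periodic volatility: the noise is larger in winter than in summer
sigma = 2.0 + 0.8 * np.cos(2 * np.pi * t / 365)

# AR(1) dynamics on the residuals around the seasonal mean
phi = 0.7
resid = np.zeros(n_days)
for i in range(1, n_days):
    resid[i] = phi * resid[i - 1] + sigma[i] * rng.standard_normal()

temperature = mean + resid
```

Fitting such a model to data would then proceed by estimating the trend, seasonality, AR coefficient and volatility pattern, which is exactly where the thesis locates the dominant source of model risk for option prices.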
Échantillonnages Monte Carlo et quasi-Monte Carlo pour l'estimation des indices de Sobol' : application à un modèle transport-urbanisme by Laurent Gilquin( )

1 edition published in 2016 in French and held by 2 WorldCat member libraries worldwide

The development and use of integrated land use and transport (LUTI) models have become standard for representing the interactions between land use and the transport of goods and people over a territory. These models are often used as decision-support tools for urban planning policies. LUTI models, and mathematical models more generally, are mostly built from complex numerical codes. These codes very often involve parameters whose uncertainty is poorly known and can potentially have a strong impact on the model outputs. Global sensitivity analysis methods are powerful tools for studying the influence of a model's parameters on its outputs. In particular, methods based on the computation of Sobol' sensitivity indices make it possible to quantify the influence of each parameter and to identify interactions between parameters. In this thesis, we favour the method based on replicated designs of experiments, also called the replication method. This method has the advantage of requiring only a relatively small number of model evaluations to compute the first-order and second-order Sobol' indices. The thesis focuses on extensions of the replication method to cope with constraints arising from our application to the Tranus LUTI model, such as the presence of correlation between parameters and the handling of multivariate outputs. Our work also proposes a recursive approach for the sequential estimation of Sobol' indices. The recursive approach relies both on the iterative construction of Latin hypercubes and stratified orthogonal arrays and on the definition of a new stopping criterion. It provides better accuracy in the estimation of the indices while allowing earlier sets of model evaluations to be recycled. We also propose to combine such an approach with quasi-Monte Carlo sampling. Finally, we present an application of our contributions to the calibration of the Tranus LUTI model
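For readers unfamiliar with Sobol' indices, a minimal Monte Carlo pick-freeze estimator (a Saltelli-style scheme, not the replicated designs studied in the thesis) can be sketched on the classical Ishigami test function:

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    # Classical sensitivity-analysis test function on [-pi, pi]^3
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

rng = np.random.default_rng(1)
n, d = 200_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
yA, yB = ishigami(A), ishigami(B)
var_y = yA.var()

S1 = []
for i in range(d):
    BAi = B.copy()
    BAi[:, i] = A[:, i]          # B with column i "frozen" from A
    S1.append(np.mean(yA * (ishigami(BAi) - yB)) / var_y)
```

For a=7 and b=0.1, the analytical first-order indices are approximately 0.31, 0.44 and 0, so the third input is correctly flagged as having no first-order effect (its influence is purely interactive).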
The tail dependograph by Cécile Mercadier( )

1 edition published in 2019 in English and held by 2 WorldCat member libraries worldwide

On the choice of the low-dimensional domain for global optimization via random embeddings by Mickaël Binois( )

1 edition published in 2019 in English and held by 2 WorldCat member libraries worldwide

Computer experiments with functional inputs and scalar outputs by a norm-based approach by Thomas Muehlenstaedt( )

1 edition published in 2016 in English and held by 2 WorldCat member libraries worldwide

Méthodes socio-statistiques pour l'aide à la décision en milieu industriel : Application à la gestion des capacités d'un système d'information en industrie micro-électronique by Michel Lutz( )

1 edition published in 2013 in French and held by 1 WorldCat member library worldwide

Industrial data provide material for decision-making. The work presented concerns the transformation of raw data into knowledge, in order to contribute to an organisation's knowledge system and improve its decision-making system. A decision-support process is proposed. It involves the organisation's stakeholders and the use of formal methods. It first analyses and formalises the decision problems, then builds a quantitative decision aid. This methodology is applied to a specific problem: IT capacity management in a STMicroelectronics plant. Managers must balance the cost of the IT infrastructure against the level of service offered. Our process provides relevant support, overcoming two challenges frequently encountered in capacity management: the complexity of IT systems and the need to account for business activity. Framed within the ITIL reference framework, the application of the process yields predictive models relating the activity of the IT servers to the industrial activity. It also allows the validity of the models, as well as the daily activity of the information system, to be monitored dynamically. Our work formalises knowledge quantitatively, promotes its use in decision processes, and ensures its evolution over time. This research lays foundations for a wider exploitation of data from production systems, in the context of decision-support systems and Big Data perspectives
Estimation d'état et modélisation inverse appliquées à la pollution sonore en milieu urbain by Antoine Lesieur( )

1 edition published in 2021 in English and held by 1 WorldCat member library worldwide

Noise pollution is a public health problem well identified by health authorities. In order to establish the noise exposure of populations, noise maps are regularly generated for the main sources of noise. For road traffic, they result from simulations that estimate the noise level from traffic data, meteorological data, topography, building distribution and vegetation. The resulting maps are an estimate of the spatial distribution of average noise levels over the study area. These data are spatially limited, and uncertainties remain that prevent precise knowledge of the annual average traffic on all the roads in the study area. The accuracy of the noise maps is also limited by computation time, which imposes fairly simple models of acoustic propagation. In addition to noise maps, stakeholders carry out noise measurement campaigns, recording the temporal evolution of noise levels at a series of given locations. These data reflect the actual noise level more realistically than the results of noise map simulations, but they are very local. They are also expensive, which prohibits an extensive gridding of an area with a network of sensors. Combining the modeling and measurement approaches increases the amount of data useful for the production of noise maps: a map that combines the two can overcome the limitations of simulation and measurement and provide dynamic, real-time mapping of noise levels. The objective of this thesis is to implement so-called data assimilation methods to unite the benefits of both approaches, simulation and observation
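A single analysis step of such a data assimilation scheme can be sketched with the BLUE (optimal interpolation) update, here on a made-up four-point noise map and one hypothetical sensor; the real application involves full maps and sensor networks:

```python
import numpy as np

# Hypothetical setup: simulated noise levels (dB) at 4 map points,
# with one sensor observing the first point.
x_b = np.array([62.0, 58.0, 65.0, 55.0])   # background (simulation)
dist = np.abs(np.subtract.outer(np.arange(4), np.arange(4)))
B = 4.0 * np.exp(-dist / 2.0)              # background error covariance
H = np.array([[1.0, 0.0, 0.0, 0.0]])       # observation operator
R = np.array([[1.0]])                      # observation error variance
y = np.array([66.0])                       # measured level at point 0

# BLUE analysis: x_a = x_b + K (y - H x_b), K = B H^T (H B H^T + R)^-1
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_a = x_b + K @ (y - H @ x_b)
```

The observed point is pulled toward the measurement, and the spatial correlations in B spread part of the correction to neighbouring points, which is how local sensors can improve an entire simulated map.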
Contrôle de paramètre en présence d'incertitudes by Victor Trappler( )

1 edition published in 2021 in English and held by 1 WorldCat member library worldwide

To understand and to be able to forecast natural phenomena is increasingly important nowadays, as those predictions are often the basis of many decisions, whether economic or ecological. In order to do so, mathematical models are introduced to represent reality at a specific scale, and are then implemented numerically. In this process of modelling, however, many complex phenomena occurring at a smaller scale than the one studied have to be simplified and quantified. This often leads to the introduction of additional parameters, which then need to be properly estimated. Classical methods of estimation usually involve an objective function measuring the distance between the simulations and some observations, which is then optimised. Such an optimisation requires many runs of the numerical model and possibly the computation of its gradient, and can thus be computationally expensive. However, other uncertainties can also be present, representing uncontrollable and external factors that affect the modelling. These variables will be qualified as environmental. By modelling them with a random variable, the objective function becomes a random variable as well, which we wish to minimise in some sense. Omitting the random nature of the environmental variable can lead to localised optimisation, and thus to a value of the parameters that is optimal only for the fixed nominal value. To overcome this, the minimisation of the expected value of the objective function is often considered, for instance in the field of optimisation under uncertainty. In this thesis, we focus instead on the notion of regret, which measures the deviation of the objective function from its optimal value given a realisation of the environmental variable. This regret (either additive or relative) translates a notion of robustness through its probability of exceeding a specified threshold.
By controlling either the threshold or the probability, we can define a family of estimators based on this regret. The regret can quickly become expensive to evaluate, since it requires an optimisation of the objective for every realisation of the environmental variable. We then propose to use Gaussian Processes (GP) to reduce the computational burden of this evaluation. In addition, we propose a few adaptive methods to improve the estimation: the next points to evaluate are chosen sequentially according to a specific criterion, in a Stepwise Uncertainty Reduction (SUR) strategy. Finally, we apply some of the methods introduced in this thesis to an academic problem of parameter estimation: the calibration of the bottom friction of a model of the Atlantic ocean near the French coasts, with uncertainties introduced in the forcing of the tide, to obtain a robust estimation of this friction parameter in a twin experiment setting
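The additive regret described above can be illustrated on a toy objective. The function J, the environmental distribution and the threshold below are all made up for the sketch; the thesis itself works with expensive simulators and Gaussian process surrogates:

```python
import numpy as np

def J(k, u):
    # Toy objective: the optimal parameter k depends on the environmental u
    return (k - u) ** 2 + 0.1 * k ** 2

rng = np.random.default_rng(2)
us = rng.normal(1.0, 0.3, 2000)            # realisations of the environmental variable
ks = np.linspace(-1.0, 3.0, 201)           # candidate parameter values

Jmat = J(ks[:, None], us[None, :])         # J(k, u) for all pairs
Jstar = Jmat.min(axis=0)                   # conditional optimum for each u
regret = Jmat - Jstar                      # additive regret

# Robust estimator: the parameter whose regret stays below a threshold
# with the highest probability
alpha = 0.05
prob_ok = (regret <= alpha).mean(axis=1)
k_robust = ks[np.argmax(prob_ok)]
```

Here each realisation of u needs its own inner optimisation (the min over k), which is exactly the cost the thesis reduces with GP surrogates and SUR strategies.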
Étude de classes de noyaux adaptées à la simplification et à l'interprétation des modèles d'approximation. Une approche fonctionnelle et probabiliste. by Nicolas Durrande( )

1 edition published in 2011 in French and held by 1 WorldCat member library worldwide

The general theme of this thesis is the construction of models approximating a function f when the value of f(x) is known at a number of points x. The models considered here, often called kriging models, can be approached from two points of view: approximation in reproducing kernel Hilbert spaces, or conditioning of Gaussian processes. When one wishes to model a function depending on around ten variables, the number of points required to build the model becomes very large and the resulting models are difficult to interpret. Starting from this observation, we sought to build simplified models by working on a key object of kriging models: the kernel. More precisely, the following approaches are studied: the use of additive kernels to build additive models, and the decomposition of usual kernels into sub-kernels to build parsimonious models. Finally, we propose a class of kernels that is naturally suited to the ANOVA representation of the associated models and to global sensitivity analysis
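A minimal sketch of kriging with an additive kernel, built as a sum of one-dimensional Gaussian kernels (one per input); the lengthscales and the toy additive ground truth below are made up:

```python
import numpy as np

def rbf(a, b, ell):
    # 1D Gaussian (RBF) kernel matrix between point sets a and b
    return np.exp(-0.5 * np.subtract.outer(a, b) ** 2 / ell ** 2)

def k_add(X, Y):
    # Additive kernel: one 1D kernel per input dimension, summed
    return rbf(X[:, 0], Y[:, 0], 0.5) + rbf(X[:, 1], Y[:, 1], 0.5)

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (30, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2       # additive ground truth

K = k_add(X, X) + 1e-6 * np.eye(len(X))      # jitter for numerical stability
w = np.linalg.solve(K, y)

Xnew = rng.uniform(0, 1, (50, 2))
y_pred = k_add(Xnew, X) @ w                  # zero-mean kriging predictor
```

Because the kernel is additive, the resulting predictor is itself a sum of univariate functions, which is what makes such models easier to interpret than full tensor-product kriging.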
Quantification de radionucléides par approche stochastique globale by Aloïs Clément( )

1 edition published in 2017 in French and held by 1 WorldCat member library worldwide

Among the non-destructive nuclear measurement techniques used by nuclear instrumentation specialists, gamma spectrometry is today a widely used method for identifying and quantifying radionuclides in the management of complex nuclear objects such as radioactive waste, waste drums or glove boxes. The varied, non-reproducible physical and nuclear characteristics of these objects, such as their composition, the distribution of materials, their densities and geometric shapes, or the number and shape of their emitting source terms, make traditional calibration methods unable to yield the activity of a given nuclear material. This thesis proposes a method for quantifying multi-emitter radionuclides that limits, or even eliminates, the use of so-called a priori information derived from expert opinion or from feedback on previous experiments. The method uses, among other tools, metamodelling to build an equivalent gamma detection efficiency of the measurement scene, and inverse-problem resolution by Markov chain Monte Carlo (MCMC), all within a Bayesian probabilistic framework, in order to estimate the probability densities of the variables of interest, such as a radionuclide mass. An experimental validation protocol verifies the robustness of the method for estimating a mass of 239Pu in objects similar to those routinely handled by the laboratory. Perspectives for the method include reducing computation times, financial and human costs by limiting the expert-based approach, and reducing the associated uncertainties
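The Bayesian inversion idea (estimating a mass from gamma counts via MCMC) can be sketched with a one-parameter random-walk Metropolis-Hastings sampler. The forward model, efficiency and background below are entirely hypothetical, stand-ins for the metamodelled detection efficiency of the thesis:

```python
import numpy as np

# Hypothetical forward model: expected counts = efficiency * mass + background
eff, bkg = 50.0, 20.0
true_mass = 3.0
rng = np.random.default_rng(4)
counts = rng.poisson(eff * true_mass + bkg)   # one simulated measurement

def log_post(m):
    # Flat prior on m >= 0, Poisson likelihood (up to a constant)
    if m < 0:
        return -np.inf
    lam = eff * m + bkg
    return counts * np.log(lam) - lam

# Random-walk Metropolis-Hastings
m, chain = 1.0, []
for _ in range(20000):
    prop = m + 0.2 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(m):
        m = prop
    chain.append(m)

post = np.array(chain[5000:])                 # discard burn-in
mass_estimate = post.mean()
```

The retained chain approximates the posterior density of the mass, so the spread of `post` directly quantifies the estimation uncertainty rather than giving a single point value.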
Quantification et méthodes statistiques pour le risque de modèle by Ibrahima Niang( )

1 edition published in 2016 in French and held by 1 WorldCat member library worldwide

In finance, model risk is the risk of loss resulting from using models. It is a complex risk which covers many different situations, especially estimation risk and the risk of model misspecification. This thesis focuses on the model risk inherent in yield and credit curve construction methods, and on the analysis of the consistency of Sobol indices with respect to stochastic orderings of model parameters. It is divided into three chapters. Chapter 1 focuses on the model risk embedded in yield and credit curve construction methods. We analyse in particular the uncertainty associated with the construction of yield curves or credit curves. In this context, we derive arbitrage-free bounds for the discount factor and the survival probability at the most liquid maturities. In Chapter 2, we quantify the impact of parameter risk through global sensitivity analysis and the theory of stochastic orders. We analyse in particular how Sobol indices are transformed following an increase of parameter uncertainty with respect to the dispersive or excess wealth orders. Chapter 3 focuses on the contrast quantile index. We link the latter to the risk measure CTE, and we analyse in which circumstances an increase of parameter uncertainty, in the sense of the dispersive or excess wealth orders, implies an increase of the contrast quantile index. We finally propose an estimation procedure for this index and prove, under some conditions, that our estimator is consistent and asymptotically normal
Sensitivity analysis and graph-based methods for black-box functions with an application to sheet metal forming by Jana Fruth( )

1 edition published in 2015 in English and held by 1 WorldCat member library worldwide

The general field of the thesis is the sensitivity analysis of black-box functions. Sensitivity analysis studies how the variation of the output can be apportioned to the variation of input sources. It is an important tool in the construction, analysis, and optimization of computer experiments. The total interaction index is presented, which can be used for the screening of interactions. Several variance-based estimation methods are suggested, and their properties are analyzed theoretically as well as on simulations. A further chapter concerns sensitivity analysis for models that take functions as input variables and return a scalar value as output. A very economical sequential approach is presented, which not only discovers the sensitivity of those functional variables as a whole but identifies relevant regions in the functional domain. As a third concept, support index functions, functions of sensitivity indices over the input distribution support, are suggested. Finally, all three methods are successfully applied in the sensitivity analysis of sheet metal forming models
Optimisation bayésienne sous contraintes et en grande dimension appliquée à la conception avion avant projet by Rémy Priem( )

1 edition published in 2020 in French and held by 1 WorldCat member library worldwide

Nowadays, preliminary design in aeronautics is based mainly on numerical models bringing together many disciplines aimed at evaluating the performance of the aircraft. These disciplines, such as aerodynamics, structure and propulsion, are interconnected in order to take into account their interactions. This produces a computationally expensive aircraft performance evaluation process: an evaluation can take from thirty seconds for low-fidelity models to several weeks for higher-fidelity models. In addition, because of the multi-disciplinarity of the process and the diversity of the calculation tools, we do not always have access to the properties or the gradient of this performance function. Moreover, each discipline uses its own design variables and must respect equality or inequality constraints, which are often numerous and multi-modal. We ultimately seek to find the best possible configuration in a given design space. This research can be mathematically translated into a black-box optimization problem under inequality and equality constraints, also known as mixed constraints, depending on a large number of design variables. The constraints and the objective function are expensive to evaluate, and their regularity is not known. This is why we are interested in derivative-free optimization methods, and more specifically those based on surrogate models. Bayesian optimization methods, using Gaussian processes, are studied in particular because they have shown rapid convergence on multi-modal problems. The use of evolutionary optimization algorithms or other gradient-based methods is not possible because of the computational cost they imply: too many calls to generate populations of points, or to approximate the gradient by finite differences. However, Bayesian optimization is conventionally used for unconstrained, small-dimensional problems. Extensions have been proposed to partially lift this limitation.
On the one hand, optimization methods have been introduced to solve problems with mixed constraints, but none of them is simultaneously adaptable to large dimensions, multi-modal problems and mixed constraints. On the other hand, non-linear optimization methods have been developed for large dimensions, up to a million design variables. These methods extend only with difficulty to constrained problems, because of the computing time they require or their random character. A first part of this work develops a Bayesian optimization algorithm solving unconstrained optimization problems in large dimensions. It is based on the adaptive learning of a linear subspace, carried out in conjunction with the optimization; this linear subspace is then used to perform the optimization. The method has been tested on academic test cases. A second part deals with the development of a Bayesian optimization algorithm to solve multi-modal optimization problems under mixed constraints. It has been extensively compared to algorithms from the literature on a large battery of academic tests. Finally, the second algorithm was evaluated on two aeronautical test cases. The first is a classic medium-range aircraft configuration with hybrid electric propulsion developed by ONERA and ISAE-Supaero. The second is a classic business aircraft configuration developed at Bombardier Aviation, based on an optimization at two levels of fidelity: a conceptual level and a preliminary level, for which the problem is evaluated in thirty seconds and 25 minutes, respectively. This last study was carried out during an international mobility at Bombardier Aviation in Montreal (CA). The results showed the interest of the implemented method
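The subspace idea behind such methods can be sketched as follows. This shows a fixed random embedding (in the spirit of REMBO) rather than the adaptive subspace learning developed in the thesis, and it uses plain random search instead of Bayesian optimization; the function and dimensions are made up:

```python
import numpy as np

rng = np.random.default_rng(5)
D, d = 100, 2                       # ambient and low (embedding) dimensions

def f(x):
    # High-dimensional function with low effective dimensionality:
    # only the first two of the 100 coordinates matter
    return (x[0] - 0.3) ** 2 + (x[1] + 0.2) ** 2

A = rng.standard_normal((D, d))     # random embedding matrix

def f_low(z):
    # Lift the low-dimensional point to the ambient box and evaluate
    x = np.clip(A @ z, -1.0, 1.0)
    return f(x)

# Any cheap optimizer can now work in dimension d; random search for brevity
best_z, best_val = np.zeros(d), f_low(np.zeros(d))
for _ in range(5000):
    z = rng.uniform(-3, 3, d)
    v = f_low(z)
    if v < best_val:
        best_z, best_val = z, v
```

All the search effort is spent in the 2-dimensional embedded space, even though every evaluation happens in the 100-dimensional design space; this is the mechanism that makes high-dimensional surrogate-based optimization tractable.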
Correspondance entre régression par processus Gaussien et splines d'interpolation sous contraintes linéaires de type inégalité. Théorie et applications. by Hassan Maatouk( )

1 edition published in 2015 in French and held by 1 WorldCat member library worldwide

This thesis is dedicated to interpolation problems when the numerical function is known to satisfy some properties such as positivity, monotonicity or convexity. Two methods of interpolation are studied. The first one is deterministic and is based on convex optimization in a Reproducing Kernel Hilbert Space (RKHS). The second one is a Bayesian approach based on Gaussian Process Regression (GPR), or Kriging. Using a finite linear functional decomposition, we propose to approximate the original Gaussian process by a finite-dimensional Gaussian process such that conditional simulations satisfy all the inequality constraints. As a consequence, GPR is equivalent to the simulation of a Gaussian vector truncated to a convex set. The mode, or Maximum A Posteriori, is defined as a Bayesian estimator, and prediction intervals are quantified by simulation. Convergence of the method is proved, and the correspondence between the two methods is established. This can be seen as an extension of the correspondence established by [Kimeldorf and Wahba, 1971] between Bayesian estimation on stochastic processes and smoothing by splines. Finally, a real application in insurance and finance is given, estimating a term-structure curve and default probabilities
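The equivalence described above (constrained GPR as the simulation of a truncated Gaussian vector) can be sketched with crude rejection sampling on a discretisation grid; realistic implementations use dedicated truncated-normal samplers, and all data below are made up:

```python
import numpy as np

rng = np.random.default_rng(6)

def rbf(a, b, ell=0.4):
    return np.exp(-0.5 * np.subtract.outer(a, b) ** 2 / ell ** 2)

# Noise-free observations of a function known to be positive
X = np.array([0.1, 0.5, 0.9])
y = np.array([1.0, 0.8, 1.2])
grid = np.linspace(0.0, 1.0, 25)

# Zero-mean GP posterior on the grid (finite-dimensional discretisation)
K = rbf(X, X) + 1e-6 * np.eye(len(X))
Ks = rbf(grid, X)
mu = Ks @ np.linalg.solve(K, y)
cov = rbf(grid, grid) - Ks @ np.linalg.solve(K, Ks.T) + 1e-6 * np.eye(len(grid))

# Simulating the truncated Gaussian vector by rejection: keep only
# posterior draws satisfying the positivity constraint everywhere
L = np.linalg.cholesky(cov)
samples = []
for _ in range(20000):
    s = mu + L @ rng.standard_normal(len(grid))
    if np.all(s >= 0.0):
        samples.append(s)
    if len(samples) == 50:
        break
samples = np.array(samples)
```

Every retained sample path interpolates the data and is nonnegative on the grid, so pointwise quantiles of `samples` give prediction intervals that respect the constraint.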
Indices de sensibilité via des méthodes à noyaux pour des problèmes d'optimisation en grande dimension by Adrien Spagnol( )

1 edition published in 2020 in English and held by 1 WorldCat member library worldwide

This thesis treats the optimization under constraints of high-dimensional black-box problems. Common in industrial applications, such problems frequently have an expensive associated cost which makes most off-the-shelf techniques impractical. In order to come back to a tractable setup, the dimension of the problem is often reduced using techniques such as sensitivity analysis. A novel sensitivity index is proposed in this work to distinguish influential and negligible subsets of inputs, in order to obtain a more tractable problem by working solely with the former. Our index, relying on the Hilbert-Schmidt independence criterion, provides an insight into the impact of a variable on the performance of the output or on constraint satisfaction, key information in our setting. Besides assessing which inputs are influential, several strategies are proposed to deal with negligible parameters. Furthermore, expensive industrial applications are often replaced by cheap surrogate models and optimized in a sequential manner. In order to circumvent the limitations due to the high number of parameters, also known as the curse of dimensionality, we introduce an extension of surrogate-based optimization: thanks to the aforementioned new sensitivity indices, influential parameters are detected at each iteration and the optimization is conducted in a reduced space
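A minimal sketch of the Hilbert-Schmidt independence criterion used as a sensitivity score, with Gaussian kernels and the standard biased V-statistic estimator; the data and kernel widths below are made up:

```python
import numpy as np

def gram(x, ell=0.3):
    # Gaussian kernel Gram matrix of a 1D sample
    return np.exp(-0.5 * np.subtract.outer(x, x) ** 2 / ell ** 2)

def hsic(x, y):
    # Biased V-statistic estimator: HSIC = trace(K H L H) / n^2
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = gram(x), gram(y)
    return np.trace(K @ H @ L @ H) / n ** 2

rng = np.random.default_rng(7)
n = 300
x1 = rng.uniform(-1, 1, n)                 # influential input
x2 = rng.uniform(-1, 1, n)                 # negligible input
y = x1 ** 2 + 0.01 * rng.standard_normal(n)

score1, score2 = hsic(x1, y), hsic(x2, y)
```

Note that the dependence of y on x1 here is purely quadratic, with zero linear correlation; the kernel-based score still detects it, which is the advantage of HSIC over correlation-based screening.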
Méta-modélisation et analyse de sensibilité pour les modèles avec sortie spatiale. Application aux modèles de submersion marine. by Tran Vi-Vi Elodie Perrin( )

1 edition published in 2021 in French and held by 1 WorldCat member library worldwide

Motivated by the risk assessment of coastal flooding, the numerical hydrodynamic models of the BRGM and the CCR are considered. Their outputs are flood maps. The aim is to perform a sensitivity analysis (SA) to quantify and hierarchize the influence of the input parameters on the output. The application of functional PCA (FPCA) is proposed to reduce both computation time and spatial output dimension. The output is decomposed on a basis of functions designed to handle local variations, such as wavelets or B-splines. PCA with an ad-hoc metric is applied on the most important coefficients, according to an energy criterion after basis orthonormalization, or on the initial basis with a penalized regression approach. Fast-to-evaluate metamodels (such as Kriging) are built on the first principal components, on which SA can be done. As a by-product, we obtain analytical formulas for variance-based sensitivity indices, generalizing a known formula assuming the orthonormality of basis functions. The whole methodology is applied to an analytical case and two coastal flooding cases. Gains in accuracy and computation time have been obtained. An R package has been developed, which allows sharing the research outputs
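The decompose-then-metamodel idea can be sketched with plain PCA (via SVD) on toy map outputs. The thesis uses wavelet or B-spline bases and kriging metamodels on real flood maps; this sketch uses a made-up one-dimensional simulator and a linear fit on the first component:

```python
import numpy as np

rng = np.random.default_rng(8)
grid = np.linspace(0.0, 1.0, 120)
n_runs = 40

# Hypothetical simulator: each input h yields a spatial profile on the grid
inputs = rng.uniform(0.5, 2.0, n_runs)
maps = np.array([h * np.exp(-5 * grid) + 0.05 * np.sin(10 * grid) for h in inputs])

# PCA of the output maps via a centered SVD
mean_map = maps.mean(axis=0)
U, S, Vt = np.linalg.svd(maps - mean_map, full_matrices=False)
scores = U * S                           # principal-component coefficients

# Cheap metamodel on the first component: here a simple linear fit
coef = np.polyfit(inputs, scores[:, 0], 1)

# Predict the full map for a new, unseen input value
h_new = 1.2
pred_map = mean_map + np.polyval(coef, h_new) * Vt[0]
```

Sensitivity analysis can then be run on the low-dimensional scores instead of the full maps, which is what makes variance-based indices affordable for spatial outputs.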
 
Audience level: 0.94 (from 0.87 for Quantifica ... to 1.00 for Produits d ...)

Alternative Names
Roustant, Olivier Maurice Antoine

Roustant, Olivier Maurice Antoine 1973-...

Languages
French (11)

English (7)