WorldCat Identities

Laboratoire d'informatique de l'Institut Gaspard Monge

Overview
Works: 128 works in 128 publications in 2 languages and 233 library holdings
Roles: Degree grantor
Most widely held works by Laboratoire d'informatique de l'Institut Gaspard Monge
Propriétés syntaxico-sémantiques des verbes à complément en -e en coréen by So-Yun Kim

1 edition published in 2010 in French and held by 2 WorldCat member libraries worldwide

This study provides a general classification of verbal constructions and a syntactico-semantic description of the Korean verbs with essential complements introduced by the postposition -e. The theoretical model of this study is the Lexicon-Grammar of M. Gross (1975), which is based on the theory of Z. S. Harris (1968). We examined the syntactico-semantic features of the 3,000 verbs requiring this complement and classified them into 8 classes. These syntactico-semantic features constitute syntactic information that supports sentence-structure analysis and automatic text retrieval in Korean.
Autour des automates : génération aléatoire et contribution à quelques extensions by Vincent Carnino

1 edition published in 2014 in French and held by 2 WorldCat member libraries worldwide

The subject of this thesis is divided into three parts: two of them are about extensions of the classical model in automata theory, whereas the third one is about a more concrete aspect, randomly generating automata with specific properties. We first give an extension of the universal automaton on finite words to infinite words. To achieve this, we define a normal form in order to take into account the specific acceptance mode of Büchi automata, which recognize omega-languages. Then we define two kinds of omega-factorizations, a "regular" one and a "pure" one, which are both extensions of the classical concept of factorization of a language. This lets us define the universal automaton of an omega-language. We prove that it has all the required properties: it is the smallest Büchi automaton, in normal form, that recognizes the omega-language and has the universal property. We also give an effective way to compute the "regular" omega-factorizations of a language using a prophetic automaton recognizing the language. In the second part, we deal with two-way automata weighted over a semiring. First, we give a slightly different version of the computation of a weighted one-way automaton from a weighted two-way automaton and we prove that it preserves non-ambiguity but not determinism. We prove that non-ambiguous weighted two-way automata are equivalent to deterministic weighted one-way automata. We then focus on tropical (min,+) semirings. We prove that two-way automata over N-min-+ are equivalent to one-way automata over N-min-+. We also prove that the behavior of two-way automata over Z-min-+ is not always defined, that this property is decidable, and that it is nevertheless undecidable whether there exists a word on which the behavior is defined. In the last part, we propose algorithms to randomly generate (i) acyclic, accessible and deterministic automata and (ii) minimal acyclic automata, with an almost uniform distribution, using Markov chains. We prove the reliability of both algorithms and explain how to adapt them to constraints on the set of final states.
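
To make the tropical setting concrete, here is a minimal Python sketch (our illustration, not code from the thesis) of evaluating a one-way automaton weighted over the N-min-+ semiring: weights along a path are added, and competing paths are combined by taking the minimum.

```python
INF = float("inf")

def tropical_eval(word, initial, final, trans):
    """Evaluate a (min,+)-weighted one-way automaton on a word.
    initial/final: {state: weight}; trans: {(state, letter): [(next_state, weight)]}."""
    current = dict(initial)  # least weight of a path reading the prefix so far
    for letter in word:
        nxt = {}
        for q, w in current.items():
            for q2, w2 in trans.get((q, letter), []):
                nxt[q2] = min(nxt.get(q2, INF), w + w2)
        current = nxt
    return min((w + final[q] for q, w in current.items() if q in final), default=INF)

# Toy automaton assigning weight 1 to each 'b' read:
trans = {(0, "a"): [(0, 0)], (0, "b"): [(0, 1)]}
print(tropical_eval("abba", {0: 0}, {0: 0}, trans))  # -> 2
```
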
Robust, refined and selective matching for accurate camera pose estimation by Zhe Liu

1 edition published in 2015 in English and held by 2 WorldCat member libraries worldwide

With the recent progress in photogrammetry, it is now possible to automatically reconstruct a model of a 3D scene from pictures or videos. The model is reconstructed in several stages. First, salient features (often points, but more generally regions) are detected in each image. Second, the features common to pairs of images are matched. Third, matched features are used to estimate the relative pose (position and orientation) of image pairs. The global poses are then computed, as well as the 3D locations of these features (structure from motion). Finally, a dense 3D model can be estimated. The detection of salient features, their matching, and the estimation of camera poses play a crucial role in the reconstruction process. Inaccuracies or errors in these stages have a major impact on the accuracy and robustness of the reconstruction of the entire scene. In this thesis, we propose better methods for feature matching and feature selection, which improve the robustness and accuracy of existing methods for camera position estimation. We first introduce a photometric pairwise constraint for feature matches (VLD), which is more reliable than geometric constraints. We then propose a semi-local matching approach (K-VLD) using this photometric match constraint. We show that our method is very robust, not only for rigid scenes but also for non-rigid and repetitive scenes, and that it can improve the robustness and accuracy of pose estimation methods such as those based on RANSAC. To improve the accuracy of camera position estimation, we study the accuracy of reconstruction and pose estimation as a function of the number and quality of matches, and experimentally derive a "quantity vs. quality" relation. Using this relation, we propose a method to select a subset of good matches that produces highly accurate pose estimates. We also aim at refining match positions. For this, we propose an improvement of least-squares matching (LSM) using an irregular sampling grid and image-scale exploration. We show that match refinement and match selection independently improve the reconstruction results, and that the results improve further when they are combined.
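
As an illustration of the robust-estimation baseline the thesis builds on, here is a generic RANSAC loop in Python; the `fit` and `error` callables are placeholders for whatever model (pose, homography, line) is being estimated.

```python
import random

def ransac(matches, fit, error, sample_size, threshold, iters=1000):
    """Generic RANSAC: fit a model on random minimal samples and keep
    the one supported by the largest inlier set, then refit on it."""
    best_inliers = []
    for _ in range(iters):
        sample = random.sample(matches, sample_size)
        model = fit(sample)
        inliers = [m for m in matches if error(model, m) < threshold]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return (fit(best_inliers), best_inliers) if best_inliers else (None, [])
```

Match selection as studied in the thesis amounts to choosing which matches enter such a loop, trading quantity against quality.
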
Quelques contributions à la sélection de variables et aux tests non-paramétriques by Laëtitia Comminges

1 edition published in 2012 in French and held by 2 WorldCat member libraries worldwide

Real-world data are often extremely high-dimensional, severely underconstrained and interspersed with a large number of irrelevant or redundant features. Relevant variable selection is a compelling approach for addressing statistical issues in the scenario of high-dimensional, noisy data with small sample size. First, we address the issue of variable selection in the regression model when the number of variables is very large. The main focus is on the situation where the number of relevant variables is much smaller than the ambient dimension. Without assuming any parametric form of the underlying regression function, we obtain tight conditions making it possible to consistently estimate the set of relevant variables. Second, we consider the problem of testing a particular type of composite null hypothesis under a nonparametric multivariate regression model. For a given quadratic functional $Q$, the null hypothesis states that the regression function $f$ satisfies the constraint $Q[f] = 0$, while the alternative corresponds to the functions for which $Q[f]$ is bounded away from zero. We provide minimax rates of testing and the exact separation constants, along with a sharp-optimal testing procedure, for diagonal and nonnegative quadratic functionals. This can be applied to testing the relevance of a variable. Studying minimax rates for quadratic functionals which are neither positive nor negative reveals two different regimes: "regular" and "irregular". We apply this to the problem of testing the equality of the norms of two functions observed in noisy environments.
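
For intuition, here is a minimal sketch (our illustration, not the thesis's sharp-optimal procedure) of testing a diagonal nonnegative quadratic functional in the Gaussian sequence model y_i = theta_i + sigma * xi_i, using the unbiased estimator of Q[theta] and a crude variance-based threshold:

```python
import numpy as np

def test_diagonal_quadratic(y, q, sigma, c=3.0):
    """Reject H0: Q[theta] = sum_i q_i * theta_i^2 = 0 when the unbiased
    estimate sum_i q_i * (y_i^2 - sigma^2) exceeds a multiple of its
    standard deviation under H0 (illustrative threshold only)."""
    stat = np.sum(q * (y ** 2 - sigma ** 2))
    # Under H0 (theta = 0), Var(stat) = 2 * sigma^4 * sum_i q_i^2.
    return stat > c * sigma ** 2 * np.sqrt(2.0 * np.sum(q ** 2))
```
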
Algèbres de Hopf combinatoires by Rémi Maurice

1 edition published in 2013 in French and held by 2 WorldCat member libraries worldwide

This thesis is in the field of algebraic combinatorics. In other words, the idea is to use algebraic structures, in this case combinatorial Hopf algebras, to better study and understand combinatorial objects and the algorithms for composing and decomposing these objects. This research is based on the construction and study of the algebraic structure of combinatorial objects generalizing permutations. After recalling the background and notation for the various objects involved, we propose, in the second part, a study of the Hopf algebra introduced by Aguiar and Orellana, based on uniform block permutations. By describing these objects in terms of well-known ones, permutations and set partitions, we propose a polynomial realization and an easier study of this algebra. The third part considers a second generalization, interpreting permutations as matrices. We define and then study families of square matrices on which we define algorithms for composition and decomposition. The fourth part deals with alternating sign matrices. Having defined the Hopf algebra of these matrices, we study statistics on them and the behavior of the algebraic structure with respect to these statistics. All these chapters rely heavily on computer exploration and are the subject of an implementation using the Sage software. The last chapter is dedicated to the discovery and manipulation of algebraic structures in Sage. We conclude by explaining the improvements we contributed to the study of algebraic structures through the Sage software.
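
As a taste of what a combinatorial Hopf algebra product looks like in code, here is a Python sketch of the product of FQSym, the Hopf algebra of permutations that uniform block permutations generalize (our illustration, not code from the thesis):

```python
def shuffle(u, v):
    """All interleavings of two disjoint sequences."""
    if not u: return [list(v)]
    if not v: return [list(u)]
    return [[u[0]] + w for w in shuffle(u[1:], v)] + \
           [[v[0]] + w for w in shuffle(u, v[1:])]

def fqsym_product(p, q):
    """Product in FQSym: shift q by len(p), then shuffle with p.
    The result lists the support of a formal sum of permutations."""
    return shuffle(p, [x + len(p) for x in q])

# F_12 * F_1 = F_123 + F_132 + F_312
print(fqsym_product([1, 2], [1]))
```
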
Architecture et protocoles applicatifs pour la chorégraphie de services dans l'Internet des objets by Sylvain Cherrier

1 edition published in 2013 in French and held by 2 WorldCat member libraries worldwide

The challenges raised by the Internet of Things match the scale of the transformations this technology may bring to our everyday relationship with our environment. Our own objects, and billions of others, will have data-processing and network-connection capabilities that are limited but real. These objects will thus acquire a digital dimension and become accessible in an entirely new way. This is not merely the promise of a novel way of accessing an object, but the advent of a new way of perceiving and interacting with what surrounds us. Ubiquitous-computing applications will mostly rely on interactions between objects, and the sum of their actions and reactions will provide real added value. But the heterogeneity of the hardware components and of the networks they use considerably hinders the rise of the Internet of Things. The goal of this thesis is to propose an effective solution and the framework needed to build such applications. After showing the relevance of choreographed solutions and quantifying the gain obtained on tree-shaped communication structures, we present our framework, D-LITe, which views each object as a service provider. Thanks to its REST approach, which ensures interoperability across the assortment of components and networks of the Internet of Things, the D-LITe framework, hosted on each object (and adapted to its constraints), provides remote control, both for dynamic reprogramming and for exchanges with partner objects. We then present SALT, the programming language understood by D-LITe, based on finite-state transducers. Besides expressiveness extended to the particularities of the domain, SALT gives access to the object's functionalities through a hardware abstraction layer. Finally, building on the standardization that D-LITe offers for programming each individual component, a composition solution, BeC3, provides an efficient way to build a complete application by assembling distributed behaviours, while preserving the consistency of the interactions between objects, through an abstraction and a model of the exchanges. By solving the problems encountered at each of these levels, we are thus able to present a simple, coherent and functional solution for building robust Internet of Things applications effectively.
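
To fix ideas, here is a minimal Python sketch of the finite-state-transducer model that SALT programs describe; the states, events and outputs below are hypothetical, and this is not the actual SALT syntax:

```python
class Transducer:
    """A finite-state transducer: on (state, event), emit an output and
    move to a new state; unknown events leave the state unchanged."""
    def __init__(self, start, rules):
        self.state = start
        self.rules = rules  # {(state, event): (next_state, output)}

    def feed(self, event):
        self.state, output = self.rules.get((self.state, event),
                                            (self.state, None))
        return output

# A hypothetical connected light switch exposed as a D-LITe-style service:
switch = Transducer("off", {
    ("off", "toggle"): ("on", "light on"),
    ("on", "toggle"): ("off", "light off"),
})
print(switch.feed("toggle"), switch.feed("toggle"))  # -> light on light off
```
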
Analyse de signaux et d'images par bancs de filtres : applications aux géosciences by Jérôme Gauthier

1 edition published in 2008 in French and held by 2 WorldCat member libraries worldwide

Our main purpose in this PhD thesis is to perform local frequential (or directional) processing of different kinds of data (volumes, images or signals). To this end, filter banks (FBs) are studied. More precisely, we first investigate the existence and construction of synthesis FBs inverse to a given FIR complex analysis FB. Through the study of the polyphase analysis matrix, we propose methods to test invertibility and to build one inverse FB. Using this inverse, we provide a parametrization of the set of synthesis FBs, with which we optimize filter responses under different criteria. The same study is performed in the multidimensional case. Since FBs provide an efficient representation of structured information in data, it is then possible to preserve this information while rejecting unwanted perturbations. By associating Stein's principle with these FBs, we propose two methods to denoise signals and images corrupted by Gaussian noise. These methods, named FB-SURELET-E and FB-SURELET-C, are compared to recent denoising methods and are found to offer good results, especially for textured images. Another type of application is then investigated: the separation of oriented structures. To this end, we have developed an anisotropic filtering method. The proposed methods are finally applied to images and signals from various fields: seismic images and cubes, transmission electron microscopy (TEM) images of catalysts, and vibration signals from car engines.
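
For the two-channel FIR case, the invertibility test on the polyphase matrix reduces to a determinant condition; the following Python sketch (our illustration of the standard one-dimensional criterion, not the thesis's multidimensional machinery) checks that the determinant is a monomial, which is necessary and sufficient for an FIR inverse to exist:

```python
import numpy as np

def fir_invertible(h0, h1, tol=1e-12):
    """A 2-channel FIR analysis bank (h0, h1) admits an FIR synthesis
    inverse iff the determinant of its polyphase matrix,
    E00*E11 - E01*E10, is a monomial c * z^(-k) with c != 0."""
    a = np.polymul(h0[0::2], h1[1::2])   # E00 * E11 (even/odd phases)
    b = np.polymul(h0[1::2], h1[0::2])   # E01 * E10
    n = max(len(a), len(b))
    det = np.pad(a, (0, n - len(a))) - np.pad(b, (0, n - len(b)))
    return np.count_nonzero(np.abs(det) > tol) == 1

print(fir_invertible([1, 1], [1, -1]))  # Haar bank -> True
```
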
The structure of orders in the pushdown hierarchy by Laurent Braud

1 edition published in 2010 in English and held by 2 WorldCat member libraries worldwide

This thesis studies structures whose monadic second-order theory is decidable, and in particular the pushdown hierarchy. The latter can be defined as the hierarchy, for increasing $n$, of the graphs of automata with $n$-fold nested stacks; an external definition, by graph transformations, is also available. We focus on the example of ordinals. We show that the ordinals smaller than $\epsilon_0$ are in the hierarchy, as well as graphs carrying more information, which we call "covering graphs". We then show the converse: all the ordinals in the hierarchy are smaller than $\epsilon_0$. This result uses the fact that the orders at a given level are in fact isomorphic to the structures formed by the leaves of deterministic trees of the same level, taken in lexicographic order. More generally, we obtain a characterization of the scattered linear orders in the hierarchy. Finally, we narrow our attention to orders of type $\omega$, that is, infinite words, and show that the words at level 2 are exactly the morphic words, which leads us to a new extension at level 3.
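
Concretely, every ordinal below $\epsilon_0$ admits a finite description via its Cantor normal form, which is what makes finite presentations of these orders possible in the first place; here is a small comparison routine for that encoding (our illustration, unrelated to the thesis's automata constructions):

```python
def cmp_cnf(a, b):
    """Compare ordinals below epsilon_0 in Cantor normal form, encoded
    as lists of exponents in non-increasing order: [e1, e2, ...] denotes
    omega^e1 + omega^e2 + ..., each exponent being itself such a list
    ([] denotes 0). Returns -1, 0 or 1."""
    for ea, eb in zip(a, b):
        c = cmp_cnf(ea, eb)
        if c != 0:
            return c
    return (len(a) > len(b)) - (len(a) < len(b))

# 0 = [], 1 = [[]], 2 = [[], []], omega = [[[]]], omega^omega = [[[[]]]]
print(cmp_cnf([[[]]], [[], [], []]))  # omega vs 3 -> 1 (omega is larger)
```
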
Sûreté temporelle pour les systèmes temps réel multiprocesseurs by Frédéric Fauberteau

1 edition published in 2011 in French and held by 2 WorldCat member libraries worldwide

Hard real-time systems are characterized by sets of tasks for which the deadline, the arrival model (frequency) and the worst-case execution time (WCET) are known. We focus on the scheduling of these systems on multiprocessor platforms. One of the main issues is to ensure that all deadlines are met. We go further by focusing on temporal safety, which we characterize by the properties of (i) robustness and (ii) sustainability. Robustness consists in providing an interval on the increases of (i-a) WCET and (i-b) frequency such that the deadlines are still met. Sustainability consists in ensuring that no deadline is missed when the following constraints are relaxed: (ii-a) WCET (decreased), (ii-b) frequency (decreased) and (ii-c) deadline (increased). Robustness amounts to tolerating unexpected behaviors, while sustainability is the guarantee that the scheduling algorithm does not suffer from anomalies when constraints are relaxed. We consider fixed-priority scheduling, in which every job of a task is scheduled with the same priority. First, we study the robustness property for off-line scheduling approaches without migration (partitioning), dealing with tasks with or without shared resources. Second, we study the sustainability property of an online restricted-migration scheduling approach without shared resources.
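
Partitioned approaches reduce the multiprocessor problem to per-processor analyses; as background, here is the classical uniprocessor response-time test for fixed-priority tasks, in a minimal Python sketch (robustness margins of the kind studied in the thesis are computed on top of analyses like this one):

```python
import math

def response_time(i, C, T):
    """Worst-case response time of task i under preemptive fixed-priority
    scheduling on one processor, tasks indexed by decreasing priority,
    with WCETs C and periods T: iterate R = C_i + sum_j ceil(R/T_j)*C_j
    over the higher-priority tasks j until a fixed point is reached."""
    R = C[i]
    while True:
        nxt = C[i] + sum(math.ceil(R / T[j]) * C[j] for j in range(i))
        if nxt == R:
            return R  # task i is schedulable if R <= its deadline
        R = nxt

# Two tasks: (C=1, T=4) at high priority, (C=2, T=6) below it.
print(response_time(1, [1, 2], [4, 6]))  # -> 3
```
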
Deep learning on attributed graphs by Martin Simonovsky

1 edition published in 2018 in English and held by 2 WorldCat member libraries worldwide

Graphs are a powerful concept for representing relations between pairs of entities. Data with an underlying graph structure can be found across many disciplines, describing chemical compounds, surfaces of three-dimensional models, social interactions, or knowledge bases, to name only a few. There is a natural desire to understand such data better. Deep learning (DL) has achieved significant breakthroughs in a variety of machine learning tasks in recent years, especially where data is structured on a grid, such as in text, speech, or image understanding. However, surprisingly little has been done to explore the applicability of DL to graph-structured data directly. The goal of this thesis is to investigate architectures for DL on graphs and to study how to transfer, adapt or generalize to this domain concepts that work well on sequential and image data. We concentrate on two important primitives: embedding graphs or their nodes into a continuous vector space representation (encoding) and, conversely, generating graphs from such vectors back (decoding). To that end, we make the following contributions. First, we introduce Edge-Conditioned Convolutions (ECC), a convolution-like operation on graphs performed in the spatial domain, where filters are dynamically generated based on edge attributes. The method is used to encode graphs with arbitrary and varying structure. Second, we propose SuperPoint Graph, an intermediate point cloud representation with rich edge attributes encoding the contextual relationships between object parts. Based on this representation, ECC is employed to segment large-scale point clouds without major sacrifice of fine details. Third, we present GraphVAE, a graph generator that decodes graphs with a variable but upper-bounded number of nodes, using approximate graph matching to align the predictions of an autoencoder with its inputs. The method is applied to the task of molecule generation.
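
A minimal numpy sketch of the ECC idea follows; the `weight_net` that maps an edge attribute to a filter matrix is a toy stand-in for the small neural network used in practice:

```python
import numpy as np

def ecc_layer(X, edges, edge_attr, weight_net):
    """Edge-Conditioned Convolution, simplified: node v averages the
    messages weight_net(a) @ X[u] over its incoming edges (u, v) with
    attribute a, so the filter depends on the edge, not on fixed node
    positions as in grid convolutions."""
    out, count = np.zeros_like(X), np.zeros(len(X))
    for (u, v), a in zip(edges, edge_attr):
        out[v] += weight_net(a) @ X[u]
        count[v] += 1
    return out / np.maximum(count, 1)[:, None]

# Toy filter generator: scale the identity by the edge attribute.
weight_net = lambda a: float(a) * np.eye(2)
X = np.array([[1.0, 0.0], [0.0, 1.0]])
print(ecc_layer(X, [(0, 1), (1, 0)], [2.0, 0.5], weight_net))
```
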
Méthodes proximales pour la résolution de problèmes inverses : application à la tomographie par émission de positrons by Nelly Pustelnik

1 edition published in 2010 in French and held by 2 WorldCat member libraries worldwide

The objective of this work is to propose reliable, efficient and fast methods for minimizing the convex criteria that are found in inverse problems in imaging. We focus on restoration/reconstruction problems where data is degraded by both a linear operator and noise, the latter not necessarily assumed to be additive. The reliability of the methods is ensured through the use of proximal algorithms, whose convergence is guaranteed when a convex criterion is considered. Efficiency is sought through the choice of criteria adapted to the noise characteristics, the linear operators and the image specificities. Of particular interest are regularization terms based on total variation and/or the sparsity of signal frame coefficients. As a consequence of the use of frames, two approaches are investigated, depending on whether the analysis or the synthesis formulation is chosen. Fast processing requirements lead us to consider proximal algorithms with a parallel structure. Theoretical results are illustrated on several large-size inverse problems arising in image restoration, stereoscopy, multi-spectral imagery and decomposition into texture and geometry components. We focus on a particular application, namely Positron Emission Tomography (PET), which is particularly difficult because of the presence of a projection operator combined with Poisson noise, leading to highly corrupted data. To optimize the quality of the reconstruction, we make use of the spatio-temporal characteristics of brain tissue activity.
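
The simplest member of this algorithm family is the forward-backward (proximal gradient) iteration; here is a sketch for the synthesis formulation with an l1 penalty, where the proximity operator is soft-thresholding (an illustrative baseline for the additive Gaussian case, not the PET/Poisson setting of the thesis):

```python
import numpy as np

def forward_backward(A, y, lam, step, iters=200):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1: a gradient step on the
    smooth data term, then the prox of the l1 norm (soft-thresholding).
    Converges for step < 2 / ||A||^2."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))                        # forward step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # backward (prox) step
    return x
```
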
Analysis & design of control for distributed embedded systems under communication constraints by Kumar Roy Prateep

1 edition published in 2009 in English and held by 2 WorldCat member libraries worldwide

Networked Embedded Control Systems (NECS) use communication networks in their feedback loops. Since embedded systems have limited battery power, bandwidth and computing power, the feedback data rates are limited, and the rate of communication can drastically affect system stability. Hence, there is a strong need to understand and merge control theory with communication and information theory. The data-rate constraint introduces quantization into the feedback loop, whereas the communication or computational model induces discrete events which are no longer periodic. These two phenomena give NECS a twofold nature, continuous and discrete, and render them specific. In this thesis we analyze the stability and performance of NECS from an information-theoretic point of view. For linear systems, we show how fundamental the trade-offs are between the communication rate and control goals such as stability, controllability/observability and performance. An integrated approach to the control and communication (in terms of the Shannon information rate) of NECS, or distributed embedded control systems, is studied. The main results are as follows. We show that the entropy reduction, which is the same as the uncertainty reduction, depends only on the controllability Gramian, and that it is related to Shannon mutual information. We demonstrate that the controllability Gramian constitutes an information-theoretic entropy metric with respect to the noise induced by quantization; reducing this noise is equivalent to design methods that reduce the norm of the controllability Gramian. We establish a new relation between the Fisher Information Matrix (FIM) and the controllability Gramian (CG), based on estimation-theoretic and information-theoretic arguments. Finally, we propose an algorithm which optimally distributes the network capacity among a number "n" of competing actuators; the metric of this distribution is the controllability Gramian.
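
For reference, the controllability Gramian these results revolve around is cheap to compute; a minimal sketch for a discrete-time linear system, using SciPy's Lyapunov solver (example matrices are our own toy values):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# For x[k+1] = A x[k] + B u[k] with stable A, the controllability
# Gramian Wc = sum_k A^k B B' (A^k)' solves A Wc A' - Wc + B B' = 0.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Wc = solve_discrete_lyapunov(A, B @ B.T)
# log det Wc measures the volume of the reachable ellipsoid, the kind of
# entropy-like quantity that the Gramian-based results above refer to.
print(np.linalg.slogdet(Wc))
```
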
Filtering of thin objects : applications to vascular image analysis by Olena Tankyevych

1 edition published in 2010 in English and held by 2 WorldCat member libraries worldwide

The motivation of this work is the filtering of elongated curvilinear objects in digital images. Their narrowness makes them difficult to detect, and they are prone to disconnections due to noise, image acquisition artefacts and occlusions by other objects. This work focuses on thin object detection and linkage. For these purposes, a hybrid filtering method combining second-order derivatives and morphological linear filters is proposed within the framework of scale-space theory. The theory of spatially-variant morphological filters is discussed and efficient algorithms are presented. From the application point of view, our work is motivated by the diagnosis, treatment planning and follow-up of vascular diseases. The first application is aimed at the assessment of arteriovenous malformations (AVM) of the cerebral vasculature. The small size and complexity of the vascular structures, together with noise, image acquisition artefacts and blood signal heterogeneity, make the analysis of such data a challenging task. This part of the work focuses on cerebral angiographic image enhancement, segmentation and vascular network analysis, with the final purpose of assisting the study of cerebral AVM. The second medical application concerns the processing of low-dose X-ray images used in interventional radiology therapies observing the insertion of guide-wires into the vascular system of patients. Such procedures are used in aneurysm treatment, tumour embolization and other clinical procedures. Due to the low signal-to-noise ratio of such data, guide-wire detection is needed for their visualization and reconstruction. Here, we compare the performance of several line detection algorithms, with the purpose of selecting the most promising methods for this medical application.
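
As an example of the second-order derivative ingredient, here is a compact 2D Hessian-based line filter in the spirit of Frangi et al. (an illustration of the general technique, not the hybrid filter proposed in the thesis; parameter values are conventional defaults):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness2d(img, sigma, beta=0.5, c=15.0):
    """At scale sigma, a bright curvilinear structure has one strongly
    negative Hessian eigenvalue and one near zero; score each pixel
    accordingly."""
    Hxx = gaussian_filter(img, sigma, order=(0, 2))
    Hyy = gaussian_filter(img, sigma, order=(2, 0))
    Hxy = gaussian_filter(img, sigma, order=(1, 1))
    tmp = np.sqrt((Hxx - Hyy) ** 2 + 4 * Hxy ** 2)
    l1, l2 = (Hxx + Hyy + tmp) / 2, (Hxx + Hyy - tmp) / 2
    swap = np.abs(l1) > np.abs(l2)            # sort so that |l1| <= |l2|
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)    # anisotropy (line vs blob)
    s = np.sqrt(l1 ** 2 + l2 ** 2)            # second-order strength
    v = np.exp(-rb**2 / (2 * beta**2)) * (1 - np.exp(-s**2 / (2 * c**2)))
    return np.where(l2 < 0, v, 0.0)           # keep bright ridges only
```
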
Lexique-grammaire et Unitex : analyse sur deux corpus comparables de médecine thermale : quels apports pour une description terminologique bilingue de qualité ? by Rosa Cetro

1 edition published in 2013 in French and held by 2 WorldCat member libraries worldwide

Terminology is the science concerned with the study of terms, those lexical units that possess a specialized meaning within a scientific or technical context. Established as a science in the first half of the 20th century, terminology is an interdisciplinary field drawing on contributions from linguistics, logic, and computer science. The latter in particular has enabled significant developments in terminology. Lexicon-grammar is an empirical method of linguistic description inspired by the works of Zellig S. Harris and founded by the French linguist Maurice Gross at the end of the 1960s. Linguistic description has been carried out in parallel with the development of software tools able to formalize and exploit linguistic data, including the software Unitex (Paumier, 2002). Both lexicon-grammar and Unitex have an interesting, largely unexploited potential for further developments in terminology. In this work, we assess the contributions of lexicon-grammar and Unitex to a high-quality bilingual terminological description. After defining quality criteria for such a terminological description, we carry out our evaluation on two comparable corpora specific to thermal medicine, in French and in Italian.
Compression guidée par automate et noyaux rationnels by Ahmed Amarni

1 edition published in 2015 in French and held by 2 WorldCat member libraries worldwide

With the expansion of data, compression algorithms have become crucial. We address the problem of finding compression algorithms that are optimal with respect to a given Markov source. To this end, we extend the classical Huffman algorithm. We first apply Huffman locally to each state of the Markov source and quantify the efficiency obtained by this algorithm. To get closer to optimal efficiency, we then give another algorithm, still applied locally to each state of the Markov source, but this time encoding the factors leaving each state in such a way that the probability of a factor is a power of 1/2 (recall that the Huffman algorithm is optimal if and only if all the symbols to be encoded have probabilities that are powers of 1/2). As a perspective, we give a further algorithm (restricted to compression of the star) to encode a multiplicity expression, the longer-term goal being to encode a complete expression.
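
For reference, here is the classical Huffman construction that the thesis applies locally, once per state of the Markov source (a standard sketch, with a FIFO tie-breaker to keep heap entries comparable):

```python
import heapq

def huffman(probs):
    """Build a Huffman code for {symbol: probability}; optimal exactly
    when all probabilities are powers of 1/2, which is what the second
    algorithm of the thesis engineers via factor coding."""
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(sorted(probs.items()))]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, tick, merged))
        tick += 1
    return heap[0][2]

# One state of a Markov source emitting a, b, c:
print(huffman({"a": 0.5, "b": 0.25, "c": 0.25}))  # {'a': '0', 'b': '10', 'c': '11'}
```
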
Intégration des évènements non périodiques dans les systèmes temps réel : application à la gestion des évènements dans la spécification temps réel pour Java by Damien Masson

1 edition published in 2008 in French and held by 2 WorldCat member libraries worldwide

Real-time systems are computer systems made up of tasks with associated timing constraints, called deadlines. In our study, we distinguish two families of tasks: hard real-time tasks and soft real-time tasks. The former have a strict deadline that they must always meet. They are periodic or sporadic in nature, and the analytical study of their behaviour is the subject of a substantial state of the art. The latter are aperiodic in nature: no hypothesis on their arrival model or their number is possible. No guarantee can be given on their behaviour, since overload situations, where the computational demand may exceed the system's capacity, cannot be ruled out. The problem then becomes the study of mixed scheduling solutions for periodic and aperiodic tasks that minimize the response times of the aperiodic tasks while guaranteeing the deadlines of the periodic tasks. Many solutions have been proposed over the last twenty years. They include solutions based on resource reservation, the task servers, and solutions exploiting the idle times of the system, such as slack-stealing algorithms. The Real-Time Specification for Java (RTSJ) appeared in the 2000s. While this standard addresses many problems related to memory management and the scheduling of periodic tasks, the mixed scheduling of periodic and aperiodic tasks is not covered. In this thesis we propose the modifications needed to the main mixed scheduling algorithms, the Polling Server (PS), the Deferrable Server (DS) and the Dynamic Approximate Slack Stealer (DASS), with a view to implementing them with the RTSJ. Indeed, these algorithms cannot be implemented directly as described, because they are too closely tied to the system scheduler. We propose extensions to the existing RTSJ APIs to facilitate the implementation of these modified mechanisms, and we provide the interfaces needed to add other algorithmic solutions. We also propose modifications to the existing RTSJ APIs to address the problems of integrating and implementing feasibility-analysis algorithms. Finally, we propose a slack-estimation algorithm, the Minimal Approximate Slack Stealer (MASS), whose user-level implementation allows its integration into the RTSJ.
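
To illustrate the server principle that PS, DS and DASS refine, here is a small time-slotted simulation of a Polling Server in Python (a didactic sketch, not the RTSJ implementation discussed in the thesis):

```python
def polling_server(budget, period, arrivals, horizon):
    """Serve aperiodic work with a budget replenished every period; if
    no work is pending when the server runs, the budget is lost (the
    Deferrable Server relaxes exactly this point).
    arrivals: {time: units of work}. Returns {arrival_time: finish_time}."""
    pending, finished, remaining = [], {}, 0
    for t in range(horizon):
        if t % period == 0:
            remaining = budget                  # budget replenishment
        if t in arrivals:
            pending.append([arrivals[t], t])    # [remaining work, arrival]
        if remaining > 0:
            if pending:
                pending[0][0] -= 1
                remaining -= 1
                if pending[0][0] == 0:
                    finished[pending.pop(0)[1]] = t + 1
            else:
                remaining = 0                   # polling: idle budget lost
    return finished

# Server (budget 1, period 4); 2 units of aperiodic work arrive at t = 5.
print(polling_server(1, 4, {5: 2}, 20))  # -> {5: 13}
```
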
Random matrices and applications to statistical signal processing by Pascal Vallet

1 edition published in 2011 in English and held by 2 WorldCat member libraries worldwide

In this thesis, we consider the problem of source localization in large sensor networks, when the number of antennas of the network and the number of samples of the observed signal are large and of the same order of magnitude. We also consider the case where the source signals are deterministic, and we develop an improved algorithm for source localization based on the MUSIC method. For this, we first show new results concerning the position of the eigenvalues of large information-plus-noise complex Gaussian random matrices.
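
For context, the classical MUSIC pseudo-spectrum that the improved estimator builds on can be sketched in a few lines of numpy (a uniform linear array with half-wavelength spacing is assumed):

```python
import numpy as np

def music_spectrum(R, n_sources, angles):
    """Classical MUSIC: project steering vectors onto the noise subspace
    of the covariance matrix R; directions where the projection collapses
    appear as peaks of the pseudo-spectrum."""
    m = R.shape[0]
    _, vecs = np.linalg.eigh(R)        # eigenvalues in ascending order
    noise = vecs[:, : m - n_sources]   # noise subspace
    spectrum = []
    for theta in angles:
        a = np.exp(1j * np.pi * np.arange(m) * np.sin(theta))
        p = noise.conj().T @ a
        spectrum.append(1.0 / np.real(p.conj() @ p))
    return np.array(spectrum)
```

The regime studied in the thesis, where antennas and samples grow together, is precisely where the sample covariance matrix fed to such a routine becomes unreliable and random matrix theory provides the correction.
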
Structures algorithmiques pour les opérateurs d'algèbre géométrique et application aux surfaces quadriques by Stéphane Breuils

1 edition published in 2018 in English and held by 2 WorldCat member libraries worldwide

Geometric algebra is a tool for representing and manipulating geometric objects in a generic, efficient and intuitive way. As an example, Conformal Geometric Algebra (CGA) can represent circles, spheres, planes and lines as algebraic objects. Intersections between these objects are included in the same algebra. More complex geometric objects, such as conics and quadric surfaces, can be expressed and handled using an extension of CGA. However, because their representation requires a high-dimensional vector space, the currently available implementations of geometric algebra do not allow these objects to be used efficiently. In this manuscript, we first present an implementation of geometric algebra dedicated to vector spaces of both low and high dimension. The approach is a hybrid solution with precomputed code for fast execution in low-dimensional vector spaces, similar to state-of-the-art approaches. For high-dimensional vector spaces, we propose computation methods requiring only little memory. For these spaces, we introduce a recursive formalism and prove that the associated algorithms are efficient in terms of both computational and memory complexity. Furthermore, rules are defined to select the most appropriate method, based on the dimension of the vector space under consideration. We show that the resulting implementation is well suited to high-dimensional vector spaces (of dimension 15) as well as to low-dimensional ones. The last part is dedicated to an efficient representation of quadric surfaces using geometric algebra. We study a new geometric-algebra model of the vector space $\mathbb{R}^{9,6}$ for manipulating quadric surfaces. In this model, a quadric surface is constructed from nine points. We show that this model not only represents quadric surfaces intuitively, but also makes it possible to construct objects using the definitions of CGA. We present the computation of the intersection of quadric surfaces, of the normal vector, and of the tangent plane to a surface at a point of this surface. Finally, a complete model for processing quadric surfaces is detailed.
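
By way of contrast with the recursive formalism mentioned above, here is the bitmap representation used by many precomputed-code implementations of geometric algebra, sketched in Python (our illustration, not the thesis's scheme):

```python
def blade_product(a, b, metric=(1, 1, 1)):
    """Geometric product of two basis blades encoded as bit masks
    (bit i set = basis vector e_{i+1} present, e.g. 0b101 = e1^e3):
    the result blade is the XOR of the masks, with a sign coming from
    the reordering swaps and from the metric of the contracted vectors."""
    sign, n = 1, a >> 1
    while n:                                   # count reordering swaps
        sign *= (-1) ** bin(n & b).count("1")
        n >>= 1
    for i, g in enumerate(metric):             # vectors present in both blades
        if (a & b) >> i & 1:
            sign *= g
    return sign, a ^ b

# e1 * e2 = +e12 while e2 * e1 = -e12 (anticommutation):
print(blade_product(0b01, 0b10), blade_product(0b10, 0b01))
```
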
Aspects algorithmiques de la comparaison d'éléments biologiques by Florian Sikora

1 edition published in 2011 in French and held by 2 WorldCat member libraries worldwide

To better grasp the complex links between genotype and phenotype, one method consists in studying the relations between different biological elements (between proteins, between metabolites...). These relations form what is called a biological network, which is represented algorithmically by a graph. In this thesis we are mainly interested in the problem of searching for a motif (a multiset of colours) in a coloured graph representing a biological network. Such motifs generally correspond to a set of elements preserved through evolution and taking part in the same biological function. We continue the algorithmic study of this problem and of its variants (which allow more biological flexibility), identifying the algorithmically hard instances and studying different ways to work around this hardness (parameterized complexity, instance reduction, approximation...). We also propose a plugin for the Cytoscape software to solve this problem efficiently, which we test on real data. We are also interested in several comparative genomics problems. The scientific approach remains the same: starting from the formalization of a biological problem, determine its algorithmically hard instances and propose solutions to work around this hardness (or prove that such solutions are impossible to find under strong hypotheses).
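
For the central problem, here is a naive brute-force reference in Python: find a connected set of vertices whose multiset of colours equals the motif. The thesis is about doing better than this exponential enumeration (parameterized algorithms, reductions, approximation), so this sketch only serves as a specification of the problem:

```python
from itertools import combinations
from collections import Counter

def find_motif(vertices, edges, colour, motif):
    """Graph Motif, brute force: return a connected vertex set whose
    colour multiset equals the motif {colour: count}, or None."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    k = sum(motif.values())
    for subset in combinations(vertices, k):
        if Counter(colour[v] for v in subset) != Counter(motif):
            continue
        seen, todo = {subset[0]}, [subset[0]]   # flood fill inside subset
        while todo:
            for w in adj[todo.pop()] & set(subset):
                if w not in seen:
                    seen.add(w)
                    todo.append(w)
        if len(seen) == k:
            return subset
    return None

# A path a-b-c coloured red, blue, red; motif {red: 1, blue: 1}.
print(find_motif("abc", [("a", "b"), ("b", "c")],
                 {"a": "red", "b": "blue", "c": "red"},
                 {"red": 1, "blue": 1}))  # -> ('a', 'b')
```
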
Grammaires de graphes et langages formels by Trong Hiêu Dinh

1 edition published in 2011 in French and held by 2 WorldCat member libraries worldwide

No abstract available in English
 

Alternative Names
Institut Gaspard Monge. Laboratoire d'informatique

LabInfo IGM

UMR 8049

Languages
French (13)

English (7)