WorldCat Identities

Reynaud, Chantal

Overview
Works: 44 works in 57 publications in 2 languages and 92 library holdings
Genres: Periodicals 
Roles: Editor, Thesis advisor, Opponent, Other, Author
Most widely held works by Chantal Reynaud
Le web sémantique( Book )

1 edition published in 2005 in French and held by 24 WorldCat member libraries worldwide

Fouille du web( Book )

1 edition published in 2007 in French and held by 6 WorldCat member libraries worldwide

Acquisition and validation of expert knowledge by using causal models by Chantal Reynaud( Book )

3 editions published in 1992 in English and held by 5 WorldCat member libraries worldwide

ADELE, un outil d'aide à l'acquisition des connaissances basé sur des justifications by Chantal Reynaud( Book )

2 editions published in 1989 in French and held by 4 WorldCat member libraries worldwide

The work presented in this thesis falls within the scope of knowledge acquisition and explanation in second-generation expert systems (SGES). SGES comprise several knowledge bases that model the domain at different levels of abstraction: expert knowledge (reasoning shortcuts) and deeper knowledge, which makes explicit the associations of facts that experts perform implicitly. While expert knowledge can only be gathered from experts, deeper knowledge can more easily be obtained without them. ADELE assumes this deeper knowledge has already been acquired and uses it to validate expert knowledge. Validation consists in checking that a piece of knowledge is well formalized by attempting to justify it. Carrying out this task leads to proposing possible modifications and additions of related knowledge. ADELE makes two knowledge bases expressed in different formalisms communicate: expert knowledge is represented as production rules, while deeper knowledge is represented in a language close to that of semantic networks. The system has been validated in two application domains: medical diagnosis in electromyography and opening bids in bridge
Knowledge acquisition techniques and second-generation expert systems by M. O Cordier( Book )

3 editions published in 1991 in English and held by 4 WorldCat member libraries worldwide

Une approche adaptative pour la recherche d'information sur le Web by Cédric Pruski( Book )

2 editions published in 2009 in French and held by 2 WorldCat member libraries worldwide

In this work, we address the problem of knowledge evolution for improving web search, in the sense of the relevance of the returned results. The advocated solution uses ontologies, the cornerstone of the semantic web, to represent both the domain targeted by a query and the profile of the user who entered it. Ontologies are considered as knowledge that evolves over time; hence the ontology evolution problem is tackled with regard to the evolution of the targeted domain, but also with respect to the evolution of user profiles. First, by adapting ideas developed in psychology to the knowledge engineering field, we introduce a new paradigm, adaptive ontologies, and a process for making adaptive ontologies smoothly follow the evolution of a domain. Then, we propose an approach exploiting adaptive ontologies for improving web information retrieval. To this end, we first introduce data structures, wpgraphs and w3graphs, for representing web data, and the ASK query language tailored to the extraction of relevant information from these structures. We also propose a set of query enrichment rules based on the exploitation of ontological relations as well as the adaptive elements of the ontology representing the domain targeted by the query and of the one representing the user's view of the domain. Lastly, we present a tool for managing adaptive ontologies and for searching for relevant information on the web, as well as an experimental validation of the introduced concepts. Our validation is based on a realistic case study devoted to the retrieval of scientific articles published in the WWW conference series
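The query enrichment rules mentioned in this abstract can be sketched in miniature: a query's concepts are expanded with their subclasses and with ontologically related concepts. The toy ontology and relation names below are invented for illustration and do not reproduce the thesis's wpgraph/w3graph structures or its actual rule set.

```python
# Hypothetical sketch of ontology-based query enrichment.
SUBCLASS = {                      # child concept -> parent concept
    "conference_paper": "article",
    "journal_paper": "article",
}
RELATED = {                       # concept -> concepts linked by a domain relation
    "article": {"author", "venue"},
}

def enrich(query_terms):
    """Add every subclass of a query term, plus its related concepts."""
    enriched = set(query_terms)
    for term in query_terms:
        # subclasses: any concept whose declared parent is the term
        enriched |= {c for c, parent in SUBCLASS.items() if parent == term}
        # concepts reachable through a domain relation
        enriched |= RELATED.get(term, set())
    return enriched

print(sorted(enrich({"article"})))
# ['article', 'author', 'conference_paper', 'journal_paper', 'venue']
```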
Ontologies et aide à l'utilisateur pour l'interrogation de sources multiples et hétérogènes by Hassen Kefi( Book )

2 editions published in 2006 in French and held by 2 WorldCat member libraries worldwide

The explosion in the number of information sources available on the Web multiplies the need for techniques to integrate multiple, heterogeneous data sources. These techniques rest on the construction of a uniform view of the distributed data, giving the user the feeling of querying a homogeneous, centralized system. The work undertaken in this thesis concerns ontologies as tools to assist the interrogation of an information server. We treated two aspects of ontologies: ontologies as a query refinement tool, on the one hand, and as an aid to unified interrogation, on the other. Concerning the first aspect, we propose to gradually build, interactively with the user, more specific and more constrained queries until fewer and more relevant answers are obtained. Our approach is based on the combined use of a domain ontology and of Galois lattices. Concerning the second aspect, we propose a generic approach to ontology alignment implemented as a semi-automatic process. The approach applies in the presence of an asymmetry in the structure of the compared taxonomies. We propose to use terminological, structural and semantic techniques together, in a precisely defined order. These two aspects were the subject of distinct works carried out within two projects. The first was the Picsel 2 project, carried out in collaboration with France Telecom R&D, whose field of experimentation is tourism. The second was the RNTL eDot project, whose application concerns the analysis of the bacteriological risk of food contamination
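The Galois-lattice side of this refinement approach can be illustrated with a toy formal context: each concept of the lattice pairs a set of answers (extent) with the attributes they share (intent), and moving to a larger intent yields the "more constrained, fewer answers" behaviour the abstract describes. The context below is invented, not taken from the Picsel 2 or eDot experiments.

```python
from itertools import combinations

# Toy formal context: objects (answers) described by binary attributes.
context = {
    "hotel_paris":  {"hotel", "paris"},
    "hotel_lyon":   {"hotel", "lyon"},
    "museum_paris": {"museum", "paris"},
}

def common_attrs(objs):
    """Attributes shared by all given objects (all attributes if none given)."""
    sets = [context[o] for o in objs]
    return set.intersection(*sets) if sets else {a for s in context.values() for a in s}

def objects_with(attrs):
    """Objects possessing every attribute in attrs."""
    return {o for o, s in context.items() if attrs <= s}

def concepts():
    """Enumerate the formal concepts (extent, intent) by closing each object subset."""
    seen = set()
    for r in range(len(context) + 1):
        for objs in combinations(sorted(context), r):
            intent = common_attrs(set(objs))
            extent = objects_with(intent)
            seen.add((frozenset(extent), frozenset(intent)))
    return seen

for extent, intent in sorted(concepts(), key=lambda c: len(c[0])):
    print(sorted(extent), sorted(intent))
```

Brute-force closure is exponential but fine at this scale; real FCA toolkits use incremental algorithms.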
Concise Pattern Learning for RDF Data Sets Interlinking by Zhengjie Fan( )

1 edition published in 2014 in English and held by 2 WorldCat member libraries worldwide

Many datasets are published on the web using semantic web technologies. These datasets contain data representing links to similar resources. If the datasets are connected by correctly built links, users can easily query data across them through a uniform interface, as if they were querying a single dataset. But finding correct links is very difficult, because many comparisons have to be performed. Several solutions have been proposed for this problem: (1) the most direct approach is to compare the attribute values of instances to identify links, but it is impossible to compare all possible pairs of attribute values; (2) another common strategy is to compare instances according to corresponding attributes found by instance-based ontology matching, which generates attribute correspondences from instances; however, it is difficult to identify similar instances across datasets because, in some cases, the values of corresponding attributes differ; (3) several methods use genetic programming to construct interlinking patterns for comparing instances, but they suffer from long run times. In this thesis, an interlinking method is proposed to link similar instances across datasets, based on both statistical learning and symbolic learning. The input consists of two datasets, class correspondences between them, and a sample of links assessed as "positive" or "negative" by a user.
The method builds a classifier that distinguishes correct links from incorrect ones between two RDF datasets, using the set of assessed sample links. The classifier is composed of attribute correspondences between the corresponding classes of the two datasets, which help compare instances and establish links; it is called an interlinking pattern in this thesis. On the one hand, the method discovers potential attribute correspondences for each class correspondence through a statistical learning method, the K-medoids clustering algorithm, using statistics on instance values. On the other hand, it builds an interlinking pattern through a symbolic learning method, version spaces, based on the discovered potential attribute correspondences and the set of assessed sample links. The method can solve the interlinking task even when no combined interlinking pattern covers all the assessed correct links in a concise format. Experiments show that, with only 1% of all links in the sample, the interlinking method achieves a high F-measure (from 0.94 to 0.99)
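The statistical step, clustering attributes by statistics over their instance values, can be sketched with a minimal K-medoids. The feature vectors (e.g. mean value length, digit ratio) and the distance function are assumptions for illustration, not the thesis's actual features.

```python
# Minimal K-medoids sketch over attribute statistics (illustrative data).
def dist(a, b):
    """Manhattan distance between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def k_medoids(points, k, iters=10):
    medoids = list(range(k))          # deterministic init: first k points
    clusters = {}
    for _ in range(iters):
        # assign every point to its nearest medoid
        clusters = {m: [] for m in medoids}
        for i, p in enumerate(points):
            best = min(medoids, key=lambda m: dist(p, points[m]))
            clusters[best].append(i)
        # re-elect each medoid as the member minimizing total intra-cluster distance
        new = [min(members,
                   key=lambda c: sum(dist(points[c], points[i]) for i in members))
               for members in clusters.values()]
        if sorted(new) == sorted(medoids):
            break                     # converged
        medoids = new
    return clusters

# e.g. (mean length, digit ratio) statistics for four attributes
stats = [(4.0, 0.0), (4.2, 0.1), (9.0, 1.0), (8.8, 0.9)]
print(k_medoids(stats, 2))  # {0: [0, 1], 2: [2, 3]}
```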
Modélisation logique et générique des systèmes d'hypermédias adaptatifs by Cédric Jacquiot( Book )

2 editions published in 2006 in French and held by 2 WorldCat member libraries worldwide

Adaptive hypermedia systems, like all knowledge-based systems, can be divided into two parts: a static part, which represents the data of the domains to be handled, and a dynamic part, devoted to processing that data in various ways. Existing data models are often hard to reuse because they are either very specific to a particular application, or very general and, in that case, rarely possible to specialize for a particular application domain. Existing adaptation models are often limited to rule languages that make no distinction between the different kinds of data. This thesis proposes generic data models, described as UML class diagrams, from which models specific to an application domain can be created by specialization. It also presents an adaptation model entirely described in first-order predicate calculus, based on situational logic and equipped with a multi-level rule language that handles the different types of data at different levels of the language. This language further introduces a form of meta-adaptation through the selection of domain traversal strategies. The thesis introduces the notion of traversal by segments, an intermediate solution between free and guided traversal. The models as a whole are used to build an application in the e-learning domain
Construction automatisée de l'ontologie des systèmes médiateurs : application à des systèmes intégrant des services standards accessibles via le Web by Gloria-Lucia Giraldo Gómez( Book )

2 editions published in 2005 in French and held by 2 WorldCat member libraries worldwide

In this thesis, we propose a method for the semi-automatic construction of the domain ontology of the PICSEL mediator system integrating XML sources. The approach is based on exploiting the structure of XML documents as represented in DTDs. First, we propose a method to semi-automatically build an ontology from a set of semantically heterogeneous DTDs covering the mediator's domain. Second, we propose a more automated approach that exploits standardized DTDs defined by organizations working on the normalization of commercial transactions; these DTDs represent the structure of messages for a given domain and are used for communication in e-commerce. Reusing standardized descriptions enabled us to fully automate the construction of the mediator's knowledge bases (the ontology and the description of the functionalities of the integrated services), of the interface between users and the mediator system, and of the wrappers. This allows an easier and faster integration of new sources and simple handling by non-expert users. We developed an application in the tourism domain, exploiting the OTA (Open Travel Alliance) standards and allowing the PICSEL mediator to integrate services on the Web. This led us to study the consequences of this change of context on the use of the PICSEL engine with respect to the ontology specification, the description of the functionalities of the services, the specification of users' queries and the interpretation of their rewritings produced by the mediator
Interopérabilité sémantique dans le domaine du diagnostic in vitro : Représentation des Connaissances et Alignement by Melissa Mary( )

2 editions published in 2017 in French and held by 2 WorldCat member libraries worldwide

The centralization of patient data in different digital repositories raises issues of interoperability with the various medical information systems, such as those used in clinics, pharmacies or medical laboratories. The public health authorities, in charge of developing and implementing these repositories, recommend the use of standards to structure (syntax) and encode (semantics) health information. For data from in vitro diagnostics (IVD), two standards are recommended: the LOINC® terminology (Logical Observation Identifiers Names and Codes) to represent laboratory tests, and the SNOMED CT® ontology (Systematized Nomenclature Of MEDicine Clinical Terms) to express the observed results. This thesis focuses on semantic interoperability problems in clinical microbiology, along two major axes. How can an IVD knowledge organization system be aligned with SNOMED CT®? To answer this, I opted to develop alignment methodologies adapted to in vitro diagnostic data rather than propose a method specific to SNOMED CT®. Common alignment methods are evaluated on a gold-standard alignment between LOINC® and SNOMED CT®. The most appropriate ones are implemented in an R library, which serves as a starting point for creating new alignments at bioMérieux. What are the advantages and limits of a formal representation of IVD knowledge? To answer this, I looked into the formalization of the test-result pair (observation) in a laboratory report. I proposed a logical formalization of the LOINC® terminology and demonstrated the advantages of an ontological representation for sorting and querying laboratory tests. As a second step, I formalized an observation pattern compatible with the SNOMED CT® ontology and aligned with the concept of the top-level ontology BioTopLite2. Finally, the observation pattern was evaluated for use within clinical microbiology expert systems. To summarize, this thesis addresses issues in the sharing and reuse of IVD patient data.
At present, the problems of semantic interoperability and knowledge formalization in the field of in vitro diagnostics hamper the development of expert systems. My research has removed some of these obstacles and could be used in new intelligent clinical microbiology systems, for example to monitor the emergence of multi-resistant bacteria and adapt antibiotic therapies accordingly
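Evaluating alignment methods against a gold standard, as done here for LOINC®/SNOMED CT®, typically reduces to precision, recall and F-measure over sets of correspondences. The correspondences below are invented examples, not entries from the actual gold standard.

```python
# Alignment evaluation sketch: compare a produced alignment to a reference one.
gold = {("loinc:600-7", "sct:Blood culture"),
        ("loinc:634-6", "sct:Stool culture")}
found = {("loinc:600-7", "sct:Blood culture"),
         ("loinc:555-1", "sct:Urine culture")}

def f_measure(found, gold):
    """Precision, recall and F1 of found correspondences w.r.t. the gold standard."""
    tp = len(found & gold)                      # true positives
    precision = tp / len(found)
    recall = tp / len(gold)
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f

p, r, f = f_measure(found, gold)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.5 0.5 0.5
```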
Extraction et gestion des connaissances : EGC'2014 by Journées internationales francophones sur l'extraction et la gestion des connaissances( Book )

1 edition published in 2014 in French and held by 2 WorldCat member libraries worldwide

Gestion de l'incertitude dans le processus d'extraction de connaissances à partir de textes by Fadhela Kerdjoudj( )

1 edition published in 2015 in French and held by 2 WorldCat member libraries worldwide

The increase of textual sources on the Web offers an opportunity for knowledge extraction and knowledge base creation. Recently, several research works on this topic have appeared or intensified. They generally highlight that extracting relevant and precise information from text requires a collaboration between linguistic approaches, e.g., to extract certain concepts regarding named entities and temporal and spatial aspects, and methods originating from the field of semantic processing. Moreover, successful approaches also need to qualify and quantify the uncertainty present in the text. Finally, in order to be relevant in the context of the Web, the linguistic processing needs to consider several sources in different languages. This PhD thesis tackles this problem in its entirety, since our contributions cover the extraction and representation of uncertain knowledge as well as the visualization of the generated graphs and their querying. This research work was conducted within a CIFRE agreement involving the Laboratoire d'Informatique Gaspard Monge (LIGM) of the Université Paris-Est Marne-la-Vallée and the GEOLSemantics start-up, leveraging years of accumulated experience in natural language processing (GEOLSemantics) and semantic processing (LIGM). In this context, our contributions are the following: the integration, within the knowledge extraction process, of a qualification of different forms of uncertainty, based on ontology processing; the quantification of uncertainties based on a set of heuristics; a representation, using RDF graphs, of the extracted knowledge and its uncertainties; and an evaluation and analysis of the results obtained using our approach
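One conventional way to attach an uncertainty score to an extracted statement in RDF is reification: the statement becomes a resource that a confidence property can describe. This is a generic sketch, not the thesis's or GEOLSemantics' actual vocabulary; the `ex:` terms are invented.

```python
# Sketch: represent an uncertain extracted statement as RDF reification triples.
def reify(stmt_id, s, p, o, confidence):
    """Return triples describing statement (s, p, o) with a confidence value."""
    return [
        (stmt_id, "rdf:type", "rdf:Statement"),
        (stmt_id, "rdf:subject", s),
        (stmt_id, "rdf:predicate", p),
        (stmt_id, "rdf:object", o),
        (stmt_id, "ex:confidence", str(confidence)),  # ex: is a made-up namespace
    ]

for triple in reify("stmt1", "ex:CompanyX", "ex:locatedIn", "ex:Paris", 0.7):
    print(triple)
```

RDF-star (`<< s p o >> ex:confidence 0.7`) is a more compact modern alternative to classic reification.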
Découverte de mappings dans un système pair-à-pair sémantique by François-Elie Calvier( Book )

2 editions published in 2010 in French and held by 2 WorldCat member libraries worldwide

The richness of answers to queries asked of peer-to-peer data management systems (PDMS) depends on the number of mappings between the ontologies of different peers. Increasing this number can improve the answers to queries; this is the problem considered in this thesis. We aim to discover semantic links between the ontologies of different peers. This problem, known as ontology alignment, is specific in peer-to-peer systems, in which ontologies are not completely known a priori, the number of ontologies to align is very large, and alignment must be done without any centralized control. We propose semi-automatic techniques for identifying (1) mapping shortcuts, corresponding to compositions of existing mappings, and (2) new mappings that cannot be inferred in the current state of the system. These techniques are based on the reasoning mechanisms of PDMS and on filtering criteria restricting the number of pairs of elements to align. Mapping shortcuts are identified from the analysis of the trace of queries asked by users, and then filtered by criteria assessing their usefulness. The discovery of new mappings consists in identifying the elements of a given peer's ontology judged interesting and then selecting the elements of distant peers with which it is relevant to align them. The proposed alignment techniques are either adaptations of existing techniques or innovative techniques exploiting the specificities of our framework
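The shortcut idea, composing existing mappings across peers, can be sketched as follows: if peer P1 maps a concept to P2 and P2 maps that concept on to P3, a direct P1-to-P3 shortcut can be proposed. Peers and concept names below are invented; the thesis additionally filters shortcuts by usefulness criteria not shown here.

```python
# Sketch of mapping-shortcut discovery by composing existing mappings.
# Each mapping sends a (peer, concept) pair to another (peer, concept) pair.
mappings = {
    ("P1", "Hotel"): ("P2", "Lodging"),
    ("P2", "Lodging"): ("P3", "Accommodation"),
}

def shortcuts(mappings):
    """Propose composed mappings that differ from the ones already present."""
    out = {}
    for src, mid in mappings.items():
        end = mappings.get(mid)          # follow the chain one more hop
        if end is not None and mappings.get(src) != end:
            out[src] = end
    return out

print(shortcuts(mappings))  # {('P1', 'Hotel'): ('P3', 'Accommodation')}
```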
L'exploitation de modèles de connaissances du domaine dans le processus de développement d'un système à base de connaissances by Chantal Reynaud( Book )

2 editions published in 1999 in French and held by 2 WorldCat member libraries worldwide

Annotation sémantique de documents semi-structurés pour la recherche d'information by Mouhamadou Thiam( Book )

2 editions published in 2010 in French and held by 2 WorldCat member libraries worldwide

The semantic web is defined by a set of methods and technologies enabling software agents to reason about the content of Web resources. This vision of the Web depends on the construction of ontologies and the use of metadata to represent these resources. The objective of our thesis is to semantically annotate tagged documents related to a domain of interest. These documents may contain well-structured nodes as well as textual ones. We assume a domain ontology defined by concepts, relations between these concepts, and their properties. This ontology includes a lexical component (labels, a set of named entities (NEs) and terms) for each concept. We defined SHIRI-Extract, an automatic and domain-independent approach that extracts terms and NEs and aligns them with the concepts of the ontology. The alignment uses the lexical component or the Web to discover new terms. We defined an annotation model that represents the results of extraction and annotation. The metadata of this model distinguish nodes depending on how terms and NEs are aligned and aggregated within a node. The model can also represent the structural neighborhood relations between nodes. We defined SHIRI-Annot, a set of declarative rules to annotate the nodes and their relations. The resulting RDF(S) annotation base can be queried using SPARQL. We implemented and evaluated our approach on a collection of calls for participation in computer science conferences
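Querying an RDF(S) annotation base with SPARQL reduces, at its core, to matching triple patterns and binding variables. The tiny matcher below imitates a one-pattern SELECT over invented SHIRI-style triples; the `shiri:`/`onto:` vocabulary is assumed for illustration, not the thesis's exact schema.

```python
# Minimal triple-pattern matching, standing in for a SPARQL SELECT.
triples = {
    ("node12", "shiri:annotatedBy", "onto:Deadline"),
    ("node12", "shiri:neighborOf", "node13"),
    ("node13", "shiri:annotatedBy", "onto:Conference"),
}

def match(pattern, triples):
    """Yield variable bindings for one (s, p, o) pattern; '?x' marks variables."""
    for t in triples:
        binding, ok = {}, True
        for term, value in zip(pattern, t):
            if term.startswith("?"):
                if binding.get(term, value) != value:  # repeated variable must agree
                    ok = False
                    break
                binding[term] = value
            elif term != value:                        # constant must match exactly
                ok = False
                break
        if ok:
            yield binding

# SELECT ?n WHERE { ?n shiri:annotatedBy onto:Conference }
nodes = sorted(b["?n"] for b in match(("?n", "shiri:annotatedBy", "onto:Conference"), triples))
print(nodes)  # ['node13']
```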
Knowledge Discovery for Avionics Maintenance : An Unsupervised Concept Learning Approach by Luis Palacios Medinacelli( )

1 edition published in 2019 in English and held by 1 WorldCat member library worldwide

In this thesis we explore the problem of signature analysis in avionics maintenance, to identify failures in faulty equipment and suggest corrective actions to resolve them. The thesis takes place in the context of a CIFRE agreement between Thales R&T and the Université Paris-Sud, so it has both a theoretical and an industrial motivation. The signature of a failure provides all the information necessary to understand, identify and ultimately repair the failure; when identifying a signature, it is therefore important to make it explainable. We propose an ontology-based approach to model the domain, which provides a level of automatic interpretation of the highly technical tests performed on the equipment. Once the tests can be interpreted, corrective actions are associated with them. The approach is rooted in concept learning, used to approximate description logic concepts that represent the failure signatures. Since these signatures are not known in advance, we require an unsupervised learning algorithm to compute the approximations. In our approach, the learned signatures are provided as description logic (DL) definitions, which in turn are associated with a minimal set of axioms in the ABox. These serve as explanations for the discovered signatures, providing a glass-box approach to trace how and why a signature was obtained. Current concept learning techniques are either designed for supervised learning problems or rely on frequent patterns and large amounts of data. We take a different perspective and rely on a bottom-up construction of the ontology. As in other approaches, the learning process is achieved through a refinement operator that traverses the space of concept expressions, but an important difference is that in our algorithms this search is guided by the information of the individuals in the ontology.
To this end, the notions of justifications in ontologies, most specific concepts and concept refinements are revised and adapted to our needs. The approach was then adapted to the specific avionics maintenance case at Thales Avionics, where a prototype was implemented to test and evaluate it as a proof of concept
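A refinement operator guided by the individuals in the ontology can be caricatured over purely conjunctive concepts: each refinement step adds one atom, and only refinements that still cover some individual are kept. The DL setting of the thesis is far richer; the atoms and instance data below are invented.

```python
# Sketch of a downward refinement operator pruned by instance coverage.
instances = {                      # individual -> atomic concepts it satisfies
    "eq1": {"Faulty", "PowerTest"},
    "eq2": {"Faulty", "SignalTest"},
    "eq3": {"Ok", "PowerTest"},
}
ATOMS = {"Faulty", "Ok", "PowerTest", "SignalTest"}

def covers(concept, ind):
    """A conjunctive concept (set of atoms) covers an individual it is a subset of."""
    return concept <= instances[ind]

def refine(concept):
    """Yield one-atom specializations that still cover at least one individual."""
    for atom in sorted(ATOMS - concept):
        candidate = concept | {atom}
        if any(covers(candidate, i) for i in instances):
            yield candidate

for c in refine(frozenset({"Faulty"})):
    print(sorted(c))
```

Starting from `Faulty`, the operator keeps `Faulty ⊓ PowerTest` and `Faulty ⊓ SignalTest` but discards `Faulty ⊓ Ok`, which covers nothing.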
L'évolution du web de données basée sur un système multi-agents by Fatma Chamekh( )

1 edition published in 2016 in French and held by 1 WorldCat member library worldwide

In this thesis, we investigate the evolution of RDF datasets built from documents and Linked Open Data (LOD). We identify the following issues: the integration of new triples, the proposal of changes that takes data quality into account, and the management of different versions. To handle the complexity of web-of-data evolution, we propose an agent-based argumentation framework. We assume that agent specifications can facilitate the process of RDF dataset evolution; agent technology is one of the most useful solutions for coping with a complex problem. The agents work as a team and are autonomous in the sense that they can decide for themselves which goals to adopt and how to achieve them. The agents use argumentation theory to reach a consensus about the best change alternative. To this end, we propose an argumentation model based on metrics related to the intrinsic quality dimensions. To keep a record of all the modifications that occurred, we focus on resource versioning. In a collaborative environment, several conflicts can be generated; to manage these conflicts, we define rules. The application domain is general medicine
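Consensus via argumentation can be illustrated with an abstract argumentation framework: the grounded extension collects the arguments that can be defended against every attack. The arguments below are bare labels for illustration, not the thesis's quality-metric-based arguments.

```python
# Grounded extension of an abstract argumentation framework (Dung-style sketch).
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}   # a attacks b, b attacks c

def grounded(args, attacks):
    """Iteratively accept arguments whose every attacker is counter-attacked
    by an already-accepted argument (unattacked arguments enter first)."""
    accepted = set()
    changed = True
    while changed:
        changed = False
        for x in args - accepted:
            attackers = {y for (y, z) in attacks if z == x}
            if all(any((d, y) in attacks for d in accepted) for y in attackers):
                accepted.add(x)
                changed = True
    return accepted

print(sorted(grounded(args, attacks)))  # ['a', 'c']
```

Here `a` is unattacked, so it is accepted; `b` is defeated by `a`; `c` is reinstated because its only attacker `b` is defeated.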
Mapping Adaptation between Biomedical Knowledge Organization Systems by Julio Cesar Dos Reis( )

1 edition published in 2014 in English and held by 1 WorldCat member library worldwide

Modern biomedical information systems require exchanging and retrieving data between them, due to the overwhelming amount of data generated in this domain. Knowledge Organization Systems (KOSs) offer means to make the semantics of data explicit, which in turn facilitates their exploitation and management. The evolution of semantic technologies has led to the development and publication of an ever-increasing number of large KOSs for specific sub-domains like genomics, biology, anatomy, diseases, etc. The size of the biomedical field demands the combined use of several KOSs, which is only possible through the definition of mappings. Mappings interconnect entities of domain-related KOSs via semantic relations. They play a key role as references enabling advanced interoperability tasks between systems, allowing software applications to interpret data annotated with different KOSs. However, to remain useful and reflect the most up-to-date knowledge of the domain, KOSs evolve and new versions are periodically released. This potentially impacts established mappings, demanding methods to ensure, as automatically as possible, their semantic consistency over time. Manual maintenance of mappings is an alternative only when a restricted number of mappings is involved; otherwise, supporting methods are required for very large and highly dynamic KOSs. To address this problem, this PhD thesis proposes an original approach to adapt mappings based on changes detected during KOS evolution. The proposal consists in interpreting the established correspondences to identify the relevant KOS entities on which their definition relies and, based on the evolution of these entities, proposing actions suited to modifying the mappings.
Through this investigation, (i) we conduct in-depth experiments to understand the evolution of KOS mappings; we propose automatic methods (ii) to analyze mappings affected by KOS evolution, and (iii) to recognize the evolution of the concepts involved in mappings via change patterns; finally, (iv) we design techniques relying on heuristics explored by novel algorithms to adapt mappings. This research produced a complete framework for mapping adaptation, named DyKOSMap, and an implementation of a software prototype. We thoroughly evaluated the proposed methods and the framework with real-world datasets containing several releases of mappings between biomedical KOSs. The results of the experimental validations demonstrate the overall effectiveness of the underlying principles of the proposed approach. The scientific contributions of this thesis make it possible to maintain mappings largely automatically and with reasonable quality, which improves the support for mapping maintenance and consequently ensures better interoperability over time
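Mapping adaptation driven by detected KOS changes can be sketched as a dispatch over change types: redirect mappings whose target was merged, drop mappings to deleted concepts, keep the rest. This follows the spirit, not the letter, of DyKOSMap; the change vocabulary below is an assumption.

```python
# Sketch: adapt mappings according to changes detected between KOS releases.
mappings = [("kosA:C1", "kosB:D1"), ("kosA:C2", "kosB:D2")]
changes = {
    "kosB:D1": ("merged_into", "kosB:D9"),   # D1 was merged into D9
    "kosB:D2": ("unchanged", None),
}

def adapt(mappings, changes):
    adapted = []
    for src, tgt in mappings:
        op, new = changes.get(tgt, ("unchanged", None))
        if op == "merged_into":      # redirect the mapping to the merge target
            adapted.append((src, new))
        elif op == "deleted":        # drop mappings whose target disappeared
            continue
        else:                        # target untouched: keep the mapping as-is
            adapted.append((src, tgt))
    return adapted

print(adapt(mappings, changes))
# [('kosA:C1', 'kosB:D9'), ('kosA:C2', 'kosB:D2')]
```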
ADELE : un outil pour l'acquisition des connaissances basé sur des justifications by Chantal Reynaud( Book )

1 edition published in 1989 in French and held by 1 WorldCat member library worldwide

 
Audience level: 0.89 (from 0.79 for Knowledge ... to 0.99 for Fouille du ...)

Alternative Names
Delaître, Chantal Reynaud-

Reynaud-Delaître, Chantal

Languages
French (24)

English (9)