WorldCat Identities

Le Thanh, Nhan (1952-....).

Overview
Works: 29 works in 36 publications in 2 languages and 45 library holdings
Roles: Thesis advisor, Opponent, Author, Other, Editor
Most widely held works by Nhan Le Thanh
Contribution à l'étude de la généralisation et de l'association dans une base de données relationnelle : les iso-dépendances et le modèle B-relationnel by Nhan Le Thanh( Book )

3 editions published in 1986 in French and held by 5 WorldCat member libraries worldwide

Definition, within Codd's relational model, of a type of dependency and of a decomposition approach based on "projection/division". Semantic extension of the model: the B-relational model. Formal study of the new data structures and operators
Un langage de requêtes déductif pour objets persistants by Martine Poulard-Collard( Book )

1 edition published in 1993 in French and held by 3 WorldCat member libraries worldwide

The subject of this thesis lies in the context of logic-based and object-oriented database languages: we define the language R & O (Rules and Objects), a typed rule language whose syntax and semantics integrate the fundamental concepts of the object approach. Our first objective is to propose a purely declarative query language that satisfies the properties of closure and adequacy to the data model. Moreover, convinced of the need to integrate different programming styles, we seek compatibility of R & O with an object-oriented programming language. The object language chosen is Eiffel, for its reusability and reliability. The state of the art begins with a presentation of the links between first-order logic and the database concept, and a description of the Datalog language, which characterises relational deductive databases. The complexity of data in new application domains has highlighted the limits of the relational model and led to the definition of logic languages adapted to complex data, such as COL, LDL, IQL, F-logic or LILA. These languages are classified according to two approaches: the value-based approach and the object-based approach. The data model underlying R & O takes into account the concepts of object, identifier, class, type, attribute, method and inheritance, as well as collections, which play an essential role in satisfying closure. The syntax of the language is based on three types of logic rules. It is defined incrementally: we extend it progressively to reach rules including negative literals, set terms, evaluable literals and functional methods. We show that R & O can be associated with the object language Eiffel without major compatibility problems. The integration is facilitated by using a persistent version of the language. 
We then define a minimal-model semantics for R & O programs made of pure rules without negative literals. The semantics is then extended to rules including negation: we show that the stable-model semantics can be applied to these rules. The semantics of rules with evaluable literals and with functional methods is also studied. The problem of evaluating pure rules without negative literals is addressed by adapting an original method proposed for Datalog, relying solely on solving numeric inequations to compute a minimal model of a logic program
Modélisation du contrôle de conformité en construction : une approche ontologique by Anastasiya Yurchyshyna( Book )

2 editions published in 2009 in English and held by 2 WorldCat member libraries worldwide

In this work we are interested in modelling the conformity-checking process in the construction domain. The main objective of this research was to model the process of checking whether a construction project (e.g. a public building) complies with a set of conformity requirements defined in construction regulations (i.e. a set of conformity constraints extracted from construction-related legal texts). We propose a formalisation of construction projects and conformity constraints, elaborate reasoning mechanisms that automate the conformity-checking process by identifying possible causes of non-conformity, and develop a global conformity-checking model that integrates expert knowledge. Having identified the absence of a structured and explicit model that integrates the whole complexity of the knowledge involved in the checking process and increases its effectiveness, we have developed a general conformity-checking model with three main contributions: an ontological approach for the formal representation of knowledge concerning conformity checking (a conformity-checking ontology, conformity requirements, and a construction project oriented to conformity checking); a method for semantic annotation and organisation of conformity queries that integrates domain knowledge; and a model of the conformity-checking process adopted by checking engineers, based on matching project annotations against conformity queries and on scheduling conformity queries for effective checking. The results of our research have been validated by the development of a web application that uses the semantic engine CORESE and the SeWeSe/Tomcat environment for developing Semantic Web applications. The knowledge is formalised in the languages RDF, OWL-Lite and SPARQL. 
We carried out experiments on construction projects and a set of regulation texts, relating to the accessibility of public buildings, provided by the Centre Scientifique et Technique du Bâtiment (CSTB)
Contribution to abductive reasoning with concepts in description logics : an application to ontology-based semantic matchmaking for tourism information systems by Viet-Hoang Vu( Book )

2 editions published in 2011 in English and held by 2 WorldCat member libraries worldwide

Today, travel and tourism is a sector that plays an increasingly important role in the modern economy. To support the development of an electronic marketplace for tourism, we adopt an ontology-based semantic matchmaking method proposed in the literature to deal with the heterogeneity of the domain. The idea is to use Description Logics (DLs) to represent the semantics of demands and supplies available on the marketplace with reference to an ontology, and then employ automated reasoning services to classify them and propose the best potential matches. Semantic matchmaking thereby facilitates the discovery and negotiation process in the marketplace. The method can also be used to assist in ontology mapping, an important process for providing semantic interoperability between heterogeneous tourism systems. To realize the matchmaking process, a new non-standard inference, Concept Abduction, was developed for the rather inexpressive DL ALN. Because representing ontologies in the tourism domain generally requires more expressivity, we have to extend this inference to the more expressive DL SHIQ, and that is the first main objective of this thesis. Furthermore, it is acknowledged that travel and tourism is so highly heterogeneous that no single global ontology can cover the whole domain; instead, distributed and modular ontologies have to be used. That leads to the second objective of this thesis: developing Concept Abduction for the package-based DL SHIQP, an extension of SHIQ for distributed and modular ontologies. Finally, we propose an architecture to realize a semantic matchmaker for distributed tourism information systems
Méthodes d'optimisation du contrôle d'intégrité et d'étude de la consistance des contraintes par les tables de décision by Faten Labene dit Kalti( Book )

1 edition published in 1992 in French and held by 2 WorldCat member libraries worldwide

The cost of semantic integrity checking in relational DBMSs is an additional cost on top of database updates. The approach presented in this study exploits the concept of decision tables to optimise the cost of checking and, where possible, to compute the minimal set of constraints and to detect any inconsistency among them. A semantic integrity constraint is a closed formula of the tuple relational calculus put into prenex disjunctive normal form. We divide constraints into two classes: elementary constraints (defined on a single relation and to be satisfied by every tuple of that relation) and general constraints (all other types). The set of elementary constraints defined on the same base relation is represented with a decision-table structure and can thus be evaluated without accessing the database. This representation yields solutions for computing the minimal set of constraints and for detecting their inconsistency. These treatments, as well as integrity checking itself, reduce to computations in the algebra of binary vectors. For general constraints, we show that it is possible to define an index with the structure of a decision table, which is exploited to speed up the evaluation of selections. The use of binary vectors led us to a join method that is well suited to integrity checking and handles quantifiers implicitly. This method uses binary vectors and does not build new tuples. Finally, a specification of the implementation of these methods in the NICE-C++ OODBMS is proposed
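The reduction of elementary constraints to binary-vector computation can be sketched as follows; the atomic conditions, constraint masks and encoding below are invented for illustration and are not taken from the thesis:

```python
# Toy sketch: elementary constraints on one relation evaluated via bit
# vectors, in the spirit of a decision table. All conditions, masks and
# field names are hypothetical.

# Each atomic condition on a tuple is assigned one bit.
CONDITIONS = [
    lambda t: t["age"] >= 18,               # bit 0
    lambda t: t["salary"] > 0,              # bit 1
    lambda t: t["dept"] in ("R&D", "HR"),   # bit 2
]

def outcome_vector(tuple_):
    """Encode which atomic conditions hold for a tuple as a bit vector."""
    v = 0
    for i, cond in enumerate(CONDITIONS):
        if cond(tuple_):
            v |= 1 << i
    return v

# Each constraint is a disjunction of atomic conditions, stored as a mask;
# a tuple satisfies it iff at least one of the masked bits is set.
CONSTRAINTS = {
    "adult": 0b001,
    "paid_or_known_dept": 0b110,
}

def check(tuple_):
    """Check every constraint with pure bit operations, no table access."""
    v = outcome_vector(tuple_)
    return {name: bool(v & mask) for name, mask in CONSTRAINTS.items()}
```

The point of the encoding is that once the outcome vector of a tuple is known, all constraints are evaluated by bitwise AND, without revisiting the data.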
NICE-C++ : une extension C++ pour la programmation persistante à partir d'un serveur de bases d'objets by Gabriel Mopolo-Moke( Book )

1 edition published in 1991 in French and held by 2 WorldCat member libraries worldwide

The TOOTSI project is a European research project that started in February 1989 and ended in February 1991. It aims at improving the use of existing servers and data banks. To carry out this project, a number of development tools were deemed necessary, notably a programming language able to support complex and multimedia types, the notion of persistence, sharing, and certain semantic links such as inheritance and association. The selected language, C++, does not have all these characteristics. The purpose of this study is to propose, within the TOOTSI project, an approach for extending C++ towards persistent programming and support for new types. Our approach does not rewrite or modify the C++ compiler but instead uses the concepts of data abstraction, polymorphism and inheritance already present in the language. The first part surveys the state of the art in object-oriented and persistent programming. The system is defined by: 1) a programming model founded on an extension of the C++ object model and type system; 2) interfaces for manipulating objects and meta-objects; 3) an object server for persistence management. The final objective of our work is to allow NICE-C++ to support complex and multimedia types, association links, persistence and sharing, in order to meet the needs of the TOOTSI project
Une approche basée sur LD pour l'interrogation de données relationnelles dans le Web sémantique by Thi Dieu Thu Nguyen( Book )

2 editions published in 2008 in English and held by 2 WorldCat member libraries worldwide

The Semantic Web is a new Web paradigm that provides a common framework for data to be shared and reused across application, enterprise and community boundaries. The biggest problem we face right now is how to "link" information coming from sources that are often heterogeneous both syntactically and semantically. Today much information is stored in relational databases, so integrating data from relational sources into the Semantic Web is in high demand. The objective of this thesis is to provide methods and techniques to address this problem. It proposes an approach based on a combination of ontology-based schema representation and description logics. Database schemas in this approach are designed using the ORM methodology; the stability and flexibility of ORM facilitate the maintenance and evolution of integration systems. A new web ontology language and its logical foundation are proposed in order to capture the semantics of relational data sources while still ensuring decidable and automated reasoning over information from the sources. An automatic translation of ORM models into ontologies is introduced, allowing the data semantics to be captured without laborious and error-prone manual work. This mechanism foresees the coexistence of other sources, such as hypertext, integrated into the Semantic Web environment. This thesis contributes advances in several fields, namely data integration, ontology engineering, description logics and conceptual modeling. It is hoped to provide a foundation for further investigation of data integration from relational sources into the Semantic Web
De l'optimisation à la décomposition de l'ontologie dans la logique de description by Thi Anh Le Pham( Book )

2 editions published in 2008 in French and held by 2 WorldCat member libraries worldwide

Reasoning with a large knowledge base in description logics (DLs) is a real challenge because inferences are intractable even for relatively inexpressive description logic languages. Indeed, the presence of axioms in the terminology (TBox) is one of the main causes of the exponential growth of the search space explored by inference algorithms. Reasoning in description logics essentially amounts to testing subsumption relations between concepts, so one always seeks to optimize this reasoning. Techniques for improving the performance of a DL reasoner fall naturally into three levels. The first, the conceptual level, considers techniques for optimizing the structure of axioms in the TBox. The second, the algorithmic level, examines techniques to reduce the storage requirements of tableau algorithms and the number of subsumption tests. The third concerns strategies for optimizing the queries posed to a knowledge base. In this thesis, we study an approach to ontology decomposition called "overlap decomposition", which aims both at optimizing reasoning and at a methodology for ontology design. For reasoning, the goal is a method that subdivides the axiom set of an ontology into a set of sub-ontologies, each containing a subset of the axioms of the original ontology, which allows a relative reduction of reasoning time. For design methodology, an ontology can be replaced by a set of ontologies in a more or less "optimal" organisation. For the first objective, the overlap decomposition of an ontology results in the decomposing ontology (decomposing TBox), represented in distributed description logic. Intuitively, running the reasoning algorithms simultaneously on these sub-ontologies, each having a reduced search space, can lead to a relative reduction of reasoning time. An important property of this ontology is that it is interpreted over the same domain as the original ontology. 
On this basis we propose two reasoning algorithms for the decomposing ontology. Regarding the design-methodology goal, we introduce two methods of ontology decomposition based on heuristic graph decomposition: one uses minimal separators of graphs, the other the normalised cut of graph regions
Localité sémantique et mécanisme DLRU : application au système NICE-C++ by Jing-Tong Dong( Book )

1 edition published in 1992 in French and held by 2 WorldCat member libraries worldwide

The subject of this thesis revolves around two factors that matter particularly in the physical design of a DBMS. The first is understanding data-manipulation behaviour: the notion of semantic locality is introduced to capture data-manipulation behaviour in an environment with semantic data structures, in which referencing one data page often involves other data pages, i.e. there is a semantic correlation of references between these pages. Semantic locality makes it possible to partition a database into sets of page groups such that, within each group, there is a strong semantic correlation of references between the pages. The second is the choice of a memory-management mechanism: the DLRU mechanism is developed to exploit semantic locality in managing working memory. This mechanism combines a set of LRU stacks, maintaining separate stacks for the different page groups. The memory of the DLRU algorithm is managed as a double-level LRU stack: a set of LRU stacks for the pages of the different groups, and a global LRU stack over these page stacks. The study has three parts. The first is a state of the art on memory-management mechanisms in computer systems. The second introduces semantic locality, develops the DLRU mechanism, and evaluates the performance of the DLRU algorithm. The third is devoted to the specification and implementation of the DLRU algorithm in the NICE-C++ system
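A minimal sketch of a double-level LRU buffer in this spirit is shown below; the class name, API and eviction details are assumptions rather than the thesis's specification, and pages are taken to be pre-assigned to semantic groups:

```python
from collections import OrderedDict

class DLRU:
    """Double-level LRU sketch: one LRU stack of pages per semantic group,
    plus a global LRU over the groups themselves."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.size = 0
        # group id -> LRU stack of its pages; the dict order of `groups`
        # itself is the global LRU over groups (oldest first).
        self.groups = OrderedDict()

    def reference(self, group, page):
        # Referencing any page makes its group the globally most recent.
        stack = self.groups.pop(group, OrderedDict())
        self.groups[group] = stack
        if page in stack:
            stack.move_to_end(page)   # page becomes most recent in its group
            return
        if self.size == self.capacity:
            self._evict()
        stack[page] = True
        self.size += 1

    def _evict(self):
        # Victim: least recently used page of the least recently used group.
        victim_group = next(iter(self.groups))
        stack = self.groups[victim_group]
        stack.popitem(last=False)
        if not stack:
            del self.groups[victim_group]
        self.size -= 1
```

The design choice mirrored here is that eviction pressure falls on whole groups first, so pages that are semantically correlated with recent references stay resident together.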
Modèle de données B-Rel et approche logique B-Log by Evelyne Vittori( Book )

1 edition published in 1992 in French and held by 2 WorldCat member libraries worldwide

Many data models have been proposed to overcome the limits of the relational model and to allow natural modelling of the complex data used in new database applications. The B-Rel model, defined from the B-relational model of N. Le Thanh, follows this trend. Its objective is to introduce a uniform general framework allowing the progressive extension of the relational model towards the object-oriented paradigm while preserving its achievements as much as possible. The purely value-centred character of the relation concept is thus preserved (B-relation), and the domain concept is extended in several steps towards the concept of object class (B-domain). The link between the two concepts is represented by the notions of B-tuple identifier and transversal access operator, which allow links to be established between values located in different B-relations. The logical foundations of the B-Rel model are defined from an object-centred logic, called B-Log, derived from the work of M. Kifer, G. Lausen and J. Wu on F-logic. This logic is built incrementally, highlighting the representation of the different concepts both at the level of the formal syntax and at the level of the semantics. The syntactic notion of database description introduces two approaches to the logical representation of the information contained in a B-Rel database: the semantic approach, characterised by the notion of minimal Herbrand model, and the axiomatic approach, represented by the notion of B-Log theory. The latter approach is based on the B-Log axiomatics, which is introduced in a more general perspective and can serve as a starting point for supporting deduction in a B-Rel database
Transformation d'ontologies basées sur la logique de description : application dans le commerce électronique by Chan Le Duc( Book )

2 editions published in 2004 in French and held by 2 WorldCat member libraries worldwide

This work deals with knowledge formalization for data exchange in Electronic Commerce area. This formalization which is based on Description Logics (DL) is aimed at establishing semantic Transparency for data exchange between different agents. When knowledges are formalized in ontologies of agents, the semantic transparency would be ensured by ontology transformation. Generalized from problems which arise in data exchange models in use, two instances of the semantic transparency problem are identified and formalized as interferences allowing to transform ontologies. The first instance of the problem arises when two agents carry out data exchanges in which their ontologies use different DL. This instance can be reduced to semantic approximation which allows us to compute the “best” approximation in a DL of a concept description in a more expressive DL. Based on an existing algorithm for computing the approximation ALC-ALE, we propose an optimal algorithm which improves the performance in space of the existing algorithm. The second instance of the problem arises when context information is taken into account in data exchange processes. In order to formalize context information implied in actual data models, revision operations and revision rules should be introduced into ontologies. An important part of this work is concentrated on revision problems of DL-based ontologies and develops algorithms for computing revision operations and for performing extension of knowledge base launched by revision rules. The algorithms developed in this work are implemented in ONDIL system supporting design and maintenance of ontologies used in construction sector
Ordonnancement et placement dans les S.G.B.D. parallèles by Christophe Salagnon( Book )

1 edition published in 1994 in French and held by 2 WorldCat member libraries worldwide

We address two fundamental problems of parallel database management systems: the scheduling and the placement of the tasks obtained after parallelising complex queries. We first model, using a graphical formalism, the tasks, their granularity, the synchronisations, the data flow, the volumes processed and certain placement constraints. For scheduling, whose goal is to minimise response time, we propose an algorithm based on simple heuristics. It uses the earliest start dates, computed for an infinite number of processors, together with the tasks' ability to shift in time without increasing the response time. To this we add the possibility of modifying task granularity when processors run short for their execution; this is the originality of our method. We also use a heuristic for placement, whose role is load balancing: the LPT heuristic is applied after gathering the processors' loads. We studied its implementation for shared-memory parallel systems, then for distributed-memory ones. To experiment with our algorithms, we built a prototype on a distributed-memory parallel machine
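The LPT (Longest Processing Time first) placement heuristic mentioned above can be sketched as follows; the function name, task durations and return shape are invented for illustration:

```python
import heapq

def lpt(durations, n_procs):
    """LPT heuristic: assign tasks in decreasing duration order, each to the
    currently least loaded processor. Returns (task -> processor, makespan)."""
    loads = [(0.0, p) for p in range(n_procs)]  # min-heap of (load, processor)
    heapq.heapify(loads)
    assignment = {}
    for task, d in sorted(enumerate(durations), key=lambda x: -x[1]):
        load, p = heapq.heappop(loads)   # least loaded processor so far
        assignment[task] = p
        heapq.heappush(loads, (load + d, p))
    makespan = max(load for load, _ in loads)
    return assignment, makespan
```

For example, durations [7, 5, 4, 3, 2] on two processors balance to loads 10 and 11, which is optimal here since the total work is 21.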
Un manageur de contextes résolvant les conflits entre les points de vue des concepteurs by The Can Do( )

1 edition published in 2019 in English and held by 1 WorldCat member library worldwide

Self-adaptive systems (SAS) have become an essential feature of ubiquitous computing applications: they are capable of dynamically adapting their behavior in response to the user's current situation and needs. This thesis aims at providing several solutions to the challenges in developing self-adaptive systems, such as context modeling, context handling and improving adaptation ability. Our study focuses on two aspects: separating contextual concerns in context modeling, and improving the responsiveness of self-adaptive systems. To separate contextual concerns, we introduce the notion of independent viewpoints and provide a mechanism to use them in the context modeling process. This mechanism also simplifies the designers' task, because it allows each designer to focus only on their domain of expertise when modeling their contextual concern. To improve the responsiveness of a SAS, we provide a new architectural pattern using context-aware management (CAM) to support the self-adaptive system in handling context. In this architecture, the CAM focuses on identifying the current context according to the specific viewpoints it is in charge of, with the goal of deploying the adaptation rules for that situation. The SAS then only has to manage a limited set of rules, already adapted to the current situation. In our approach, each specific viewpoint is built independently by different designers. Each viewpoint is related to a different scenario setup, but several may operate in one system at the same time, so the possibility of conflict between viewpoints always exists. Nevertheless, it is impossible to solve conflicts between viewpoints at design time, because we cannot predict all the users' scenarios and combinations of viewpoints: they depend on location, activity, regulation and the user's choice. 
Therefore, we propose several solutions to detect and resolve conflicts between viewpoints in the context-aware management layer
Modélisation, détection et annotation des états émotionnels à l'aide d'un espace vectoriel multidimensionnel by Imen Tayari Meftah( )

1 edition published in 2013 in French and held by 1 WorldCat member library worldwide

This study focuses on affective computing, in both the modeling and the detection of emotions. Our contributions concern three points. First, we present a generic solution for emotional data exchange between heterogeneous multimodal applications. This proposal is based on a new algebraic representation of emotions and is composed of three distinct layers: the psychological layer, the formal computational layer and the language layer. The first layer represents the psychological theory adopted in our approach, namely Plutchik's theory. The second layer is based on a formal multidimensional model that matches the psychological approach of the previous layer. The final layer uses XML to generate the final emotional data to be transferred through the network. We demonstrate the effectiveness of our model in representing an infinity of emotions, modeling not only basic emotions (e.g. anger, sadness, fear) but also complex emotions like simulated and masked emotions. Moreover, our proposal provides powerful mathematical tools for analyzing and processing these emotions, and it enables the exchange of emotional states regardless of the modalities and sensors used in the detection step. The second contribution is a new monomodal method for recognizing emotional states from physiological signals. The proposed method uses signal-processing techniques to analyze physiological signals and consists of two main steps: training and detection. In the first step, our algorithm extracts the features of emotion from the data to generate an emotion training database. In the second step, we apply a k-nearest-neighbor classifier to assign the predefined classes to instances in the test set. The final result is an eight-component vector representing the felt emotion in a multidimensional space. 
The third contribution is a multimodal approach to emotion recognition that integrates information coming from different cues and modalities, based on our formal multidimensional model. Experimental results show how the proposed approach increases recognition rates compared with the unimodal approach. Finally, we integrated our study into an automatic tool for the prevention and early detection of depression using physiological sensors. It consists of two main steps: capturing physiological features and analyzing the emotional information. The first step detects the emotions felt throughout the day; the second analyzes this emotional information to prevent depression
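The k-nearest-neighbor detection step described above can be sketched roughly as follows, assuming each example has already been reduced to a numeric feature vector extracted from the physiological signals (the feature values, labels and function name below are invented):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs.
    Returns the majority label among the k training vectors closest to
    `query` under Euclidean distance."""
    nearest = sorted(train, key=lambda ex: math.dist(ex[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

In the thesis's setting the labels would be the predefined emotion classes and the vectors the features extracted in the training step; here they are placeholders.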
OntoApp : une approche déclarative pour la simulation du fonctionnement d'un logiciel dès une étape précoce du cycle de vie de développement by Tuan Anh Pham( )

1 edition published in 2017 in French and held by 1 WorldCat member library worldwide

In this thesis, we study several models of collaboration between software engineering and the Semantic Web. Starting from the state of the art, we propose an approach to using an ontology in the business layer of an application. The main objective of our work is to give the developer tools to design, in a declarative manner, an "executable" business layer of an application, in order to simulate its operation and thus show the compliance of the application with the customer requirements defined at the beginning of the software life cycle. Another advantage of this approach is that it allows the developer to share and reuse the business-layer description of a typical application in a domain by means of an ontology. This typical application description is called an "Application Template". Reusing the business-layer description of an application is an interesting aspect of software engineering, and it is the key point we consider in this thesis. In the first part of the thesis, we deal with the modeling of the business layer. We first present an ontology-based approach to represent business processes and business rules, and show how to verify the consistency of a business process and of a set of business rules. We then present an automatic mechanism for checking the compliance of a business process with a set of business rules. The second part of the thesis defines a methodology, called personalization, for creating an application from an "Application Template". This methodology allows the user to create his own application from an Application Template while avoiding deadlocks and semantic errors. At the end of this part we describe an experimental platform that illustrates the feasibility of the mechanisms proposed in the thesis; this platform is built on a relational DBMS. Finally, a last chapter presents the conclusion, perspectives and other annexed work developed during this thesis
La vérification de patrons de workflow métier basés sur les flux de contrôle : une approche utilisant les systèmes à base de connaissances by Thi Hoa Hue Nguyen( )

1 edition published in 2015 in English and held by 1 WorldCat member library worldwide

This thesis tackles the problem of modelling semantically rich business workflow templates and proposes a process for developing such templates. The objective is to transform a business process into a control-flow-based business workflow template that guarantees syntactic and semantic validity. The main challenges are: (i) to define a formalism for representing business processes; (ii) to establish automatic control mechanisms that ensure the correctness of a business workflow template based on a formal model and a set of semantic constraints; and (iii) to organize the knowledge base of workflow templates for a workflow development process. We propose a formalism that combines control flow (based on Coloured Petri Nets (CPNs)) with semantic constraints to represent business processes. The advantage of this formalism is that it allows not only syntactic checks based on the CPN model, but also semantic checks based on Semantic Web technologies. We start by designing an OWL ontology, called the CPN ontology, to represent the concepts of CPN-based business workflow templates. The design phase is followed by a thorough study of the properties of these templates in order to transform them into a set of axioms for the CPN ontology. In this formalism, a business process is syntactically transformed into an instance of the CPN ontology. Syntactic checking of a business process thus becomes simply verification by inference, using the concepts and axioms of the CPN ontology, on the corresponding instance
Knowledge Discovery for Avionics Maintenance : An Unsupervised Concept Learning Approach by Luis Palacios Medinacelli( )

1 edition published in 2019 in English and held by 1 WorldCat member library worldwide

In this thesis we explore the problem of signature analysis in avionics maintenance: identifying failures in faulty equipment and suggesting corrective actions to resolve them. The thesis takes place in the context of a CIFRE agreement between Thales R&T and Université Paris-Sud, so it has both a theoretical and an industrial motivation. The signature of a failure provides all the information necessary to understand, identify and ultimately repair the failure; when identifying a signature, it is therefore important to make it explainable. We propose an ontology-based approach to model the domain, providing a level of automatic interpretation of the highly technical tests performed on the equipment. Once the tests can be interpreted, corrective actions are associated with them. The approach is rooted in concept learning, used to approximate description logic (DL) concepts that represent the failure signatures. Since these signatures are not known in advance, an unsupervised learning algorithm is required to compute the approximations. In our approach, the learned signatures are provided as DL definitions, which in turn are associated with a minimal set of axioms in the A-Box; these serve as explanations for the discovered signatures, yielding a glass-box approach that traces how and why a signature was obtained. Current concept learning techniques are either designed for supervised learning problems or rely on frequent patterns and large amounts of data. We take a different perspective and rely on a bottom-up construction of the ontology. As in other approaches, the learning process is driven by a refinement operator that traverses the space of concept expressions, but an important difference is that in our algorithms this search is guided by the information of the individuals in the ontology.
To this end, the notions of justification in ontologies, most specific concepts and concept refinements are revised and adapted to our needs. The approach is then adapted to the specific avionics maintenance case at Thales Avionics, where a prototype has been implemented to test and evaluate it as a proof of concept.
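The bottom-up flavour of this learning can be suggested by a crude most-specific-concept computation: approximating a failure "signature" as the assertions shared by all individuals in a cluster. This is a naive stand-in for the DL refinement operator described above, and the unit and test names are hypothetical.

```python
# Illustrative sketch: intersect the assertions of a set of individuals
# to obtain a crude most-specific common generalization (a toy "signature").

def most_specific_common(individuals):
    """Return the set of assertions shared by every individual."""
    its = iter(individuals.values())
    common = set(next(its))
    for assertions in its:
        common &= set(assertions)
    return common

faulty_units = {
    "unit1": {"test_A_failed", "voltage_low", "temp_ok"},
    "unit2": {"test_A_failed", "voltage_low", "temp_high"},
}
signature = most_specific_common(faulty_units)
```

The shared assertions (`test_A_failed`, `voltage_low`) play the role of the learned signature, while the assertions that produced it act as its explanation.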
Découverte de règles d'association multi-relationnelles à partir de bases de connaissances ontologiques pour l'enrichissement d'ontologies by Duc Minh Tran( )

1 edition published in 2018 in English and held by 1 WorldCat member library worldwide

In the Semantic Web context, OWL ontologies represent explicit domain knowledge based on the conceptualization of domains of interest, while the corresponding assertional knowledge is given by RDF data referring to them. In this thesis, building on ideas from ILP, we aim to discover hidden knowledge patterns in the form of multi-relational association rules by exploiting the evidence coming from the assertional data of ontological knowledge bases. Specifically, the discovered rules are coded in SWRL so that they can easily be integrated within the ontology, thus enriching its expressive power and augmenting the assertional knowledge that can be derived. Two algorithms applied to populated ontological knowledge bases are proposed for finding rules with high inductive power: (i) a level-wise generate-and-test algorithm and (ii) an evolutionary algorithm. We performed experiments on publicly available ontologies, validating the performance of our approach and comparing it with the main state-of-the-art systems. In addition, we carried out a comparison of popular asymmetric metrics, originally proposed for scoring association rules, as building blocks for the fitness function of the evolutionary algorithm, in order to select metrics suited to the data semantics. To improve the system's performance, we also proposed an algorithm that computes the metrics directly instead of querying via SPARQL-DL.
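The generate-and-test scoring step can be sketched over a toy assertional dataset: a candidate rule body(x) ⇒ head(x) is scored by support and confidence, the kind of metric such a level-wise search relies on. The triples and individuals are invented, and real systems evaluate rule bodies with conjunctive queries rather than a single pattern.

```python
# Hypothetical sketch: score a candidate association rule over a tiny
# set of (subject, predicate, object) assertions.

triples = {
    ("alice", "worksFor", "acme"), ("alice", "type", "Employee"),
    ("bob", "worksFor", "acme"), ("bob", "type", "Employee"),
    ("carol", "worksFor", "acme"),
}

def matches(pattern):
    """Individuals x for which a triple (x, p, o) is asserted."""
    p, o = pattern
    return {s for s, pp, oo in triples if pp == p and oo == o}

def score(body, head):
    """Support and confidence of the rule body(x) => head(x)."""
    body_set, head_set = matches(body), matches(head)
    support = len(body_set & head_set)
    confidence = support / len(body_set) if body_set else 0.0
    return support, confidence

sup, conf = score(("worksFor", "acme"), ("type", "Employee"))
```

Here the rule "whoever works for acme is an Employee" has support 2 and confidence 2/3, since carol matches the body but not the head.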
Évaluation de l'effectivité des systèmes ambiants by Gérald Rocher( )

1 edition published in 2020 in French and held by 1 WorldCat member library worldwide

Once confined to a closed and controlled environment in which external disturbances could be neglected, information processing is now exposed to the complexity and hazards of the physical environment, which is open and uncontrolled. Indeed, as a result of progress in wireless communication, energy storage and the miniaturization of computer components, the fusion of the physical and digital worlds is a reality, embodied in so-called ambient systems. At the heart of these systems, everyday objects are transcended by computing and electronic means of information processing (actuators, sensors, processors, etc.), offering new perspectives on interactions between the physical and digital worlds. This evolution, however, calls for an epistemological break. Because such systems are complex through their fusion with the physical environment, it is no longer a matter of predicting their behaviour in silico from models built on knowledge assumed to be complete and reliable. On the contrary, aware of the intrinsic complexity of these systems and of the impossibility of obtaining a reliable model, one must ensure their effectiveness in vivo. Indeed, without a reliable model of these systems on which to build control laws, their behaviour is likely to drift until it no longer produces the expected effects. This thesis proposes a solution to this problem through a methodological approach based on systemic principles. Following this approach, the systemic model of an ambient system answers the question "what does the ambient system have to do?". From this model and the observation of the effects produced in the environment, the notion of effectiveness is then formalized within the framework of measure theory. A set of measures is applied to this formalization (probabilities, possibilities and belief functions in the framework of the transferable belief model) and the results are discussed. The results of this work open up numerous perspectives.
In the context of agile software development methods for ambient systems, the evaluation of effectiveness can be part of a continuous testing process. In self-adaptive systems, it can be used as an indicator of reward, error, etc. When the systemic model represents the preferred behaviour of an ambient system, the evaluation of effectiveness serves as an indicator of the quality of the experience of users interacting with the system.
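The idea of effectiveness as a normalized measure over expected effects can be sketched in a few lines. The thesis formalizes this within measure theory, with probabilities, possibilities and belief functions; the sketch below covers only a simple additive special case, and the effect names are invented.

```python
# Toy sketch: effectiveness as the weighted share of expected
# environmental effects actually observed.

def effectiveness(expected_weights, observed):
    """Return a score in [0, 1]; 1 means every expected effect occurred."""
    total = sum(expected_weights.values())
    achieved = sum(w for effect, w in expected_weights.items()
                   if effect in observed)
    return achieved / total

expected = {"light_on": 0.5, "blinds_closed": 0.3, "hvac_adjusted": 0.2}
score = effectiveness(expected, {"light_on", "hvac_adjusted"})
```

With the blinds left open, the system scores 0.7: such a degraded score is the kind of drift indicator the thesis proposes to monitor in vivo.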
Aide à la création et à l'exploitation de réglementations basée sur les modèles et techniques du Web sémantique by Khalil Riad Bouzidi( )

1 edition published in 2013 in French and held by 1 WorldCat member library worldwide

Regulations in the building industry are becoming increasingly complex and involve more than one technical area. They cover products, components and project implementation. They also play an important role in ensuring the quality of a building and minimizing its environmental impact. For more than 30 years, CSTB has proved its expertise in this field through the development of the complete encyclopedia of French technical and regulatory texts in the building domain: the REEF. In the framework of a collaboration between CSTB and the I3S laboratory, we are carrying out research on the acquisition of knowledge from the technical and regulatory information contained in the REEF and on the automated processing of this knowledge, with the final goal of assisting professionals in the use of these texts and in the creation of new ones. We are implementing this work at CSTB to help industrial actors write Technical Assessments. The problem is how to specify these assessments and standardize their structure using models and adaptive semantic services. The research communities of Knowledge Engineering and the Semantic Web play a key role in providing the models and techniques relevant to our research, whose main objective is to simplify access to technical regulatory information, to support professionals in its implementation, and to facilitate the writing of new regulations while taking into account the constraints expressed in the existing regulatory corpus. We focus on Technical Assessments based on technical guides that capture both the regulations and the knowledge of the CSTB experts who produce these documents. A Technical Assessment (in French: Avis Technique, or ATec) is a document containing technical information on the usability of a product, material, component or element of construction that has an innovative character. We chose the Technical Assessment as a case study because CSTB has full mastery of, and wide experience with, this kind of technical document.
We are particularly interested in modelling the regulatory constraints derived from the Technical Guides used to validate the Assessments. These Guides are regulatory complements offered by CSTB to the various industrial actors to enable easier reading of the technical regulations. They collect execution details covering a wide range of possible implementation situations. Our work aims to formalize the Technical Guides in a machine-processable model in order to assist the creation of Technical Assessments by automating their validation. For this purpose, we first constructed a domain ontology, which defines the main concepts involved in the Technical Guides. This ontology, called "OntoDT", is coupled with domain thesauri; several are being developed at CSTB, among which the most relevant, by its volume and its semantic approach, is the thesaurus from the REEF project. Our second contribution is the use of the SBVR standard (Semantics of Business Vocabulary and Business Rules) and SPARQL to reformulate the regulatory requirements of the guides in both a controlled and a formal language. Third, our model incorporates expert knowledge about the verification process of Technical Documents. We have organized the SPARQL queries representing regulatory constraints into several processes: each component involved in the Technical Document corresponds to an elementary compliance-checking process, which contains a set of SPARQL queries for checking the compliance of that elementary component. A full, complex process for checking a Technical Document is defined recursively and built automatically as a set of elementary processes relative to the components that have a semantic definition in OntoDT. Finally, we represent in RDF the association between the SBVR rules and the SPARQL queries expressing the same regulatory constraints, and we use these annotations to produce a compliance report in natural language that assists users in writing Technical Assessments.
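The recursive composition of elementary checks described above can be sketched abstractly. In the actual system each elementary check is a set of SPARQL queries over OntoDT instances; here each one is reduced to a plain predicate, and the component names are invented.

```python
# Sketch: a complex compliance check over a document is built recursively
# from elementary checks on its components.

def check_component(component, checks, subcomponents):
    """A component complies if its own elementary check passes and all
    of its subcomponents comply (recursively)."""
    if not checks[component]():
        return False
    return all(check_component(c, checks, subcomponents)
               for c in subcomponents.get(component, []))

checks = {
    "facade": lambda: True,
    "panel": lambda: True,
    "fixing": lambda: False,  # this elementary check fails
}
subcomponents = {"facade": ["panel", "fixing"]}
result = check_component("facade", checks, subcomponents)
```

The failing `fixing` check propagates upward, so the whole `facade` document is reported non-compliant; annotating each elementary failure is what enables the natural-language compliance report.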
 
Audience level: 0.90 (from 0.86 for ORDONNANCE ... to 0.99 for Contributi ...)

Alternative Names
Le-Thanh, Nhan

Languages
French (17)

English (10)