WorldCat Identities

Caron, Eddy (1972-....).

Overview
Works: 21 works in 26 publications in 2 languages and 29 library holdings
Roles: Thesis advisor, Other, Opponent, Author
Publication Timeline
Most widely held works by Eddy Caron
Peer-to-peer prefix tree for large scale service discovery by Cédric Tedeschi( Book )

2 editions published in 2008 in English and held by 2 WorldCat member libraries worldwide

This thesis addresses the service discovery issue on large and dynamic platforms. The DLPT (Distributed Lexicographic Placement Table) approach, a service discovery solution based on a prefix tree supporting multi-attribute searches, is proposed. It includes efficient mapping and load balancing. For fault tolerance, we propose best-effort protocols. A first protocol reconnects and reorders disconnected subtrees after crashes. This first approach makes several hypotheses and is not self-stabilizing, i.e., it is unable to recover from an arbitrary configuration. A second protocol maintains a prefix tree relying on message passing. We studied the DLPT to support network provisioning and developed a prototype. Preliminary experiments were conducted on the Grid'5000 platform.
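The lexicographic prefix-tree lookup at the heart of the DLPT can be sketched as follows (an illustrative, centralized trie in Python with hypothetical service names; the actual DLPT distributes these nodes over peers and adds mapping, load balancing, and fault tolerance):

```python
class PrefixTreeNode:
    """One node of a lexicographic prefix tree (trie)."""
    def __init__(self):
        self.children = {}   # next character -> child node
        self.services = []   # providers registered under this exact key

def register(root, key, provider):
    """Insert a service name, creating intermediate nodes as needed."""
    node = root
    for ch in key:
        node = node.children.setdefault(ch, PrefixTreeNode())
    node.services.append(provider)

def discover(root, prefix):
    """Return every provider whose service name starts with `prefix`."""
    node = root
    for ch in prefix:
        if ch not in node.children:
            return []
        node = node.children[ch]
    found, stack = [], [node]
    while stack:          # collect the whole subtree under the prefix
        n = stack.pop()
        found.extend(n.services)
        stack.extend(n.children.values())
    return found

root = PrefixTreeNode()
register(root, "dgemm", "node-a")
register(root, "dgemv", "node-b")
register(root, "fft", "node-c")
print(sorted(discover(root, "dge")))   # ['node-a', 'node-b']
```

A multi-attribute search then amounts to intersecting the results of several such prefix walks, one per attribute.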
Performance et fiabilité des protocoles de tolérance aux fautes by Divya Gupta( )

1 edition published in 2016 in English and held by 2 WorldCat member libraries worldwide

In the modern era of on-demand ubiquitous computing, where applications and services are deployed in well-provisioned, well-managed infrastructures administered by large cloud providers such as Amazon, Google, Microsoft, Oracle, etc., performance and dependability of the systems have become primary objectives. The evolution of cloud computing has made Quality-of-Service (QoS) factors such as availability, reliability, liveness, safety, and security essential to the complete definition of a system. Indeed, computing systems must be resilient in the presence of failures and attacks to prevent inaccessibility, which can lead to expensive maintenance costs and loss of business. With the growing number of components in cloud systems, faults occur more commonly, resulting in frequent cloud outages and failure to guarantee the QoS. Cloud providers have seen episodic incidents of arbitrary (i.e., Byzantine) faults, where systems demonstrate unpredictable behavior, including incorrect responses to client requests, sending corrupt messages, intentionally delaying messages, disobeying the ordering of requests, etc. This has led researchers to extensively study Byzantine Fault Tolerance (BFT) and propose numerous protocols and software prototypes. These BFT solutions not only provide consistent and available services despite arbitrary failures, they also intend to reduce the cost and performance overhead incurred by the underlying systems. However, BFT prototypes have been evaluated in ad-hoc settings, considering either ideal conditions or very limited faulty scenarios. This fails to convince practitioners to adopt BFT protocols in distributed systems. Some argue over the applicability of expensive and complex BFT to tolerate arbitrary faults, while others are skeptical of the adeptness of BFT techniques.
This thesis addresses precisely this problem and presents a comprehensive benchmarking environment which eases the setup of execution scenarios to analyze and compare the effectiveness and robustness of existing BFT proposals. Specifically, the contributions of this dissertation are as follows. First, we introduce a generic architecture for benchmarking distributed protocols. This architecture comprises reusable components for building a benchmark for performance and dependability analysis of distributed protocols. The architecture allows defining workload and faultload, and their injection. It also produces performance, dependability, and low-level system and network statistics. Furthermore, the thesis presents the benefits of a general architecture. Second, we present BFT-Bench, the first BFT benchmark, for analyzing and comparing representative BFT protocols under identical scenarios. BFT-Bench allows end users to evaluate different BFT implementations under user-defined faulty behaviors and varying workloads. It automatically deploys these BFT protocols in a distributed setting, with the ability to monitor and report performance and dependability aspects. In our results, we empirically compare some existing state-of-the-art BFT protocols in various workload and fault scenarios with BFT-Bench, demonstrating its effectiveness in practice. Overall, this thesis aims to make BFT benchmarking easy to adopt by developers and end users of BFT protocols. The BFT-Bench framework intends to help users perform efficient comparisons of competing BFT implementations and to incorporate effective solutions to the loopholes detected in BFT prototypes. Furthermore, this dissertation strengthens the belief in the need for BFT techniques to ensure correct and continued progress of distributed systems during critical fault occurrences.
Vers des communications anonymes et efficaces by Gautier Berthou( )

1 edition published in 2014 in French and held by 2 WorldCat member libraries worldwide

This thesis focuses on information dissemination in computer networks. We study two aspects of this topic: anonymous communication on the Internet in the presence of rational nodes, and uniform total order broadcast in a computer cluster. Concerning the first aspect, we observed that no existing anonymous communication protocol is capable of both working in the presence of rational nodes and scaling. Therefore, we proposed RAC, the first anonymous communication protocol that functions in the presence of rational nodes and is able to scale. Concerning the second aspect, we observed that no existing uniform total order broadcast protocol is capable of ensuring both a good latency and an optimal throughput. To fill this gap we proposed FastCast, the first uniform total order broadcast protocol that provides both.
Calcul numérique sur données de grande taille by Eddy Caron( Book )

2 editions published in 2000 in French and held by 2 WorldCat member libraries worldwide

Gestion du patrimoine logiciel et Cloud Computing by Anne-Lucie Vion( )

1 edition published in 2018 in English and held by 2 WorldCat member libraries worldwide

In the Cloud, little work addresses the analysis of the real, dynamic usage of the software consumed, in order to determine the actual costs incurred and compliance with the rights acquired from the providers of these resources. Yet the emergence of the Software Asset Management (SAM) practice reflects the growing concern of industry and 'Telcos' (telecommunications companies) faced with the complexity of licensing models in virtualized environments that are upending how we use software. Software publishers often respond by encouraging customers to stop tracking license consumption, through expensive unlimited-use contracts that make any cost-control policy impossible. For end users as well as for cloud service providers, it becomes imperative to control and optimize license deployment in the Cloud. The objective becomes to track software needs as close to real time as possible, then to generate optimization scenarios based on the evolution of consumption, modeling the associated real costs. This represents a considerable source of savings for all actors in the software life cycle. The study context covers the whole scope of the Cloud (applications, platforms, infrastructures, and networks). The work presented here strives to reconstruct the entire software life cycle, from purchase to uninstallation, integrating the constraints related to the software's nature or its usage.
We propose to resolve the major obstacle of identifying software and its usage rights through the creation and tracking of a tag. We also propose an innovative model relying on a graph database, which captures the instantaneity of configuration changes and takes into account the different responsibilities implied by the service levels offered, while providing the flexibility needed to support both classical and usage-based licensing models. Two use cases are considered to judge the relevance of the proposed models: license management in a Platform as a Service (PaaS) context and in a network virtualization (NFV) case.
Contribution to the Deployment of a Distributed and Hierarchical Middleware Applied to Cosmological Simulations by Benjamin Depardon( Book )

2 editions published in 2010 in English and held by 2 WorldCat member libraries worldwide

The results presented in this thesis deal with the execution of applications on heterogeneous and distributed environments: computing grids. We study, from end to end, the process allowing users to execute complex scientific applications. The contributions of this work are thus manifold. 1) Hierarchical middleware deployment: we first present an execution model for hierarchical middleware. Then, based on this model, we present several heuristics to automatically determine the shape of the hierarchy that best fits the users' needs, depending on the platform it is executed on. We evaluate the quality of the approach on a real platform using the DIET middleware. 2) Graph clustering: we propose a distributed and self-stabilizing algorithm for clustering weighted graphs. Clustering is done based on a distance metric between nodes: within each created cluster, the nodes are no farther than a distance k from an elected leader of the cluster. 3) Scheduling: we study the scheduling of independent tasks under resource usage limitations. We define linear programs to solve this problem in two cases: when all tasks arrive at the same time, and when release dates are considered. 4) Cosmological simulations: we studied the behavior of the applications required to run cosmological simulation workflows. Then, based on the DIET grid middleware, we implemented a complete infrastructure allowing non-expert users to easily submit cosmological simulations to a computing grid.
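The clustering invariant of contribution 2 (every node within weighted distance k of its cluster's elected leader) can be illustrated with a centralized greedy sketch; the thesis algorithm is distributed and self-stabilizing, so this only demonstrates the resulting property, not the method:

```python
import heapq

def k_clustering(graph, k):
    """Greedy illustration of the k-distance clustering invariant:
    every node ends up within weighted distance k of its cluster leader.
    `graph` maps node -> {neighbor: edge_weight}."""
    unassigned = set(graph)
    clusters = {}
    while unassigned:
        leader = min(unassigned)            # deterministic leader choice
        # Dijkstra limited to radius k around the leader
        dist = {leader: 0}
        heap = [(0, leader)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue
            for v, w in graph[u].items():
                nd = d + w
                if nd <= k and nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        members = {u for u in dist if u in unassigned}
        clusters[leader] = members
        unassigned -= members
    return clusters

g = {
    "a": {"b": 1}, "b": {"a": 1, "c": 1},
    "c": {"b": 1, "d": 3}, "d": {"c": 3},
}
print(k_clustering(g, 2))   # "d" is too far from "a" and forms its own cluster
```

With k = 2, nodes a, b, c form one cluster around leader a, while d (at distance 3 from c) becomes its own singleton cluster.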
Découverte automatique des caractéristiques et capacités d'une plate-forme de calcul distribué by Martin Quinson( Book )

2 editions published in 2003 in French and held by 2 WorldCat member libraries worldwide

This thesis is devoted to the monitoring of modern computational platforms in order to obtain relevant, up-to-date, and accurate information about them. Often called grids, these environments differ from preceding parallel machines by their intrinsic heterogeneity and high dynamicity. This document is organized in three parts. The first presents the specific difficulties introduced by this platform, highlighting them in a selection of grid infrastructure projects and detailing the existing solutions. The second part shows how to efficiently obtain quantitative information about the grid's capacities and their suitability to the needs of the routines to schedule. After a discussion of the problems encountered, we detail our approach, which we call macro-benchmarking. We then present FAST, a tool implementing this methodology, and detail how FAST is used in several other projects. The third part introduces how to get a more qualitative view of the grid's characteristics, such as the topology of the network interconnecting the hosts. After a study of the existing solutions in this domain, we present ALNeM, our solution to automatically map the network without relying on specific execution privileges on the platform. This tool is based on GRAS, our framework for the development of grid infrastructure.
Automatic Deployment for Application Service Provider Environments by Pushpinder Kaur Chouhan( Book )

2 editions published in 2006 in English and held by 2 WorldCat member libraries worldwide

The objective of this thesis is to improve the performance of NES (Network Enabled Servers) environments so as to use them efficiently. The first problem relates to scheduling applications on the selected servers. We proposed, and experimentally showed, that deadline scheduling with priority, along with a fallback mechanism, can increase the efficiency of a NES. Another important factor that influences the efficiency of NES environments is how the environment's components are mapped onto the available resources. We showed theoretically that the optimal deployment on a cluster is a complete spanning d-ary tree. Considering heterogeneous resources, we presented a deployment heuristic, as finding the best deployment among heterogeneous resources is NP-complete. Finally, we gave a mathematical model that can analyze an existing deployment and improve its performance by finding and then removing the bottlenecks. The presented algorithms and heuristics are validated by implementing them in DIET, on different sites of Grid'5000.
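A minimal sketch of deadline-driven dispatch with a rejection path (hypothetical task and server data; the thesis's priority handling and fallback mechanism in DIET are more elaborate than this earliest-deadline-first toy):

```python
def deadline_schedule(tasks, servers):
    """Earliest-deadline-first dispatch: each task tries the least-loaded
    server first, then falls back to the others; tasks that cannot meet
    their deadline anywhere are rejected.
    `tasks` is a list of (name, duration, deadline);
    `servers` maps server name -> current finish time."""
    placed, rejected = [], []
    for name, duration, deadline in sorted(tasks, key=lambda t: t[2]):
        for srv in sorted(servers, key=servers.get):  # least loaded first
            finish = servers[srv] + duration
            if finish <= deadline:          # deadline met on this server
                servers[srv] = finish
                placed.append((name, srv, finish))
                break
        else:                               # no server can meet the deadline
            rejected.append(name)
    return placed, rejected

tasks = [("t1", 4, 5), ("t2", 2, 4), ("t3", 3, 5), ("t4", 3, 6)]
servers = {"s1": 0, "s2": 0}
placed, rejected = deadline_schedule(tasks, servers)
print(placed)    # t2, t1, t3 all meet their deadlines
print(rejected)  # ['t4'] - no server can finish it by time 6
```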
Déploiement auto-adaptatif d'intergiciel sur plate-forme élastique by Maurice-Djibril Faye( )

1 edition published in 2015 in French and held by 1 WorldCat member library worldwide

We have studied the means to make a middleware deployment self-adaptive. Our use-case middleware is hierarchical and distributed and can be modeled by a graph: a vertex models a process and an edge models a communication link between two processes. The middleware provides high-performance computing services to users. Once the middleware is deployed on a computing infrastructure such as a grid or a cloud, how does it adapt to changes in a dynamic environment? If the deployment is static, it may be necessary to redo the whole deployment process, which is a costly operation. A better solution is to make the deployment self-adaptive. We have proposed a rule-based self-stabilizing algorithm to manage a faulty deployment. Thus, after the detection of an unstable deployment, caused by transient faults (the joining of new nodes or the deletion of existing nodes, which may modify the deployment topology), the system will eventually recover a stable state, without external help, only by executing the algorithm. We have designed an ad hoc discrete-event simulator to evaluate the proposed algorithm. The simulation results show that a deployment subjected to transient faults which make it unstable adapts itself. Before designing the simulator, we proposed a model to describe a distributed infrastructure, a model to describe hierarchical middleware, and a model to describe a deployment, that is, the mapping between the middleware processes and the hardware on which they run.
Scheduling and deployment of large-scale applications on Cloud platforms by Adrian Muresan( )

1 edition published in 2012 in English and held by 1 WorldCat member library worldwide

The use of Cloud Computing platforms offering Infrastructure as a Service (IaaS) has grown within industry. IaaS infrastructures provide virtual resources from a catalog of predefined types. Advances in virtualization make it possible to create and destroy virtual machines on the fly, with low operational overhead. As a result, the benefit offered by IaaS platforms is the ability to size a virtual architecture on the fly, as it is used, and to pay only for the resources consumed. From a scientific point of view, IaaS platforms raise new questions concerning the efficiency of scaling decisions and the scheduling of applications on dynamic platforms. This thesis explores this theme and proposes solutions to these two problems. The first contribution described in this thesis concerns resource management. We worked on automatic scaling of Cloud client applications in order to model variations in platform usage. Numerous studies have shown self-similarity in the web traffic of such platforms, which implies the existence of repetitive patterns that may or may not be periodic. We developed an automatic scaling strategy capable of predicting platform usage by identifying non-periodic repetitive patterns. Second, we proposed to extend the functionality of a grid middleware by implementing on-demand resource usage. We developed an extension for the DIET (Distributed Interactive Engineering Toolkit) middleware that uses a virtual market to manage resource allocation.
Each user is granted an amount of virtual currency to be used to execute their tasks. This mechanism ensures a fair sharing of the platform's resources among the different users. The third and last contribution targets application management for IaaS platforms. We studied and developed a resource allocation strategy for workflow applications with budget constraints. The workflow abstraction is very common in scientific applications, in fields ranging from geology to bioinformatics. In this work, we considered a general workflow model that contains parallel tasks and allows non-deterministic transitions. We designed two budget-constrained allocation strategies for this type of application. The problem is a bi-criteria optimization, as we optimize both the budget and the total makespan of the workflow. This work was validated experimentally through implementations within the open-source Nimbus Cloud platform and the MADAG workflow engine of DIET. The tests were performed on a cosmological simulation called RAMSES. RAMSES is a parallel application which, in the context of this work, was ported to dynamic virtual platforms. The theoretical and practical results obtained are encouraging and open the way to further improvements.
Scheduling on Clouds considering energy consumption and performance trade-offs : from modelization to industrial applications by Daniel Balouek-Thomert( )

1 edition published in 2016 in English and held by 1 WorldCat member library worldwide

Modern society relies heavily on the use of computational resources. Over the last decades, the number of connected users and devices has dramatically increased, leading to the consideration of decentralized on-demand computing as a utility, commonly named "The Cloud". Numerous fields of application, such as High Performance Computing (HPC), medical research, movie rendering, industrial factory processes, and smart city management, benefit from recent advances in on-demand computation. The maturity of Cloud technologies led to a democratization and to an explosion of connected services for companies, researchers, techies, and even mere mortals, using those resources in a pay-per-use fashion. In particular, the Cloud Computing paradigm has since been adopted in companies. A significant reason is that the hardware running the cloud and processing the data does not reside at a company's physical site, which means that the company does not have to build computer rooms (known as CAPEX, CAPital EXpenditures) or buy equipment, nor fill and maintain that equipment over a normal life cycle (known as OPEX, OPerational EXpenditures). This thesis revolves around the energy efficiency of Cloud platforms by proposing an extensible, multi-criteria framework, which intends to improve the efficiency of a heterogeneous platform from an energy-consumption perspective. We propose an approach based on user involvement, using the notion of a cursor offering the ability to aggregate cloud operator and end-user preferences to establish scheduling policies.
The objective is the right-sizing of active servers and computing equipment while considering operational constraints, thus reducing the environmental impact associated with energy wastage. This research work has been validated through experiments and simulations on the Grid'5000 platform, the biggest shared network in Europe dedicated to research. It has been integrated into the DIET middleware, and an industrial valorisation has been done in the NUVEA commercial platform, designed during this thesis. This platform constitutes an audit and optimization tool of large-scale infrastructures for operators and end users.
Generation and Dynamic Update of Attack Graphs in Cloud Providers Infrastructures by Pernelle Mensah( )

1 edition published in 2019 in English and held by 1 WorldCat member library worldwide

In traditional infrastructures, attack graphs make it possible to paint a picture of security, as they model the different steps followed by an attacker in order to compromise a network asset. These graphs can thus serve as a basis for automated risk assessment, relying on the identification and evaluation of critical assets. This makes it possible to design proactive and reactive countermeasures for risk reduction, and can be used for monitoring and hardening network security. This thesis aims to apply a similar approach in Cloud environments, which implies taking into account the new challenges posed by these modern infrastructures, since the majority of attack graphs were designed for application in traditional environments. The new attack scenarios related to virtualization, as well as the inherent properties of the Cloud, namely elasticity and dynamism, are some of the obstacles to overcome for this purpose. To achieve this goal, a complete inventory of virtualization-related vulnerabilities was carried out, making it possible to include this new dimension in existing attack graphs. Using a model adapted to the scale of the Cloud, we were able to leverage Cloud and SDN technologies to build attack graphs and keep them up to date. Algorithms capable of coping with the frequent modifications occurring in virtualized environments were designed and tested at large scale on a real Cloud platform, in order to evaluate their performance and confirm the validity of the methods proposed in this thesis, allowing the Cloud administrator to maintain an up-to-date attack graph of this environment.
Scheduling Solutions for Data Stream Processing Applications on Cloud-Edge Infrastructure by Felipe Rodrigo De Souza( )

1 edition published in 2020 in English and held by 1 WorldCat member library worldwide

Technology has evolved to a point where applications and devices are highly connected and produce ever-increasing amounts of data used by organizations and individuals to make daily decisions. For the collected data to become information that can be used in decision making, it requires processing. The speed at which information is extracted from data generated by a monitored system or environment affects how fast organizations and individuals can react to changes. One way to process the data under short delays is through Data Stream Processing (DSP) applications. DSP applications can be structured as directed graphs, where the vertices are data sources, operators, and data sinks, and the edges are streams of data that flow throughout the graph. A data source is an application component responsible for data ingestion. Operators receive a data stream, apply some transformation or user-defined function over the data stream, and produce a new output stream, until the latter reaches a data sink, where the data is stored, visualized, or provided to another application.
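The directed-graph structure described above (a source feeding operators feeding a sink, connected by streams) can be sketched with Python generators; the names and the toy transformation are illustrative, not part of the thesis:

```python
def source(records):
    """Data source: ingests raw records into the stream."""
    for r in records:
        yield r

def op_filter(stream, predicate):
    """Operator: keeps only the tuples satisfying the predicate."""
    for item in stream:
        if predicate(item):
            yield item

def op_map(stream, fn):
    """Operator: applies a user-defined function to each tuple."""
    for item in stream:
        yield fn(item)

def sink(stream):
    """Data sink: materializes the final stream (store/visualize)."""
    return list(stream)

# A linear graph: source -> filter -> map -> sink
readings = [3, 18, 7, 42, 11]
pipeline = op_map(op_filter(source(readings), lambda x: x > 10),
                  lambda x: x * 2)
print(sink(pipeline))   # [36, 84, 22]
```

Because generators are lazy, each tuple flows through the whole graph as it is produced, mirroring how a DSP operator processes its input stream incrementally rather than in batch.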
Optimisation du placement des licences logicielles dans le Cloud pour un déploiement économique et efficient by Arthur Chevalier( )

1 edition published in 2020 in English and held by 1 WorldCat member library worldwide

This thesis takes place in the field of Software Asset Management: license management, usage rights, and compliance with contractual rules. When talking about proprietary software, these rules are often misinterpreted or totally misunderstood. In exchange for the freedom to organize our usage as we see fit, in compliance with the contract, publishers have the right to conduct audits. They can check that the rules are being followed and, if they are not, impose penalties, often financial ones. This can lead to disastrous situations, such as the lawsuit between AbInBev and SAP, where the latter claimed a USD 600 million penalty. The emergence of the Cloud has greatly amplified the problem, because software usage rights were not originally designed for this type of architecture. After an academic and industrial history of Software Asset Management (SAM), from its roots to the most recent work on the Cloud and software identification, we look at the licensing methods of major publishers such as Oracle, IBM, and SAP, before introducing the various problems inherent in SAM. The lack of standardization in metrics, specific usage rights, and the difference in paradigm brought about by the Cloud, and soon the virtualized network, make the situation more complicated than it already was. Our research is oriented towards modeling these licenses and metrics in order to abstract away the legal and blurry side of contracts. This abstraction allows us to develop software placement algorithms that ensure contractual rules are respected at all times. This licensing model also allows us to introduce a deployment heuristic that optimizes several criteria at software placement time, such as performance, energy, and license cost. We then introduce the problems associated with deploying multiple software products at the same time while optimizing these same criteria, and prove the NP-completeness of the associated decision problem.
In order to meet these criteria, we present a placement algorithm that approaches the optimum and uses the above heuristic. In parallel, we have developed a SAM tool that builds on this research to offer automated and totally generic software management in a Cloud architecture. All this work has been conducted in collaboration with Orange and tested in different proofs of concept before being fully integrated into the SAM tool.
Solutions parallèles efficaces sur le modèle CGM d'une classe de problèmes issus de la programmation dynamique by Vianney Kengne Tchendji( )

1 edition published in 2014 in French and held by 1 WorldCat member library worldwide

Several factors lead designers of parallel architectures to converge on coarse-grained multiprocessor systems. However, most parallel software has been designed for fine-grained parallel systems and for systems with shared memory. In this thesis, we use the BSP/CGM (Bulk Synchronous Parallel / Coarse-Grained Multicomputer) parallel computing model, designed to close the gap between software and hardware, to provide parallel solutions to a class of dynamic programming problems. This is the class of polyadic non-serial dynamic programming problems, which is characterized by very strong dependencies between computations. This class includes, for example, the Matrix Chain Ordering Problem, the Triangulation of a Convex Polygon problem, and the Optimal Binary Search Tree problem. To do this, we start by carrying out a detailed study of the design tool for our solutions, i.e., the BSP/CGM parallel computing model. Then, we present some of the problems of the studied class and some sequential algorithms to solve them. After that, we propose a load-balancing mechanism for an existing generic BSP/CGM algorithm which solves all the problems of the class discussed. From this algorithm, we propose a new generic solution with better performance. Finally, we propose two BSP/CGM algorithms for typical problems of the class. These algorithms are based on so-called accelerated sequential solutions, and they perform better than the first ones.
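One member of this problem class, the Matrix Chain Ordering Problem, admits the classic O(n^3) sequential dynamic program that such BSP/CGM algorithms parallelize; a minimal sketch:

```python
def matrix_chain_cost(dims):
    """Minimum number of scalar multiplications to compute A1..An,
    where matrix Ai has shape dims[i-1] x dims[i].  Classic O(n^3) DP
    over chain lengths; m[i][j] is the best cost for the chain Ai..Aj."""
    n = len(dims) - 1
    m = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):           # chain length
        for i in range(n - length + 1):
            j = i + length - 1
            m[i][j] = min(
                m[i][k] + m[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)         # split point
            )
    return m[0][n - 1]

# Shapes 10x30, 30x5, 5x60: best order is (A1 A2) A3 = 1500 + 3000
print(matrix_chain_cost([10, 30, 5, 60]))   # 4500
```

The non-serial, polyadic dependency pattern is visible in the recurrence: each entry m[i][j] depends on a whole row and column of previously computed entries, which is exactly what makes load balancing on BSP/CGM non-trivial.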
Allocation dynamique sur cloud IaaS : allocation dynamique d'infrastructure de SI sur plateforme de cloud avec maîtrise du compromis coûts/performances by Etienne Michon( )

1 edition published in 2015 in French and held by 1 WorldCat member library worldwide

In the field of cloud computing, IaaS providers offer virtualized on-demand computing resources on a pay-per-use model. From the user's point of view, the cloud provides an inexhaustible supply of resources, which can be dynamically claimed and released. IaaS is especially useful for executing scientific computations using an operating budget instead of a big initial investment. Provisioning the resources depending on the workload is an important challenge, especially regarding the large number of jobs and resources to take into account, but also the large number of available platforms and economic models. We advocate the need for brokers on the client side with two main capabilities: (1) automating the provisioning depending on the strategy selected by the user, and (2) simulating an execution in order to provide the user with an estimate of the cost and duration of their workload's execution. Many provisioning strategies and cloud providers can be used in this broker thanks to its open architecture. Large-scale experiments have been conducted on many cloud platforms and show our tool's ability to execute different kinds of workloads on various platforms and to simulate these executions with high accuracy.
FreeCore : un système d'indexation de résumés de document sur une Table de Hachage Distribuée (DHT) by Bassirou Ngom( )

1 edition published in 2018 in French and held by 1 WorldCat member library worldwide

This thesis examines the problem of indexing and searching in Distributed Hash Table (DHT). It provides a distributed system for storing document summaries based on their content. Concretely, the thesis uses Bloom filters (BF) to represent document summaries and proposes an efficient method for inserting and retrieving documents represented by BFs in an index distributed on a DHT. Content-based storage has a dual advantage. It allows to group similar documents together and to find and retrieve them more quickly at the same by using Bloom filters for keywords searches. However, processing a keyword query represented by a Bloom filter is a difficult operation and requires a mechanism to locate the Bloom filters that represent documents stored in the DHT. Thus, the thesis proposes in a second time, two Bloom filters indexes schemes distributed on DHT. The first proposed index system combines the principles of content-based indexing and inverted lists and addresses the issue of the large amount of data stored by content-based indexes. Indeed, by using Bloom filters with long length, this solution allows to store documents on a large number of servers and to index them using less space. Next, the thesis proposes a second index system that efficiently supports superset queries processing (keywords-queries) using a prefix tree. This solution exploits the distribution of the data and proposes a configurable distribution function that allow to index documents with a balanced binary tree. In this way, documents are distributed efficiently on indexing servers. In addition, the thesis proposes in the third solution, an efficient method for locating documents containing a set of keywords. Compared to solutions of the same category, the latter solution makes it possible to perform subset searches at a lower cost and can be considered as a solid foundation for supersets queries processing on over-dht index systems. 
Finally, the thesis proposes a prototype of a peer-to-peer system for content indexing and keyword search. This prototype, ready to be deployed in a real environment, was evaluated with the PeerSim simulator, which allowed the theoretical performance of the algorithms developed throughout the thesis to be measured
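The Bloom-filter machinery described above can be sketched in a few lines. This is an illustrative toy (the thesis's actual filter lengths, hash functions, and DHT layout are not specified here): a document's keyword set is summarised as a bit array, and a keyword query becomes a bitwise subset test between two filters.

```python
import hashlib

class BloomFilter:
    """Fixed-size bit array; k hash positions derived from SHA-256."""
    def __init__(self, m=256, k=4):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        # No false negatives; false positives possible.
        return all(self.bits >> p & 1 for p in self._positions(item))

    def covers(self, other):
        """True if every bit of `other` is set in self: a necessary
        condition for self's keyword set to be a superset of other's."""
        return self.bits & other.bits == other.bits

# Summarise a document's keywords, then test a two-keyword query.
doc = BloomFilter()
for kw in ["dht", "bloom", "index"]:
    doc.add(kw)

query = BloomFilter()
query.add("dht")
query.add("bloom")

print(doc.covers(query))  # True: doc may contain all query keywords
```

A superset query over the index then reduces to finding, among the stored filters, those whose bit patterns cover the query's pattern; the prefix tree in the thesis serves to locate candidate filters without scanning them all.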
Security for Virtualized Distributed Systems : from Modelization to Deployment by Arnaud Lefray( )

1 edition published in 2015 in English and held by 1 WorldCat member library worldwide

This thesis deals with security for virtualized distributed environments such as Clouds. In these environments, a client can access resources or services (compute, storage, etc.) on demand without prior knowledge of the underlying infrastructure. These services are low-cost due to the pooling of resources; as a result, clients share a common infrastructure. However, the concentration of businesses and critical data makes Clouds more attractive to malicious users, especially when considering new attack vectors between tenants. Nowadays, Cloud providers offer default security or security by design that does not fit tenants' custom needs. This gap allows for multiple attacks (data theft, malicious usage, etc.). In this thesis, we propose a user-centric approach in which a tenant models both its security needs, as high-level properties, and its virtualized application. These security objectives are based on a new logic dedicated to expressing system-based information-flow properties. We then propose security-aware algorithms to automatically deploy the application and enforce the security properties. Enforcement can be realized by taking shared resources into account during placement decisions and/or through the configuration of existing security mechanisms
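To make the placement idea concrete, here is a minimal sketch, not the thesis's actual algorithm: tenants' high-level isolation properties are reduced to pairwise "must not share a host" constraints, and a first-fit placement refuses any assignment that would violate them.

```python
# Illustrative first-fit placement honouring pairwise isolation
# constraints (names and the constraint encoding are hypothetical).

def place(vms, hosts, conflicts):
    """vms: list of VM ids; hosts: {host: capacity};
    conflicts: set of frozensets {vm_a, vm_b} that must not co-reside."""
    assignment = {h: [] for h in hosts}
    for vm in vms:
        for host, placed in assignment.items():
            if len(placed) >= hosts[host]:
                continue  # host is full
            if any(frozenset({vm, other}) in conflicts for other in placed):
                continue  # would violate an isolation property
            placed.append(vm)
            break
        else:
            raise RuntimeError(f"no host satisfies constraints for {vm}")
    return assignment

# Tenant A's VMs must not share a host with tenant B's VM.
layout = place(
    vms=["a1", "a2", "b1"],
    hosts={"h1": 2, "h2": 2},
    conflicts={frozenset({"a1", "b1"}), frozenset({"a2", "b1"})},
)
print(layout)  # {'h1': ['a1', 'a2'], 'h2': ['b1']}
```

In the thesis the properties are richer (information-flow logic rather than simple co-residency bans), but the shape of the enforcement, constraints steering placement, is the same.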
Services auto-adaptatifs pour les grilles pair-à-pair by Bassirou Gueye( )

1 edition published in 2016 in French and held by 1 WorldCat member library worldwide

Managing globally distributed resources across multiple virtual organizations raises numerous challenges. In this thesis, we propose a model for the dynamic management of services in a large-scale peer-to-peer grid environment. This model, named P2P4GS, is original in that it does not tie the peer-to-peer infrastructure to the service execution platform. Moreover, it is generic, i.e., applicable to any peer-to-peer architecture. To guarantee this property, given that large-scale distributed systems tend to evolve in terms of resources, entities, and users, we propose structuring the peer-to-peer grid system into virtual communities (clusters). The structuring approach is completely distributed and relies solely on node neighbourhoods to elect cluster leaders, called PSIs (Proxy Système d'Information). Furthermore, to orchestrate communications among the different virtual communities and to enable efficient and exhaustive service discovery, a spanning tree consisting solely of PSIs is maintained during the structuring phase; search requests are then routed along this tree. Beyond service discovery, we propose mechanisms for deploying, publishing, and invoking services. Finally, we implemented P2P4GS and analyzed its performance. To illustrate its genericity, we implemented it on Gia, Pastry, and Kademlia, peer-to-peer protocols that operate in completely different ways. Performance tests showed that P2P4GS provides good fault tolerance and scales well, both in network size and in communication cost
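The routing of search requests along the PSI spanning tree can be sketched as a simple tree traversal. This is an illustration under stated assumptions (node names and the adjacency encoding are hypothetical, and the real protocol is message-passing, not a local loop): each cluster leader forwards the request to its tree neighbours, so every leader is visited exactly once and the search is exhaustive without flooding the whole overlay.

```python
# Illustrative traversal of a spanning tree of cluster leaders (PSIs):
# a lookup started at one PSI reaches every PSI exactly once.

def broadcast(tree, start, visit):
    """tree: adjacency dict of the PSI spanning tree; calls visit(node)
    once per node, propagating from `start` along tree edges only."""
    stack, seen = [start], {start}
    while stack:
        node = stack.pop()
        visit(node)
        for neigh in tree[node]:
            if neigh not in seen:
                seen.add(neigh)
                stack.append(neigh)

tree = {
    "psi1": ["psi2", "psi3"],
    "psi2": ["psi1"],
    "psi3": ["psi1", "psi4"],
    "psi4": ["psi3"],
}
visited = []
broadcast(tree, "psi1", visited.append)
print(sorted(visited))  # ['psi1', 'psi2', 'psi3', 'psi4']
```

Because the structure is a tree, the number of messages is linear in the number of PSIs, which is consistent with the scalability in communication cost reported above.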
Redistribution dynamique parallèle efficace de la charge pour les problèmes numériques de très grande taille by Sébastien Fourestier( )

1 edition published in 2013 in French and held by 1 WorldCat member library worldwide

This thesis concerns efficient parallel dynamic load balancing for large-scale numerical problems. First, we present a state of the art of the algorithms used to solve the partitioning, repartitioning, mapping, and remapping problems. Our first contribution, in a sequential setting, is to define the desirable features that parallel repartitioning tools need to possess, and we present our contribution to the design of a k-way multilevel framework for sequential repartitioning. The most challenging part of this work concerns the uncoarsening phase; one of our main contributions is the adaptation of influence methods to a global diffusion-based heuristic for the repartitioning problem. Our second contribution is the parallelization of these methods. Adapting the aforementioned algorithms required modifying the algorithms and data structures used by existing parallel partitioning routines. This work is backed by a thorough experimental analysis, made possible by the implementation of our algorithms in the Scotch library
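The diffusion idea underlying such heuristics can be shown in miniature. This is a first-order diffusion sketch, not Scotch's actual repartitioning heuristic: each vertex repeatedly exchanges a fraction of its load difference with each neighbour, so load flows from overloaded to underloaded parts while the total is conserved.

```python
# Illustrative first-order load diffusion on a graph (not Scotch's
# actual heuristic): load converges toward a uniform distribution.

def diffuse(load, graph, alpha=0.25, steps=50):
    """load: {vertex: float}; graph: adjacency dict; alpha: the
    fraction of each pairwise load difference exchanged per step."""
    for _ in range(steps):
        nxt = load.copy()
        for u, neighbours in graph.items():
            for v in neighbours:
                nxt[u] += alpha * (load[v] - load[u])
        load = nxt
    return load

# A 4-vertex ring with all the load initially on vertex 0.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
balanced = diffuse({0: 100.0, 1: 0.0, 2: 0.0, 3: 0.0}, ring)
print([round(x, 2) for x in balanced.values()])  # [25.0, 25.0, 25.0, 25.0]
```

Each step conserves the total load (every transfer out of one vertex is a transfer into another), and for small enough alpha the iteration converges to the uniform average, which is the behaviour a repartitioner exploits to decide how many vertices to migrate across each partition boundary.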
 
Audience level: 0.94 (from 0.77 for Services a ... to 1.00 for Calcul num ...)

Languages
English (14)

French (11)