WorldCat Identities

École doctorale Informatique, télécommunications et électronique de Paris

Overview
Works: 1,140 works in 1,143 publications in 2 languages and 1,143 library holdings
Roles: Other, Degree grantor
Publication Timeline
Most widely held works by École doctorale Informatique, télécommunications et électronique de Paris
Contribution aux techniques pour enrichir l'espace moteur et l'espace visuel des dispositifs d'interaction bureautique by Rodrigo Andrade Botelho de Almeida( )

2 editions published in 2009 in French and held by 2 WorldCat member libraries worldwide

Past research has suggested that among the reasons for the limitations of the present desktop interaction style is the lack of both motor and visual space. The goal of this thesis is to optimize the use of such spaces. Based on the fact that one can control an object's position and orientation through a natural movement, the first main contribution of this thesis is to explore the advantages of enhancing the sensing of the standard mouse with a rotation sensor. This "rotary mouse" allows one to easily control three continuous variables of a computer task. A survey presents the perceptual and motor issues of some rotary manipulations and also the technical and ergonomic requirements of such a device. Two interaction techniques, aimed at simplifying repetitive tasks, are proposed: the "nearly-integral selection" and the "satellite palette". Furthermore, an experimental evaluation compares the performance of the rotary mouse with that of a standard one. The other main contribution of this work is to investigate document visualization issues in the context of digital libraries. First, it analyses the advantages and the technical feasibility of integrating an immersive display into an interface intended to support navigation in a virtual catalog. Second, in order to inspect the quality of a batch of digitized pages, it explores some zoomable and multi-focal visualization techniques. The overview and the panoramic detail browsing enabled by such techniques aim to help users, who have to identify the flaws resulting from the digitization process, quickly grasp the visual characteristics of a large set of pages
Sociabilités en ligne, usages et réseaux by Raphaël Charbey( )

1 edition published in 2018 in French and held by 2 WorldCat member libraries worldwide

With the digital advent, it is now possible for researchers to collect large amounts of data, and online social network platforms are certainly part of it. Sociologists, among others, have seized these new resources to investigate the interaction modalities between individuals as well as their impact on the structure of sociability. Following this lead, this thesis work aims at analyzing a large number of Facebook accounts through classical data analysis and graph theory tools, and at bringing methodological contributions. Two main factors encourage the study of Facebook social activities. On one hand, the amount of time many Internet users spend on this platform justifies by itself sociologists' interest. On the other, and contrary to what we observe on other social network websites, ties between individuals are similar to the ones that appear offline. First, the thesis proposes to disentangle the multiple meanings behind the fact of "being on Facebook". The uses of our respondents are not reducible to fantasized normative practices but vary depending on how they appropriate the different components of the platform. These uses, as we will see, do not concern all the socioprofessional categories in the same way, and they also influence how the respondents interact with their online friends. The manuscript also explores these interactions, as well as the role of the romantic partner in the relational structure. The second part of the thesis builds a typology of these relational structures. They are egocentered, which means that they are taken from the perspective of the respondent. This typology of social networks is based on their graphlet counts, that is, the number of times each type of subnetwork appears in them. This approach offers a meso perspective (between micro and macro) that is propitious for highlighting new social phenomena. The graphlet methodology, which has a high pluridisciplinary potential, is also discussed and explored in its own right
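As a rough illustration of the graphlet counts mentioned above (not the thesis's actual pipeline), the Python sketch below counts the two possible 3-node graphlets, triangles and open triads, in a tiny made-up ego network:

```python
from itertools import combinations

def count_3node_graphlets(adj):
    """Count 3-node graphlets (closed triangles vs. open triads)
    in an undirected graph given as {node: set(neighbors)}."""
    triangles, open_triads = 0, 0
    for a, b, c in combinations(adj, 3):
        edges = sum((b in adj[a], c in adj[a], c in adj[b]))
        if edges == 3:
            triangles += 1
        elif edges == 2:
            open_triads += 1
    return {"triangle": triangles, "open_triad": open_triads}

# Tiny ego network (the ego itself is usually removed before counting).
ego_net = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(count_3node_graphlets(ego_net))  # {'triangle': 1, 'open_triad': 2}
```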
Visualisation dans les systèmes informatiques coopératifs by Faryel Allouti( Book )

2 editions published in 2011 in French and held by 2 WorldCat member libraries worldwide

Unsupervised classification techniques and visualization tools for complex data are two recurring themes in the Knowledge Discovery and Management community. At the intersection of these two themes lie visualization methods such as MultiDimensional Scaling or Kohonen's self-organizing maps (SOM). The SOM method is built from a K-means algorithm to which a notion of neighborhood is added, thereby preserving the topology of the data. Learning thus brings closer, in the data space, the centers that are neighbors on a (generally 2D) grid, until they form a discrete surface that is a skeletal representation of the distribution of the data cloud to be explored. In this thesis, we were interested in visualization in a cooperative context, where cooperation is established via asynchronous communication whose medium is electronic mail. This tool appeared with the advent of information and communication technologies. It is widely used in organizations: it allows instantaneous and rapid dissemination of information to several people at the same time, without worrying about their presence. Our objective was to propose a tool for the visual exploration of textual data, namely the files attached to electronic messages. To do this, we combined automatic classification and visualization methods. We studied the mixture-model approach, which is a very useful contribution to classification. In our context, we used the multinomial mixture model (Govaert and Nadif, 2007) to determine the classes of files. On the other hand, we studied the visualization of both the classes and the documents using multidimensional scaling, DC (Difference of Convex functions) programming, and Kohonen's self-organizing maps
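A minimal sketch of the SOM principle described above, a K-means-style update in which grid neighbors of the winning unit are also pulled toward the sample; the grid size, learning rate and data are arbitrary toy values, not those of the thesis:

```python
import numpy as np

def train_som(data, grid_w=5, grid_h=5, epochs=20, lr=0.5, sigma=1.5, seed=0):
    """Minimal self-organizing map: K-means-like updates where units that are
    grid neighbors of the winner are also moved toward the sample."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((grid_h, grid_w, dim))
    # Grid coordinates of each unit, used to compute the neighborhood kernel.
    gy, gx = np.mgrid[0:grid_h, 0:grid_w]
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Best-matching unit = unit whose prototype is closest to x.
            dists = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighborhood on the grid is what preserves topology.
            grid_d2 = (gy - by) ** 2 + (gx - bx) ** 2
            h = np.exp(-grid_d2 / (2 * sigma ** 2))[:, :, None]
            weights += lr * h * (x - weights)
        lr *= 0.95       # decay learning rate
        sigma *= 0.95    # shrink neighborhood over time
    return weights

data = np.random.default_rng(1).random((200, 3))   # toy 3-D data cloud
prototypes = train_som(data)
print(prototypes.shape)  # (5, 5, 3): a 5x5 grid of prototype vectors
```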
Le suivi de l'apprenant dans le cadre du serious gaming by Pradeepa Thomas Benjamin( )

1 edition published in 2015 in French and held by 1 WorldCat member library worldwide

Le " serious gaming " est une approche récente pour mener des activités " sérieuses " telles que communiquer, sensibiliser ou apprendre, en utilisant les techniques mises en œuvre dans les jeux vidéo. Les jeux sérieux sont devenus aujourd'hui un élément incontournable de la formation en ligne. C'est dans ce cadre du serious gaming pour la formation que se situe ce sujet de thèse. En effet, quelle que soit l'acception privilégiée, de nombreuses questions de recherche se posent. En particulier, comment peut-on évaluer les connaissances acquises par le joueur/apprenant à travers le jeu ? Nous sommes concentrés sur les jeux de type étude de cas utilisés notamment en gestion ou en médecine et proposons une méthode basée sur l'Evidence Centered Design pour concevoir le suivi de l'apprenant à des fins de diagnostic à destination de l'enseignant et de l'apprenant. Les actions liées aux études de cas sont très proches des actions métiers et recourent à des règles bien précises. Nous avons fait le choix de les représenter à l'aide de réseaux de Petri. Pour apporter de la sémantique à l'analyse par réseau de Petri, nous l'avons adossé à une ontologie du domaine et des actions de jeu. L'ontologie apporte une complémentarité non négligeable au réseau de Petri qui a une dimension purement procédurale. Nous combinons des réseaux de Petri et les ontologies afin de produire des indicateurs de performance pour cette catégorie particulière de jeux sérieux. L'étude des erreurs nous a conduits à proposer une taxinomie particulière pour les jeux sérieux en nous inspirant notamment des travaux réalisés dans le domaine de la sécurité
Quantum Algorithms for Cryptanalysis and Quantum-safe Symmetric Cryptography by André Schrottenloher( )

1 edition published in 2021 in English and held by 1 WorldCat member library worldwide

Modern cryptography relies on the notion of computational security. The level of security given by a cryptosystem is expressed as an amount of computational resources required to break it. The goal of cryptanalysis is to find attacks, that is, algorithms with lower complexities than the conjectural bounds. With the advent of quantum computing devices, these levels of security have to be updated to take a whole new notion of algorithms into account. At the same time, cryptography is becoming widely used in small devices (smart cards, sensors), with new cost constraints. In this thesis, we study the security of secret-key cryptosystems against quantum adversaries. We first build new quantum algorithms for k-list (k-XOR or k-SUM) problems, by composing exhaustive search procedures. Next, we present dedicated cryptanalysis results, starting with a new quantum cryptanalysis tool, the offline Simon's algorithm. We describe new attacks against the lightweight algorithms Spook and Gimli and we perform the first quantum security analysis of the standard cipher AES. Finally, we specify Saturnin, a family of lightweight cryptosystems oriented towards post-quantum security. Thanks to a very similar structure, its security relies largely on the analysis of AES
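For readers unfamiliar with the k-list problem mentioned above, here is a classical (non-quantum) sketch of its simplest instance, 2-XOR, solved with a hash set; the list sizes are arbitrary, and this only illustrates the kind of search that the quantum algorithms in the thesis accelerate:

```python
import os

def two_xor(list_a, list_b):
    """Classical 2-XOR, the simplest k-list instance: find x in list_a and
    y in list_b with x XOR y == 0 (i.e. x == y) using a hash set."""
    seen = set(list_a)
    for y in list_b:
        if y in seen:
            return y, y          # colliding pair
    return None

# Toy instance: random 16-bit values, sized so a collision is likely
# (birthday bound); real cryptanalytic instances are vastly larger.
values = [int.from_bytes(os.urandom(2), "big") for _ in range(600)]
print(two_xor(values[:300], values[300:]))  # a pair, or None if unlucky
```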
Spatial data focusing using direct sequence spread spectrum modulation by Michael Derrick Odhiambo( )

1 edition published in 2021 in English and held by 1 WorldCat member library worldwide

This work proposes the implementation of Spatial Data Focusing (SDF) using spread spectrum techniques. SDF was recently proposed as a candidate alternative to classical power focusing schemes in wireless geocasting applications. Unlike power focusing approaches, where radiated power is directed towards a defined direction, in SDF it is the data to be transmitted that is processed in such a manner that it can only be decoded at a predefined location. This work exploits the dual orthogonality due to classical quadrature components and orthogonal Gold spreading sequences to design the IQ and spread spectrum based spatial data focusing (DSSS-SDF-IQ) scheme. It is demonstrated that SDF attains better spatial selectivity than classical power focusing for a given antenna array size. The robustness of the proposed scheme is subsequently demonstrated by implementing it over a classical Urban Canyon 6-ray multipath channel model, where it is shown that the scheme can exhibit a beamwidth as narrow as 1 degree with only a 4-antenna array. In SDF, the beamwidth is defined as the area within which data can be decoded, as opposed to the classical half-power beamwidth. Chapter 1 introduces the concept of geocasting. Chapter 2 reviews the different techniques that enable directional capabilities on base stations. Chapter 3 introduces the principles of direct sequence spread spectrum based SDF. Chapter 4 investigates the influence of the multipath channel on the DSSS-SDF scheme. For all the cases studied above, relevant simulations are implemented to validate the discussions. Chapter 5 summarizes the work with a conclusion and perspectives on possible future research directions
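A minimal baseband sketch of the dual-orthogonality idea described above: two data streams are spread with orthogonal codes, superposed, and recovered by correlation. Short Walsh codes are used here purely to keep the example small; the thesis uses Gold sequences, antenna arrays and a full IQ/channel model:

```python
import numpy as np

# Two orthogonal spreading codes (length-8 Walsh codes; Gold sequences are
# used in the thesis, Walsh codes only keep this illustration short).
code_i = np.array([1, 1, 1, 1, 1, 1, 1, 1])
code_q = np.array([1, -1, 1, -1, 1, -1, 1, -1])

def spread(bits, code):
    """Map bits {0,1} to {-1,+1} and multiply each symbol by the chip code."""
    symbols = 2 * np.array(bits) - 1
    return np.repeat(symbols, len(code)) * np.tile(code, len(bits))

def despread(chips, code):
    """Correlate each symbol-length block with the code and take the sign."""
    blocks = chips.reshape(-1, len(code))
    return ((blocks @ code) > 0).astype(int)

bits_i, bits_q = [1, 0, 1, 1], [0, 1, 1, 0]
tx = spread(bits_i, code_i) + spread(bits_q, code_q)   # superposed streams
print(despread(tx, code_i).tolist())  # [1, 0, 1, 1]  recovered by orthogonality
print(despread(tx, code_q).tolist())  # [0, 1, 1, 0]
```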
K-Separator problem by Mohamed Ahmed Mohamed Sidi( )

1 edition published in 2014 in English and held by 1 WorldCat member library worldwide

Let G be a vertex-weighted undirected graph. We aim to compute a minimum weight subset of vertices whose removal leads to a graph where the size of each connected component is less than or equal to a given positive number k. If k = 1 we get the classical vertex cover problem. Many formulations are proposed for the problem. The linear relaxations of these formulations are theoretically compared. A polyhedral study is proposed (valid inequalities, facets, separation algorithms). It is shown that the problem can be solved in polynomial time for many special cases including the path, the cycle and the tree cases and also for graphs not containing some special induced sub-graphs. Some (k + 1)-approximation algorithms are also exhibited. Most of the algorithms are implemented and compared. The k-separator problem has many applications. If vertex weights are equal to 1, the size of a minimum k-separator can be used to evaluate the robustness of a graph or a network. Another application consists in partitioning a graph/network into different sub-graphs with respect to different criteria. For example, in the context of social networks, many approaches are proposed to detect communities. By solving a minimum k-separator problem, we get different connected components that may represent communities. The k-separator vertices represent persons making connections between communities. The k-separator problem can then be seen as a special partitioning/clustering graph problem
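As a concrete reading of the problem statement (not the polyhedral or approximation algorithms developed in the thesis), the sketch below checks whether a vertex set is a valid k-separator and brute-forces a minimum one on a tiny unit-weight graph:

```python
from itertools import combinations

def is_k_separator(adj, removed, k):
    """Check that removing `removed` leaves only connected components
    of size <= k in the undirected graph `adj` ({node: set(neighbors)})."""
    remaining = set(adj) - set(removed)
    seen = set()
    for start in remaining:
        if start in seen:
            continue
        stack, comp_size = [start], 0
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp_size += 1
            stack.extend(adj[v] & remaining)
        if comp_size > k:
            return False
    return True

def min_k_separator(adj, k):
    """Brute-force smallest (unit-weight) k-separator; only for tiny graphs."""
    nodes = list(adj)
    for size in range(len(nodes) + 1):
        for cand in combinations(nodes, size):
            if is_k_separator(adj, cand, k):
                return set(cand)

# Path a-b-c-d-e: removing the middle vertex leaves components of size 2.
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c", "e"}, "e": {"d"}}
print(min_k_separator(path, 2))  # {'c'}
```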
Caractérisation du phosphore noir pour des applications optoélectroniques hyperfréquences by Anne Penillard( )

1 edition published in 2018 in French and held by 1 WorldCat member library worldwide

The research project focuses on the optoelectronic and high-frequency characterization of black phosphorus. The context of this project is the trend towards downscaling and multi-physical coupling seen today in industrial electronics. The characterization is directed at a specific application: the realization of a microwave photoswitch controlled by a laser optical excitation at 1.55 µm. For this purpose, a production process for thin and large bi-dimensional layers of black phosphorus was developed during this PhD, along with the fabrication of characterization devices and a discussion to determine suitable choices for the substrate, capping layer and metallization. The technological development is coupled with optical, electronic (DC) and radiofrequency characterizations of the bi-dimensional layers to determine inherent black phosphorus properties such as the photogenerated carrier lifetime, the material permittivity, the resistivity and the carrier mobility. These parameters are essential to understand, design and simulate high-frequency optoelectronic devices based on black phosphorus, such as the microwave photoswitch controlled at 1.55 µm. The obtained results establish black phosphorus as a promising material for this kind of application. The first performances obtained with the use of bP as an active material for photoconductive switching are very encouraging and open the way for high-frequency and high-speed applications
A stepwise compositional approach to model and analyze system C designs at the transactional level and the delta cycle level by Nesrine Harrath( )

1 edition published in 2014 in English and held by 1 WorldCat member library worldwide

Embedded systems are increasingly integrated into today's real-time applications. They generally consist of deeply integrated yet heterogeneous hardware and software components, developed under very strict constraints. As a consequence, the work of design engineers has become more difficult. To meet today's high quality standards for embedded systems and to satisfy the daily needs of industry, automating the development process of these systems is gaining more and more importance. A major challenge is to develop an automated approach that can be used for the integrated verification and validation of complex, heterogeneous systems. In this thesis, we propose a new compositional approach for modeling and verifying complex systems described in the SystemC language. This approach is based on the model of SystemC Waiting State Automata (WSA). SystemC Waiting State Automata are automata that model the abstract behavior of hardware and software systems described in SystemC while preserving the semantics of the SystemC scheduler at the level of clock cycles and delta cycles. This model reduces the complexity of modeling complex systems caused by the combinatorial explosion problem while remaining faithful to the initial system. The model is compositional and supports refinement. In addition, it is extended with time parameters as well as counters in order to take into account timing aspects and functional properties such as quality of service. We then propose a chain for automatically constructing WSAs from the SystemC description. This construction relies on symbolic execution and predicate abstraction. We propose a set of algorithms for composing and reducing these automata in order to study, analyze and verify the concurrent behaviors of the described systems as well as the data exchanges between the different components. Finally, we propose to apply our approach to the modeling and simulation of complex systems, and then to experiment with it to estimate the worst-case execution time (WCET) using the Timed SystemC WSA model. Lastly, we define the application of model-checking techniques to prove the correctness of the abstract analysis of our approach
Topics in Delay Tolerant Networks (DTNs) : reliable transports, estimation and tracking by Arshad Ali( )

1 edition published in 2012 in English and held by 1 WorldCat member library worldwide

Mobile Ad hoc NETworks (MANETs) aim at making communication between mobile nodes feasible without any infrastructure support. Sparse MANETs fall into the class of Delay Tolerant Networks, which are intermittently connected networks where there is no contemporaneous end-to-end path at any given time. We first propose a new reliable transport scheme for DTNs based on the use of ACKnowledgments and random linear coding. We model the evolution of the network under our scheme using a fluid-limit approach, and optimize the scheme to obtain mean file transfer times under optimal parameters found through a differential evolution approach. Secondly, we propose and study a novel and enhanced ACK mechanism to improve reliable transport for DTNs, covering both unicast and multicast flows. We make use of random linear coding at relays so that packets can reach the destination faster. Reliability is obtained through the use of so-called Global Selective ACKnowledgments (G-SACKs). We obtain significant improvement through G-SACKs and coding at relays. Finally, we tackle the problem of estimating file spread in DTNs with direct delivery and epidemic routing. We estimate and track the degree of spread of a message in the network, and provide an analytical basis for our estimation framework along with insights validated through simulations. We observe that the deterministic fluid model can indeed be a good predictor with a large number of nodes. Moreover, we use a Kalman filter and Minimum Mean Squared Error (MMSE) estimation to track the spreading process and find that the Kalman filter provides more accurate results compared to MMSE
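The fluid-limit idea mentioned above can be illustrated in a few lines: a deterministic ODE for the number of nodes holding a message under epidemic routing, integrated with Euler steps. The node count and meeting rate below are arbitrary illustrative values, not figures from the thesis:

```python
# Fluid-limit view of epidemic routing: with N nodes and pairwise meeting
# rate beta, the number I(t) of nodes holding the message roughly follows
# dI/dt = beta * I * (N - I). A simple Euler integration of this ODE gives
# a deterministic predictor of the spreading process.
N = 200          # number of nodes (illustrative)
beta = 0.0005    # pairwise meeting rate, per second (illustrative)
dt = 0.1         # integration step, seconds

infected = 1.0   # the source holds the message at t = 0
trajectory = []
for _ in range(int(600 / dt)):           # simulate 10 minutes
    infected += dt * beta * infected * (N - infected)
    trajectory.append(infected)

for t in (60, 180, 300, 600):
    print(f"t = {t:4d} s  ->  ~{trajectory[int(t / dt) - 1]:.0f} nodes reached")
```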
Subjective quality assessment : a study on the grading scales : illustrations for stereoscopic and 2D video content by Rania Bensaied Ghaly( )

1 edition published in 2018 in English and held by 1 WorldCat member library worldwide

Quality evaluation is an ever-fascinating field, covering at least a century of research work emerging from psychology, psychophysics, sociology, marketing, medicine... While for visual quality evaluation the ITU recommendations pave the way towards well-configured, consensual evaluation conditions granting reproducibility and comparability of the experimental results, an in-depth analysis of state-of-the-art studies shows at least three open challenges, related to: (1) continuous vs. discrete evaluation scales, (2) the statistical distribution of the scores assigned by the observers and (3) the usage of semantic labels on the grading scales. Thus, the present thesis turns these challenges into three research objectives: 1. bridging, at the theoretical level, the continuous and the discrete scale evaluation procedures, and investigating whether the number of classes on the discrete scales is a meaningful criterion for interpreting the results or just a parameter; studying the theoretical influence of the statistical model of the evaluation results and of the size of the panel (number of observers) on the accuracy of the results is also targeted; 2. quantifying the bias induced in subjective video quality experiments by the semantic labels (e.g. Excellent, Good, Fair, Poor and Bad) generally associated with the discrete grading scales; 3. designing and deploying an experimental test-bed able to support these studies and to grant their precision and statistical relevance. With respect to these objectives, the main contributions are at theoretical, methodological and experimental levels
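For context on the discrete grading scales discussed above, subjective scores on a 5-point scale are usually summarized as a Mean Opinion Score with a confidence interval; the short sketch below computes both for a hypothetical panel (the ratings are invented):

```python
import math

def mos_with_ci(scores, z=1.96):
    """Mean Opinion Score and ~95% confidence interval for one test condition,
    from discrete ratings (e.g. 1=Bad ... 5=Excellent)."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)   # sample variance
    half_width = z * math.sqrt(var / n)
    return mean, half_width

# Hypothetical ratings from a panel of 24 observers for one video sequence.
ratings = [5, 4, 4, 3, 5, 4, 4, 4, 3, 5, 4, 3, 4, 5, 4, 4, 3, 4, 4, 5, 4, 4, 3, 4]
mos, ci = mos_with_ci(ratings)
print(f"MOS = {mos:.2f} +/- {ci:.2f}")
```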
Virtual networked infrastructure provisioning in distributed cloud environments by Marouen Mechtri( )

1 edition published in 2014 in English and held by 1 WorldCat member library worldwide

Cloud computing has emerged as a new paradigm for offering on-demand computing resources and for outsourcing software and hardware infrastructures. Cloud computing is rapidly and fundamentally revolutionizing the way IT services are provisioned and managed. These services can be requested from one or several cloud providers, hence the need for networking between the components of distributed IT services located in geographically distributed sites. Cloud users also want to deploy and instantiate their resources easily across heterogeneous cloud computing platforms. Cloud providers deliver computing resources to their users in the form of virtual machines, but these customers also want networking between their virtual resources. Moreover, they want not only to control and manage their applications, but also to control network connectivity and to deploy complex network functions and services in their dedicated virtual infrastructures. User needs have evolved beyond acquiring a simple virtual machine toward obtaining complex, flexible, elastic and intelligent virtual resources and services. The objective of this thesis is to enable the placement and instantiation of complex resources in distributed cloud infrastructures while letting users control and manage their resources. In addition, our objective is to ensure convergence between cloud and network services. To achieve this, we propose algorithms for mapping virtual infrastructures onto data centers and onto the network while respecting user requirements. With the emergence of cloud computing, traditional networks are being extended and enhanced with software-based networks relying on the virtualization of network resources and functions. In addition, the new network architecture paradigm, Software Defined Networking (SDN), is particularly relevant as it aims to offer network programmability and to decouple, within a network device, the data plane from the control plane. In this context, the first part proposes optimal (exact) and heuristic placement algorithms to find the best mapping between user requests and the underlying infrastructures, while respecting the requirements expressed in the requests. This includes localization constraints allowing part of the virtual resources to be placed in the same physical node; these constraints also ensure the placement of resources in distinct nodes. The proposed algorithms perform the simultaneous placement of virtual nodes and links onto the physical infrastructure. We also proposed a heuristic algorithm to speed up the resolution time and reduce the complexity of the problem. The proposed approach is based on graph decomposition and bipartite graph matching techniques. In the third part, we propose an open-source framework for providing dynamic networking between distributed cloud resources and for instantiating network functions in the user's virtual infrastructure.
This framework makes it possible to deploy and activate the network components needed to set up user requests. The solution is based on a network resource manager, the "Cloud Network Gateway Manager", and on software gateways establishing dynamic, on-demand connectivity between cloud and network resources. The CNG-Manager provides control of the network part and handles the deployment of the network functions required in the users' virtual infrastructures
Designing safe and highly available distributed applications by Sreeja Sasidhara Nair( )

1 edition published in 2021 in English and held by 1 WorldCat member library worldwide

Designing distributed applications fundamentally involves a trade-off between safety and performance. We focus on cases where safety is the primary requirement. In the context of state-based distributed systems, we propose a proof methodology for establishing that a given application maintains a given invariant. Our approach makes it possible to reason about individual operations separately. We show that our rules are sound and, with the help of a proof engine, we illustrate their use on a few representative examples. For conflicting operations, the developer can choose between conflict resolution and coordination. We present a new replicated tree data structure that supports concurrent atomic moves without coordination while maintaining the tree invariant. Our analysis identifies the cases in which concurrent moves are intrinsically safe. For the remaining cases, we design a conflict-resolution algorithm. The trade-off is that in some cases a move operation "loses". Given the coordination that some applications require for safety, it can be implemented in many different ways. Even when restricted to locks, these can use various configurations that differ in granularity, type and placement. The performance of each configuration depends on the workload. We study the "coordination lattice", i.e. the design space of lock configurations, and define a set of metrics to navigate it systematically
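A small sketch (not the thesis's conflict-resolution algorithm) of why concurrent moves in a replicated tree are delicate: moving a node under one of its own descendants would create a cycle and break the tree invariant, so such a "losing" move has to be detected and rejected or resolved:

```python
def is_ancestor(parent_of, a, b):
    """True if node a is an ancestor of node b in the tree `parent_of`."""
    while b is not None:
        if b == a:
            return True
        b = parent_of.get(b)
    return False

def apply_move(parent_of, node, new_parent):
    """Apply a move unless it would put `node` under its own descendant,
    which would break the tree invariant (the unsafe case a replicated
    tree must resolve or reject)."""
    if is_ancestor(parent_of, node, new_parent):
        return False                      # losing move: would create a cycle
    parent_of[node] = new_parent
    return True

# Tree: root -> a -> b. Two replicas concurrently issue move(a, under=b)
# and move(b, under=root); applied naively in this order, the first move
# would create the cycle a <-> b, so it is rejected.
tree = {"root": None, "a": "root", "b": "a"}
print(apply_move(tree, "a", "b"))      # False: b is a descendant of a
print(apply_move(tree, "b", "root"))   # True
print(tree)                            # {'root': None, 'a': 'root', 'b': 'root'}
```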
Méthode de conception de systèmes temps réels embarqués multi-coeurs en milieu automobile by Enagnon Cédric Klikpo( )

1 edition published in 2018 in English and held by 1 WorldCat member library worldwide

The increasing complexity of embedded applications in modern cars has increased the need for computing power. To meet this need, the European automotive standard AUTOSAR has introduced the use of multi-core platforms. However, using multi-core platforms for critical automotive applications raises several issues. In particular, it is necessary to respect the functional specification and to guarantee deterministic data exchanges between cores. In this thesis, we consider multi-periodic systems specified and validated with Matlab/Simulink, and we developed a framework to deploy Matlab/Simulink applications on AUTOSAR multi-core platforms. This framework guarantees functional and temporal determinism and exploits parallelism. Our contribution is threefold. First, we identify the communication mechanisms in Matlab/Simulink. Then, we prove that the dataflow in a multi-periodic Matlab/Simulink system is modeled by a Synchronous Dataflow Graph (SDFG). The SDFG formalism is an excellent analysis tool for exploiting parallelism: it is very popular in the literature and widely studied for the deployment of dataflow applications on multi/many-core platforms. Next, we develop methods to realize the dataflow expressed by the SDFG in a preemptive real-time scheduling. These methods use theoretical results on SDFGs to guarantee deterministic precedence constraints without using blocking synchronization mechanisms. As such, both functional and temporal determinism are guaranteed. Finally, we characterize the impact of dataflow requirements on tasks and propose a partitioning technique that minimizes this impact. We show that this technique promotes the construction of a partitioning and a feasible scheduling when used to initiate multi-objective search and optimization algorithms. As such, we reduce the number of design iterations and shorten the design time
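As a small illustration of the SDFG formalism mentioned above (not the thesis tooling), the sketch below computes the repetition vector of a toy multi-rate graph by solving the SDF balance equations with rational arithmetic; the actors and rates are invented:

```python
from fractions import Fraction
from math import lcm

def repetition_vector(edges, actors):
    """Solve the SDF balance equations q[src]*prod == q[dst]*cons and return
    the smallest integer repetition vector (assumes a connected, consistent
    graph). edges: list of (src, dst, prod_rate, cons_rate)."""
    q = {actors[0]: Fraction(1)}
    changed = True
    while changed:                       # simple fixed-point propagation
        changed = False
        for src, dst, prod, cons in edges:
            if src in q and dst not in q:
                q[dst] = q[src] * prod / cons
                changed = True
            elif dst in q and src not in q:
                q[src] = q[dst] * cons / prod
                changed = True
    scale = lcm(*(f.denominator for f in q.values()))
    return {a: int(q[a] * scale) for a in actors}

# Toy multi-rate graph: A produces 2 tokens per firing, B consumes 3, etc.
actors = ["A", "B", "C"]
edges = [("A", "B", 2, 3), ("B", "C", 1, 2)]
print(repetition_vector(edges, actors))  # {'A': 3, 'B': 2, 'C': 1}
```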
Conception et optimisation de système multi-électrodes pour les implants cardiaques by Islam Seoudi( )

1 edition published in 2012 in English and held by 1 WorldCat member library worldwide

Cardiac implants like ICDs are life-saving devices for cardiac arrhythmias. In other conditions, like heart failure, CRT implants are prescribed to restore the heart rhythm. Such treatment consists of the delivery of electrical stimuli to the cardiac tissue via electrodes in the stimulation lead. Conventionally, the stimulation lead comes in either a unipolar or a bipolar configuration, which has been found sufficient for pacing the right atrium and right ventricle; studies have shown the benefits of a multi-electrode system for pacing the left ventricle, essential for cardiac resynchronization. This thesis discusses the design and optimization of a multi-electrode system capable of alleviating the limitations and constraints related to left ventricular stimulation. We first present an implementation of such a system, taped out in 0.18 µm technology. The chip also features a specially designed communication protocol which enables low-power operation and quick configuration. Thereafter we present the design and implementation of a default connection unit to ensure the compatibility of our multi-electrode lead with pacemakers in the market; this unit was also taped out in 0.18 µm technology. Finally, we present a proof-of-concept study for the adaptation and integration of non-volatile memory technologies within the multi-electrode system. The use of such technologies enhances the multi-electrode system by eliminating the repetitive configuration of electrodes, thereby saving power and reducing latency, and also brings a smaller area and compatibility with any pacemaker in the market. Through simulations we prove the feasibility of these technologies for our implant applications
Representation learning for relational data by Ludovic Dos Santos( )

1 edition published in 2017 in English and held by 1 WorldCat member library worldwide

The increasing use of social and sensor networks generates a large quantity of data that can be represented as complex graphs. Many tasks, from information analysis to prediction and retrieval, can be imagined on such data, in which the relations between graph nodes should be informative. In this thesis, we propose different models for three different tasks: graph node classification, relational time series forecasting, and collaborative filtering. All the proposed models use the representation learning framework in its deterministic or Gaussian variant. First, we propose two algorithms for the heterogeneous graph labeling task, one using deterministic representations and the other Gaussian representations. Contrary to other state-of-the-art models, our solution is able to learn edge weights while simultaneously learning the representations and the classifiers. Second, we propose an algorithm for relational time series forecasting where the observations are correlated not only inside each series, but also across the different series; we use Gaussian representations in this contribution. This was an opportunity to see in what way using Gaussian representations instead of deterministic ones is profitable. Finally, we apply the Gaussian representation learning approach to the collaborative filtering task. This is preliminary work to see whether the properties of Gaussian representations observed on the two previous tasks also hold for the ranking one. The goal of this work was then to generalize the approach to more general relational data and not only bipartite graphs between users and items
Accélérer la préparation des données pour l'analyse du big data by Yongchao Tian( )

1 edition published in 2017 in English and held by 1 WorldCat member library worldwide

We are living in a big data world, where data is being generated with high volume, high velocity and high variety. Big data brings enormous value and benefits, so that data analytics has become a critically important driver of business success across all sectors. However, if the data is not analyzed fast enough, the benefits of big data will be limited or even lost. Despite the existence of many modern large-scale data analysis systems, data preparation, which is the most time-consuming process in data analytics, has not yet received sufficient attention. In this thesis, we study the problem of how to accelerate data preparation for big data analytics. In particular, we focus on two major data preparation steps, data loading and data cleaning. As the first contribution of this thesis, we design DiNoDB, a SQL-on-Hadoop system which achieves interactive-speed query execution without requiring data loading. Modern applications involve heavy batch processing jobs over large volumes of data and at the same time require efficient ad-hoc interactive analytics on temporary data generated in those batch processing jobs. Existing solutions largely ignore the synergy between these two aspects and require loading the entire temporary dataset to achieve interactive queries. In contrast, DiNoDB avoids the expensive data loading and transformation phase. The key innovation of DiNoDB is to piggyback the creation of metadata on the batch processing phase; DiNoDB then exploits this metadata to expedite the interactive queries. The second contribution is a distributed stream data cleaning system, called Bleach. Existing scalable data cleaning approaches rely on batch processing to improve data quality, which is very time-consuming in nature. We target stream data cleaning, in which data is cleaned incrementally in real time. Bleach is the first qualitative stream data cleaning system, which achieves both real-time violation detection and data repair on a dirty data stream. It relies on efficient, compact and distributed data structures to maintain the state necessary to clean data, and also supports rule dynamics. We demonstrate that the two resulting systems, DiNoDB and Bleach, both achieve excellent performance compared to state-of-the-art approaches in our experimental evaluations, and can help data scientists significantly reduce the time they spend on data preparation
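As a toy illustration of incremental stream cleaning (not Bleach's actual design), the sketch below maintains a compact state to detect violations of a functional-dependency-style rule on the fly; the rule and the records are made up:

```python
# Incremental detection of violations of a functional-dependency-style rule
# "same zip code => same city" over a stream of records. The detector keeps
# compact state (first city seen per zip) instead of re-scanning past data.
state = {}            # zip -> city observed first
violations = []

stream = [
    {"id": 1, "zip": "75013", "city": "Paris"},
    {"id": 2, "zip": "69001", "city": "Lyon"},
    {"id": 3, "zip": "75013", "city": "Pariss"},   # dirty record
    {"id": 4, "zip": "69001", "city": "Lyon"},
]

for record in stream:
    seen = state.setdefault(record["zip"], record["city"])
    if record["city"] != seen:
        violations.append((record["id"], record["zip"], seen, record["city"]))

print(violations)  # [(3, '75013', 'Paris', 'Pariss')]
```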
Automated RRM optimization of LTE networks using statistical learning by Moazzam Islam Tiwana( )

1 edition published in 2010 in English and held by 1 WorldCat member library worldwide

The mobile telecommunication industry has experienced very rapid growth in the recent past. This has resulted in significant technological and architectural evolution in wireless networks. The expansion and heterogeneity of these networks have made their operational cost more and more important. Typical faults in these networks may be related to equipment breakdown and inappropriate planning and configuration. In this context, automated troubleshooting in wireless networks receives growing importance, aiming at reducing the operational cost and providing high-quality services for the end-users. Automated troubleshooting can reduce service breakdown time for the clients, resulting in a decrease in client switchover to competing network operators. The Radio Access Network (RAN) of a wireless network constitutes its biggest part, hence the automated troubleshooting of the RAN is very important. Troubleshooting comprises the isolation of the faulty cells (fault detection), identifying the causes of the fault (fault diagnosis) and the proposal and deployment of the healing action (solution deployment). First of all, in this thesis, the previous work related to the troubleshooting of wireless networks has been explored. It turns out that fault detection and fault diagnosis of wireless networks have been well studied in the scientific literature. Surprisingly, no significant references for research work related to the automated healing of wireless networks have been reported. Thus, the aim of this thesis is to describe my research advances on "Automated healing of LTE wireless networks using statistical learning". We focus on the faults related to Radio Resource Management (RRM) parameters. This thesis explores the use of statistical learning for the automated healing process. In this context, the effectiveness of statistical learning for automated RRM has been investigated. This is achieved by modeling the functional relationships between the RRM parameters and Key Performance Indicators (KPIs). A generic automated RRM architecture has been proposed and used to study the application of the statistical learning approach to auto-tuning and performance monitoring of wireless networks. The use of statistical learning in the automated healing of wireless networks introduces two important difficulties. Firstly, the KPI measurements obtained from the network are noisy, and this noise can partially mask the actual behaviour of the KPIs. Secondly, these automated healing algorithms are iterative: after each iteration, the network performance is typically evaluated over the duration of a day with new network parameter settings, so the iterative algorithms should achieve their QoS objective in a minimum number of iterations. The automated healing methodology developed in this thesis, based on statistical modeling, addresses these two issues. The automated healing algorithms developed are computationally light and converge in a small number of iterations. This enables the implementation of these algorithms in the Operation and Maintenance Center (OMC) in off-line mode. The automated healing methodology has been applied to 3G Long Term Evolution (LTE) use cases for healing the mobility and interference mitigation parameter settings. It has been observed that our healing objective is achieved in a small number of iterations.
An automated healing process using the sequential optimization of interference mitigation and packet scheduling parameters has also been investigated. The incorporation of a priori knowledge into the automated healing process further reduces the number of iterations required. Furthermore, the automated healing process becomes more robust, hence more feasible and practical for implementation in wireless networks
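As a toy illustration of the statistical-learning-based healing loop described above, the sketch below fits a simple quadratic model of a KPI as a function of one RRM parameter from noisy measurements and iteratively moves to the model's predicted optimum; the KPI function, noise model and parameter range are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def measure_kpi(param):
    """Stand-in for a day of noisy KPI measurements for one parameter setting
    (e.g. a handover margin); the true optimum here is param = 4."""
    return -(param - 4.0) ** 2 + rng.normal(scale=0.5)

# Iterative healing loop: fit a quadratic KPI model on the settings tried so
# far, then move to the model's predicted optimum (clipped to the valid range).
tried = [0.0, 8.0, 2.0]
observed = [measure_kpi(p) for p in tried]

for iteration in range(5):
    a, b, c = np.polyfit(tried, observed, deg=2)          # KPI ~ a*p^2 + b*p + c
    next_param = float(np.clip(-b / (2 * a), 0.0, 8.0))   # vertex of the fit
    tried.append(next_param)
    observed.append(measure_kpi(next_param))
    print(f"iter {iteration}: try param = {next_param:.2f}, "
          f"measured KPI = {observed[-1]:.2f}")
```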
Techniques de transmission et d'accès sans fil dans les réseaux ad-hoc véhiculaires (VANETS) by Abdel Mehsen Ahmad( )

1 edition published in 2012 in French and held by 1 WorldCat member library worldwide

Vehicular networks are the subject of active research in the field of networks as well as transport. The potential for vehicular networks to provide services such as real-time traffic or accident information makes this technology a very important research domain. These networks may support vehicle-to-vehicle communications (V2V), vehicle-to-infrastructure communications (V2I), or a combination of both. IEEE 1609.4 is the specification of multichannel operations for IEEE 802.11p/WAVE vehicular networks (VANETs). It uses seven channels: one is a control channel (CCH), which is listened to periodically by the vehicles, and the other six are used as service channels (SCH). It also defines a time division between alternating CCH and SCH intervals. The purpose of this thesis is to evaluate the performance of VANETs in the case of vehicular communications without infrastructure, at the lower layers of the IEEE 802.11p standard. In the first part, we propose an opportunistic multichannel MAC allocation in an environment without infrastructure. This approach is consistent with the IEEE 1609.4-2010/WAVE standard for multi-channel operation and is designed for (non-urgent) data service applications, while ensuring the transmission of road safety messages and control packets. To maintain the quality of service of the two types of messages (urgent and non-urgent) while exploiting the channel capacity, two solutions are proposed. In the second part, once the vehicle has selected its channel and controls its temporal alternation between CCH and SCH, it starts transmitting its packets, particularly on the CCH, which have an expiration time. We present an approach to minimize collisions between transmitters while avoiding contention at the beginning of the CCH interval, especially in a context of high vehicular density. Although the mechanisms proposed above reduce the collision rate, it is not possible to completely remove collisions. In the third part, we address the problem of collisions between broadcast packets on the CCH, especially when the load of transmitted messages exceeds the channel capacity. For this purpose, we propose a new analog network coding mechanism adapted to QPSK modulation for broadcast messages on the CCH. In this approach, known symbols are sent before the packets in order to estimate the channel parameters, and an explicit solution is used to invert the system formed by the superposition of two packets
Etude de cryptosystèmes à clé publique basés sur les codes MDPC quasi-cycliques by Julia Chaulet( )

1 edition published in 2017 in French and held by 1 WorldCat member library worldwide

Using quasi-cyclic MDPC (Moderate Density Parity Check) codes in the McEliece cryptosystem yields a post-quantum encryption scheme with reasonably sized keys whose encryption and decryption only use binary operations. It is therefore a good candidate for embedded or low-cost implementation. In this context, some information can be exploited to build side-channel attacks. Here, decryption mainly consists of decoding a noisy codeword. The decoder used is iterative and probabilistic: the number of iterations of the algorithm varies from instance to instance, and some decodings may fail. These behaviors are undesirable because they may leak information about the secret. One possible countermeasure is to limit the number of encryptions performed with the same keys. Another would be to use a constant-time decoder whose decoding failure probability is negligible. The main goal of this thesis is to provide new tools for analyzing the decoder's behavior for cryptographic purposes. In a second part, we explain why the use of polar codes is not secure for the McEliece cryptosystem. To do so, we use new techniques for solving a code equivalence problem. We exhibit numerous links between polar codes and Reed-Muller codes, which allows us to introduce a new family of codes: decreasing monomial codes. These results are therefore also of independent interest for coding theory
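To illustrate the iterative, probabilistic decoder behavior discussed above (variable iteration count, possible failure), here is a minimal bit-flipping decoder sketch on a tiny [7,4] Hamming parity-check matrix; real QC-MDPC parameters, and the constant-time variants considered in the thesis, are far larger and more careful:

```python
import numpy as np

def bit_flip_decode(H, word, max_iters=20):
    """Toy bit-flipping decoder: repeatedly flip the bits involved in the most
    unsatisfied parity checks. Returns (codeword, iterations) or (None, iters)
    on failure; the varying iteration count and the failures are exactly the
    behaviors the thesis analyzes for side-channel leakage."""
    word = word.copy()
    for it in range(1, max_iters + 1):
        syndrome = (H @ word) % 2
        if not syndrome.any():
            return word, it - 1                 # all parity checks satisfied
        counters = H.T @ syndrome               # unsatisfied checks per bit
        word[counters == counters.max()] ^= 1   # flip the worst bits
    return None, max_iters

# Tiny (not MDPC-sized) parity-check matrix of the [7,4] Hamming code.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
codeword = np.array([1, 0, 1, 0, 1, 0, 1])           # satisfies H @ c = 0 mod 2
noisy = codeword ^ np.array([0, 0, 0, 0, 0, 1, 0])   # one bit flipped
print(bit_flip_decode(H, noisy))   # recovers the codeword after a few iterations
```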
 
Audience Level
Audience level: 0.93 (from 0.87 for Sociabilit ... to 1.00 for Visualisat ...)

Alternative Names
École doctorale 130

École doctorale EDITE

École doctorale Informatique, télécommunications et électronique

ED 130

ED EDITE

ED Informatique, télécommunications et électronique

ED Informatique, télécommunications et électronique de Paris

ED130

Languages
English (13)

French (9)