WorldCat Identities

Turletti, Thierry

Overview
Works: 26 works in 34 publications in 2 languages and 39 library holdings
Roles: Thesis advisor, Other, Opponent, Author, Contributor
Most widely held works by Thierry Turletti
Source and Channel Adaptive Rate Control for Multicast Layered Video Transmission Based on a Clustering Algorithm by Jérôme Viéron( )

2 editions published between 2002 and 2004 in English and held by 3 WorldCat member libraries worldwide

CONTROLE DE TRANSMISSION POUR LOGICIEL DE VIDEOCONFERENCE SUR L'INTERNET by Thierry Turletti( Book )

2 editions published in 1995 in French and held by 3 WorldCat member libraries worldwide

Computer videoconferencing has become a reality: today one can receive a remote seminar on a desktop computer, take part in a tele-meeting, and so on. Videoconferencing software over the Internet has, in fact, come into common use. Our work makes two main contributions. The first is to show that current technology makes it possible to build video coders-decoders (codecs) in software, and therefore at lower cost. The second is to show that good-quality video can be transmitted over today's Internet, and hence, in general, over networks without resource reservation. This work is illustrated by the videoconferencing software IVS (INRIA Videoconferencing System), which we developed. The software incorporates a video codec that follows the H.261 compression standard of the International Telecommunication Union (ITU). This standard was originally designed for circuit-switched networks such as the Integrated Services Digital Network (ISDN). The thesis describes the algorithms we designed and implemented to transmit video efficiently over packet-switched networks such as the Internet. In particular, we propose an algorithm for packetizing the H.261 bit stream, error-control algorithms to cope with packet loss, rate-control algorithms for the H.261 coder and, finally, a congestion-control algorithm that adapts the coder's output rate to the available network bandwidth. We conclude by discussing the general problem of video transmission to a heterogeneous set of receivers and proposing solutions, some applicable in today's Internet and others only in the medium term, once the expected changes to the Internet architecture are in place
Conception et évaluation des systèmes logiciels de classifications de paquets haute-performance by Peng He( )

2 editions published in 2015 in French and held by 3 WorldCat member libraries worldwide

Packet classification consists of matching packet headers against a set of pre-defined rules and performing the action(s) associated with the matched rule(s). As a key technology in the data plane of network devices, packet classification has been widely deployed in many network applications and services, such as firewalling, load balancing and VPNs, and has been extensively studied over the past two decades. Traditional packet classification methods are usually based on specific hardware. With the development of data-center networking, software-defined networking and application-aware networking, packet classification methods based on multi-/many-core processor platforms have become a new research interest. In this dissertation, packet classification is studied mainly in three aspects: algorithm design frameworks, rule-set feature analysis, and algorithm implementation and optimization. We review multiple proposed algorithms and present a decision-tree-based algorithm design framework. The framework decomposes various existing packet classification algorithms into combinations of different types of "meta-methods", revealing the connections between different algorithms. Based on this framework, we combine "meta-methods" from different algorithms and propose two new algorithms, HyperSplit-op and HiCuts-op. Experimental results show that HiCuts-op uses 2-20x less memory and 10% fewer memory accesses than HiCuts, while HyperSplit-op uses 2-200x less memory and 10%-30% fewer memory accesses than HyperSplit. We also explore the connections between rule-set features and the performance of various algorithms. We find that the "coverage uniformity" of the rule-set has a significant impact on classification speed, and that the size of the "orthogonal structure" in the rules usually determines an algorithm's memory footprint.
Based on these two observations, we propose a memory-consumption model and a quantification method for coverage uniformity. Using these two tools, we propose a new multi-decision-tree algorithm, SmartSplit, and an algorithm policy framework, AutoPC. Compared to the EffiCuts algorithm, SmartSplit achieves around a 2.9x speedup and up to a 10x memory reduction. For a given rule-set, AutoPC can automatically recommend a "right" algorithm; compared to using a single algorithm on all rule-sets, AutoPC is on average 3.8 times faster. We also analyze the connection between prefix length and update overhead for IP lookup algorithms. We observe that long prefixes always result in more memory accesses with the Tree Bitmap algorithm, while short prefixes always result in large update overhead with DIR-24-8. By combining the two algorithms, we propose a hybrid algorithm, SplitLookup, that reduces the update overhead. Experimental results show that the hybrid algorithm requires two orders of magnitude fewer memory accesses when updating short prefixes, while its lookup speed remains close to that of DIR-24-8. We also implement and optimize multiple algorithms on multi-/many-core platforms. For IP lookup, we implement two typical algorithms, DIR-24-8 and Tree Bitmap, and present several optimization tricks for them. For multi-dimensional packet classification, we implement HyperCuts/HiCuts and variants of these two algorithms, such as Adaptive Binary Cuttings, EffiCuts, HiCuts-op and HyperSplit-op. SplitLookup achieves up to 40 Gbps throughput on the TILEPro64 many-core processor; HiCuts-op and HyperSplit-op achieve 10 to 20 Gbps on a single core of Intel processors. In general, our study reveals the connections between algorithmic tricks and rule-set features. The results of this dissertation provide insight for new algorithm design and guidelines for efficient algorithm implementation
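The decision-tree idea behind HiCuts-style classifiers can be illustrated with a toy, one-dimensional sketch; this is a hedged illustration of the general technique, not code from the dissertation, and the rules and field ranges are invented for the example.

```python
# Illustrative sketch of decision-tree packet classification: cut the header
# space into intervals; each leaf keeps only the rules overlapping its
# interval, so lookup does a linear search over a small bucket, not the
# whole rule-set.

class Rule:
    def __init__(self, name, lo, hi, action):
        self.name, self.lo, self.hi, self.action = name, lo, hi, action

    def matches(self, value):
        return self.lo <= value <= self.hi

def build_tree(rules, lo, hi, cuts):
    """Cut [lo, hi] into `cuts` equal intervals (one level of a HiCuts-style
    tree); each leaf stores the rules whose range overlaps its interval."""
    step = (hi - lo + 1) // cuts
    leaves = []
    for i in range(cuts):
        a = lo + i * step
        b = hi if i == cuts - 1 else a + step - 1
        leaves.append((a, b, [r for r in rules if r.lo <= b and r.hi >= a]))
    return leaves

def classify(leaves, value):
    for a, b, bucket in leaves:
        if a <= value <= b:
            for r in bucket:            # linear search inside the leaf only
                if r.matches(value):    # first matching rule wins
                    return r.action
    return "default"

rules = [Rule("r1", 0, 99, "drop"), Rule("r2", 100, 199, "accept"),
         Rule("r3", 150, 255, "log")]
tree = build_tree(rules, 0, 255, 4)
```

Real classifiers cut several header dimensions recursively and tune the number of cuts per node; the trade-off the abstract measures (memory size vs. memory accesses) comes from how deep and how wide these cuts are made.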
Congestion inference and traffic engineering in networks by Vijay Arya( Book )

2 editions published in 2005 in English and held by 2 WorldCat member libraries worldwide

This thesis presents methods that help improve the quality of congestion inference on both end-to-end paths and internal network links in the Internet, and a method that helps perform multicast traffic engineering in overlay networks. First, we propose an explicit loss differentiation scheme which allows unreliable transport protocols to accurately infer congestion on end-to-end paths by correctly differentiating congestion losses from wireless losses. Second, we present two contributions related to Multicast-based Inference of Network Characteristics (MINC). MINC is a network tomography method which infers loss rates, i.e., congestion, on internal network links from end-to-end multicast measurements. We propose a statistical verification algorithm which can verify the integrity of the binary multicast measurements used by MINC to perform loss inference. This algorithm helps ensure a trustworthy inference of link loss rates. Next, we propose an extended MINC loss estimator which can infer the loss rates of network links using aggregate multicast feedback. This estimator can be used to perform loss inference in situations where the bandwidth available to report multicast feedback is low. Third, we present efficient ways of encoding multicast trees within data packets. These encodings can be used to perform stateless and explicit multicast routing in overlay networks, and thus achieve the goals of multicast traffic engineering
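The MINC-style inference mentioned above can be sketched for the simplest topology: a root link shared by two receivers. Under independent per-link losses, the reception probabilities factor as P1 = a·b1, P2 = a·b2 and P12 = a·b1·b2, so the shared link's pass rate is a = P1·P2/P12. The sketch below simulates probes with assumed pass rates (0.9 shared, 0.8 per leaf) and recovers the shared rate; it is a textbook illustration of the idea, not the thesis's estimator.

```python
# Hedged sketch of multicast-based loss inference on a two-receiver tree.
import random

def infer_shared_pass_rate(feedback):
    """feedback: list of (seen_by_1, seen_by_2) booleans, one per probe.
    Returns the estimated pass rate of the shared (root) link."""
    n = len(feedback)
    p1 = sum(a for a, _ in feedback) / n        # fraction seen by receiver 1
    p2 = sum(b for _, b in feedback) / n        # fraction seen by receiver 2
    p12 = sum(a and b for a, b in feedback) / n # fraction seen by both
    return p1 * p2 / p12                        # shared-link factorization

# Simulated probes: shared link passes 90% of probes, each leaf link 80%.
random.seed(1)
probes = []
for _ in range(100_000):
    root = random.random() < 0.9
    probes.append((root and random.random() < 0.8,
                   root and random.random() < 0.8))

est = infer_shared_pass_rate(probes)
print(round(est, 2))  # close to the true 0.9
```

The thesis's contributions build on this kind of estimator: verifying the integrity of the binary feedback it consumes, and extending it to aggregate rather than per-receiver feedback.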
H.261 software codec for videoconferencing over the Internet by Thierry Turletti( Book )

2 editions published in 1993 in English and held by 2 WorldCat member libraries worldwide

Benchmarking in wireless networks by Shafqat Ur Rehman( Book )

2 editions published in 2012 in English and held by 2 WorldCat member libraries worldwide

The objective of this thesis is to enable realistic yet fair comparison of the performance of protocols and applications in wireless networks. Simulation is the predominant approach for comparative evaluation of networking protocols, but it lacks realism and can lead to misleading results. Real-world experiments guarantee realism but complicate fair comparison. Fair comparison depends on correct interpretation of the results and on repeatability of the experiment. Correct interpretation of results is an issue in wireless experiments because it is not easy to record all the factors (e.g. channel conditions, calibration settings, tools and test-scenario configurations) that can influence network performance. Repeatability of experiments is almost impossible because of channel randomness. In wireless experiments, "realism" can be taken for granted, but "fair comparison" requires a lot of hard work and is impossible without a standard methodology. We therefore design a workable experimentation methodology which tackles these issues as follows. To ensure correct interpretation of the results, we need to accomplish the following: channel characterization to determine the exact channel conditions, calibration of tools to avoid pitfalls, and a simple mechanism to specify scenario configurations. Channel conditions such as path loss, fading and interference are direct results of radio propagation, movement of objects, and co-existing Wi-Fi networks/devices in the environment, respectively. Pitfalls mainly result from imperfections/bugs or wrong configurations of a tool. A scenario description consists of a precise specification of the sequence of steps and the tasks to be performed at each step. Tasks include traffic generation, packet-trace capture (using a sniffer), RF-trace capture (using a spectrum analyzer) and system/network workload collection.
Correct interpretation of results requires that all this information be organized and presented to the reviewer in an easily digestible way. We propose a Full Disclosure Report (FDR) for this purpose. Repeatable experimentation requires additional work. As repeatability is impractical in the wild wireless environment, we propose statistical repeatability of results, where experiments are clustered based on the similarity of networking conditions (channel conditions, station workload, network traffic load) and scenario configurations. Then, it is possible to make comparisons based on the similarity of conditions. Providing tools that offer a user-friendly way to apply the methodology is equally important. We need tools to easily describe scenarios and to manage the scheduling of large numbers of runs (possibly hundreds or thousands). We also need tools to manage huge amounts of packet-trace data, metadata and the provenance (chronological record of measurement and analysis steps) of results (figures, tables, graphs, etc.). Therefore, in addition to the methodology, we developed a toolbox for wireless experimentation and carried out two case studies to validate the methodology. In short, we present a holistic view of benchmarking in wireless networks and formulate a methodology, complemented by tools and case studies, to help drive future efforts on benchmarking protocols and applications in wireless networks
An Evaluation of Media-Oriented Rate Selection Algorithm for Multimedia Transmission in MANETs by Mohammad Hossein Manshaei( )

1 edition published in 2005 in English and held by 2 WorldCat member libraries worldwide

Enhancing experimentation in wireless networks by Diego Dujovne( Book )

2 editions published in 2009 in English and held by 2 WorldCat member libraries worldwide

Since the inception of the 802.11 standard in 1999, WLANs, once exceptional, have become a massive phenomenon along with the evolution of portable devices. At the same pace, research on wireless networks, where both simulation and experimentation are used to validate protocols, has evolved rapidly. Nevertheless, the models used in this area were adapted from the wired paradigm, which has led to a significant gap between simulated and experimental results. Therefore, to validate wireless protocols or algorithms, wireless experimentation becomes an important resource. This thesis explores improvements to experimentation on WLANs from a methodological point of view. We first create a new data-abstraction model to represent network events and aggregated data logs; second, we design a data-management methodology to support a database model; and third, we replace custom-made processing scripts with data-oriented filter modules. These three key points converge into a wireless experimentation methodology that specifies the experimental conditions and improves reproducibility. Furthermore, we present Wextool, an implementation of a wireless experimentation tool which applies the methodology, and finally we show the improvements through a wireless multicast experimentation use case and a performance evaluation of the process
Communication mechanisms for message delivery in heterogeneous networks prone to episodic connectivity by Rao Naveed Bin Rais( Book )

2 editions published in 2011 in English and held by 2 WorldCat member libraries worldwide

Il est prévu que l'Internet du futur interconnectera les différents types de réseau, y compris des réseaux infrastructures et des réseaux ad-hoc sans fil. D'ailleurs, de nouvelles applications comme environnemental monitoring exigent que l'Internet du futur soit tolérant aux perturbations de la connectivité. L'interconnexion de ces réseaux hétérogènes pose de nombreux défis, y compris la remise des messages et l'identification des nœuds mobiles. Il y a trois contributions de cette thèse. Tout d'abord, nous présentons une classification des protocoles de routage DTN en nous basant sur les stratégies de routage. Deuxièmement, nous proposons un nouveau framework appelé MeDeHa pour assurer une livraison de messages sur des réseaux hétérogènes à connectivité intermittente. MeDeHa est capable d'interconnecter des réseaux infrastructures avec des réseaux ad-hoc, en utilisant plusieurs interfaces de nœuds et il permet l'intégration des protocoles existants. Nous évaluons MeDeHa avec des scénarios simulés mais réalistes en utilisant la mobilité des traces synthétiques et réelles, et en effectuant des expériences hybrides qui fonctionnent en partie sur simulateur et en partie sur des machines réelles. Troisièmement, nous proposons un mécanisme de nommage appelé HeNNa pour des réseaux hétérogènes à connectivité » épisodique, qui permette la remise des messages aux nœuds indépendamment de leur adresse IP. HeNNA est compatible avec le routage actuel de l'Internet. Nous avons aussi mis en œuvre HeNNa avec MeDeHa afin de présenter le fonctionnement de la pile protocole complète
FHCF: A Simple and Efficient Scheduling Scheme for IEEE 802.11e Wireless LAN by Pierre Ansel( )

1 edition published in 2006 in English and held by 2 WorldCat member libraries worldwide

Cross layer interactions for adaptive communications in IEEE 802.11 wireless LANs by Mohammad Hossein Manshaei( Book )

in English and held by 1 WorldCat member library worldwide

The main goal of this thesis is to propose efficient adaptive communication mechanisms using cross-layer interactions in IEEE 802.11 WLANs. First, we present a detailed performance evaluation of the 802.11a/b PHY-layer transmission modes. The second contribution concerns 802.11 MAC/PHY-layer modelling: we propose an analytical model that accounts for the positions of stations with respect to the access point when evaluating the performance of the 802.11 MAC layer. The third contribution concerns rate adaptation mechanisms, and especially cross-layer algorithms between the MAC and PHY layers. We propose an adaptive rate selection algorithm for low-latency systems, called AARF, which improves upon ARF to provide both short-term and long-term adaptation. In this area, we also present AMRR, a new rate adaptation algorithm designed for high-latency systems, which has been implemented and evaluated on an AR5212-based device. We then propose a closed-loop, dynamic rate selection algorithm that can be implemented in all 802.11a/b/g-compliant wireless local area networks. This algorithm, called CLARA, is a culmination of the best attributes of the transmitter-based ARF and RBAR control mechanisms, with additional practical features to facilitate multipath fading-channel sensing and feedback control signalling. The last contribution concerns the optimization of real-time multimedia transmission over 802.11-based networks. In particular, we propose a simple and efficient cross-layer mechanism, called MORSA, for dynamically selecting the transmission mode considering both the channel conditions and the characteristics of the media
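The short-term/long-term adaptation that AARF adds to ARF can be sketched as follows: move up a rate after a run of successes and down after a failure, but when a probe at the higher rate fails immediately, double the success threshold so the next upward probe comes later. The rate list, initial threshold of 10 and cap of 50 below are illustrative assumptions, not the thesis's exact parameters.

```python
# Minimal sketch of the AARF-style rate adaptation idea (hedged, illustrative).

RATES = [6, 12, 24, 36, 48, 54]  # Mb/s, an 802.11a-like subset for the example

class AARF:
    def __init__(self):
        self.idx = 0              # current rate index
        self.threshold = 10       # successes required before probing upward
        self.successes = 0
        self.just_probed = False  # True right after moving to a higher rate

    def on_success(self):
        self.successes += 1
        self.just_probed = False
        if self.successes >= self.threshold and self.idx < len(RATES) - 1:
            self.idx += 1         # probe the next higher rate
            self.successes = 0
            self.just_probed = True

    def on_failure(self):
        if self.just_probed:
            # The probe failed right away: back off and double the threshold,
            # so we retry the higher rate less often (long-term adaptation).
            self.threshold = min(self.threshold * 2, 50)
        else:
            self.threshold = 10   # ordinary failure: reset the threshold
        if self.idx > 0:
            self.idx -= 1
        self.successes = 0
        self.just_probed = False

    @property
    def rate(self):
        return RATES[self.idx]

a = AARF()
for _ in range(10):
    a.on_success()            # 10 successes: probe 12 Mb/s
a.on_failure()                # probe fails immediately: back to 6 Mb/s
```

After the failed probe, `a.threshold` is 20, so the station now waits twice as long before probing again; this is the binary backoff that distinguishes AARF from plain ARF.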
Livraison de contenus sur un réseau hybride satellite / terrestre by Elie Bernard Bouttier( )

1 edition published in 2018 in French and held by 1 WorldCat member library worldwide

The growth and intensification of Internet usage make it necessary to improve existing networks. However, we observe strong inequalities between urban areas, which are well served and concentrate most of the investment, and rural areas, which are underserved and neglected. Faced with this situation, users in underserved areas are turning to other forms of Internet access, in particular satellite Internet access. The latter, however, suffers from the long delay induced by the propagation time between the Earth and the geostationary orbit. In this thesis, we are interested in the simultaneous use of a terrestrial access network, characterized by low delay and low throughput, and a satellite access network, characterized by high throughput and long delay. Meanwhile, Content Delivery Networks (CDNs), consisting of large numbers of cache servers, provide an answer to the growth in traffic and the needs in terms of latency and throughput. However, located in core networks, cache servers remain far from end users and do not reach access networks. Internet Service Providers (ISPs) have therefore taken an interest in deploying their own CDNs, referred to as TelCo CDNs. Content delivery ideally relies on interconnection between CDN operators and TelCo CDNs, allowing delivery to be delegated to the TelCo CDNs, which can then optimize delivery over their own networks, of which they have better knowledge. We thus study the optimization of content delivery over a hybrid satellite/terrestrial network integrated into a CDN delivery chain. We first describe an architecture that, thanks to CDN interconnection, handles content delivery on the hybrid network. We then study the value of the information provided by the CDN context for routing in such an architecture, and in this framework we propose a routing mechanism based on content size. Finally, we show the superiority of our approach over the multipath transport protocol MP-TCP
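The content-size-based routing idea can be sketched by estimating per-path delivery time as propagation delay plus size divided by throughput, and picking the minimum: small objects favor the low-delay terrestrial link, large objects the high-throughput satellite link. The link figures below are assumptions for illustration, not measurements from the thesis.

```python
# Hedged sketch of size-based path selection on a hybrid network.

LINKS = {
    "terrestrial": {"delay_s": 0.02, "rate_bps": 2e6},   # low delay, low rate
    "satellite":   {"delay_s": 0.27, "rate_bps": 20e6},  # GEO delay, high rate
}

def delivery_time(link, size_bytes):
    """Estimated completion time: one-way delay + transmission time."""
    l = LINKS[link]
    return l["delay_s"] + size_bytes * 8 / l["rate_bps"]

def choose_path(size_bytes):
    """Route each content over the path minimizing estimated delivery time."""
    return min(LINKS, key=lambda name: delivery_time(name, size_bytes))
```

With these numbers, a 10 KB object goes terrestrial (delay dominates) and a 10 MB object goes over the satellite (transmission time dominates); the crossover size depends entirely on the assumed link parameters.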
Une approche générique pour l'automatisation des expériences sur les réseaux informatiques by Alina Quereilhac( )

1 edition published in 2015 in English and held by 1 WorldCat member library worldwide

This thesis proposes a generic approach to automate network experiments for scenarios involving any networking technology on any type of network evaluation platform. The proposed approach is based on abstracting the experiment life cycle of the evaluation platforms into generic steps from which a generic experiment model and experimentation primitives are derived. A generic experimentation architecture is proposed, composed of an experiment model, a programmable experiment interface and an orchestration algorithm that can be adapted to network simulators, emulators and testbeds alike. The feasibility of the approach is demonstrated through the implementation of a framework capable of automating experiments using any combination of these platforms. Three main aspects of the framework are evaluated: its extensibility to support any type of platform, its efficiency to orchestrate experiments and its flexibility to support diverse use cases including education, platform management and experimentation with multiple platforms. The results show that the proposed approach can be used to efficiently automate experimentation on diverse platforms for a wide range of scenarios
Apport de la gestion des interférences aux réseaux sans-fil multi-sauts : le cas du Physical-Layer Network Coding by Raphaël Naves( )

1 edition published in 2018 in French and held by 1 WorldCat member library worldwide

Frequently used to complement traditional mobile networks, wireless multi-hop networks, also known as ad-hoc networks, are particularly valuable for emergency communications thanks to their ability to operate without any infrastructure. Nevertheless, since the capacity of these networks is limited as the number of users grows, the scientific community is striving to redefine them in order to extend their use to civilian communications. Interference management, considered one of the main challenges for increasing the achievable throughput of wireless multi-hop networks, has notably undergone a paradigm shift in recent years. While it has historically been governed by medium-access protocols designed to avoid interference between users, various advanced digital communication techniques now make it possible to process this interference, and even to exploit it. These transmission techniques, known as interference management techniques, thus compete with traditional scheduling mechanisms by allowing several simultaneous transmissions, in the same frequency band, toward the same receiver. In this thesis, we focus on one of these techniques, Physical-Layer Network Coding (PLNC), with a view to integrating it into ad-hoc networks composed of several dozen nodes. As earlier work mainly addressed small topologies, we first developed a framework to evaluate the large-scale throughput gains of PLNC over traditional interference-free transmissions. Motivated by the results obtained, we then defined a new usage framework for this technique aimed at broadening its scope of application.
The proposed PLNC scheme, tested both on real radio equipment and in simulation, proved to offer significant gains in throughput and reliability compared with existing solutions
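The canonical motivating example for PLNC is the two-way relay: A and B transmit simultaneously, the relay decodes only the XOR of their packets (not each one separately) and broadcasts it, and each end recovers the other's packet by XOR-ing with its own copy, completing the exchange in two slots instead of four. The packet-level sketch below illustrates this exchange; the actual physical-layer decoding of the superposed signals is, of course, far more involved.

```python
# Toy packet-level sketch of the two-way relay exchange behind PLNC.

def xor_bytes(x, y):
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(a ^ b for a, b in zip(x, y))

msg_a = b"hello"   # packet from node A
msg_b = b"world"   # packet from node B

# Slot 1: A and B transmit simultaneously; the relay decodes the XOR
# of the two packets directly from the superposed signal.
relay_decodes = xor_bytes(msg_a, msg_b)

# Slot 2: the relay broadcasts the XOR packet; each end cancels its
# own contribution to recover the other's packet.
recovered_at_b = xor_bytes(relay_decodes, msg_b)
recovered_at_a = xor_bytes(relay_decodes, msg_a)
```

Halving the number of slots for this exchange is exactly the kind of throughput gain the thesis's framework evaluates at larger scales.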
Efficient cqi feedback resource utilisation for multi-user multi-carrier wireless systems. by Mohammad abdul Awal( )

1 edition published in 2011 in English and held by 1 WorldCat member library worldwide

Orthogonal Frequency Division Multiple Access (OFDMA) has been adopted by fourth-generation (4G) telecommunication systems as the transmission and multiple-access technique, owing to its superior spectral efficiency. In such systems, dynamically adapting the rate to the channel quality, reported through the Channel Quality Indicator (CQI), is an active research topic attracting both academic and industrial attention. This dynamic adaptation is even harder to manage in heterogeneous, resource-limited multi-user environments such as OFDMA systems like Mobile WiMAX and Long Term Evolution (LTE). In this thesis, we address the problem of allocating resources to CQI feedback in multi-carrier, multi-user OFDMA systems. To reduce the feedback overhead, we propose a CQI prediction method that exploits the CQI's temporal correlation, together with a cross-layer solution. The goal is to find adaptive resource-allocation schemes that respect application-level quality-of-service (QoS) constraints. We first propose a feedback-reduction algorithm, PBF (Prediction Based Feedback), which lets the base station (BS) predict some CQI reports using the recursive least-squares (RLS) algorithm. Simulation results show that CQI prediction appreciably reduces the feedback overhead and consequently improves uplink throughput. We then propose an opportunistic version of PBF to mitigate the over- and under-estimation effects of the prediction algorithm.
In this mechanism, we exploit cross-layer information to improve the performance of periodic feedback mechanisms such as PBF. The opportunistic approach appreciably improves system performance in high-mobility cases compared with low-mobility ones. Next, we propose FEREP (feedback resource allocation and prediction), a cross-layer framework implemented at the BS that integrates CQI prediction, dynamic adaptation and feedback-request scheduling. It comprises three modules. The FWA (feedback window adaptation) module dynamically manages each mobile station's (MS) feedback window based on the received ARQ (Automatic Repeat Request) messages, which reflect the current state of the respective channels. The PBFS (priority-based feedback scheduling) module then schedules feedback, taking into account the feedback window size and the user profile, under the constraint of the system's overall feedback resource budget. To choose the transmission parameters (modulation and coding schemes, MCS), the PBF module is used for users whose feedback could not be scheduled in the current frame. Simulation results show a significant performance gain for FEREP compared with a reference mechanism, particularly under severe feedback resource constraints. The ARQ protocol generates an acknowledgment only if the user is selected by the scheduler to receive data on the downlink. When users are scheduled less frequently on the downlink, fewer ARQ messages are available, which degrades the performance of the FEREP framework described above.
Indeed, in this case ARQ signalling becomes insufficient to adapt each user's feedback window effectively. To address this problem, we propose the DCRA (dynamic CQI resource allocation) algorithm, which estimates the feedback window in two modes. The first is an offline mode based on empirical studies that estimate the optimal average feedback window from the user's application and mobility profiles. Our simulation-based performance analysis shows that the feedback window can be estimated from users' service classes and mobility profiles for a given cellular environment. The second mode of DCRA adapts the window dynamically at run time when ARQ signalling is sufficient. A comparative study with the DFS (deterministic feedback scheduling) and OFS (opportunistic feedback scheduling) mechanisms shows that DCRA achieves better uplink resource savings by reducing the feedback overhead, without excessively degrading users' downlink throughput. From the users' perspective, DCRA improves QoS metrics such as the packet loss rate and reduces terminal energy consumption thanks to the feedback reduction
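The core of the PBF idea, predicting the next CQI report from its temporal correlation with past reports so the base station can skip some feedback requests, can be sketched with a plain recursive least-squares (RLS) filter. The filter order, forgetting factor and test series below are illustrative assumptions, not the thesis's configuration.

```python
# Minimal RLS predictor for a scalar CQI series (hedged sketch).

def rls_predictor(order=2, lam=0.98, delta=100.0):
    """Return a closure: feed it each observed CQI, get the next prediction."""
    w = [0.0] * order                      # filter weights
    P = [[delta if i == j else 0.0 for j in range(order)]
         for i in range(order)]            # inverse-correlation estimate
    history = []

    def step(new_cqi):
        if len(history) >= order:
            x = history[-order:]           # regressor: the last `order` CQIs
            # gain g = P x / (lam + x' P x)
            Px = [sum(P[i][j] * x[j] for j in range(order)) for i in range(order)]
            denom = lam + sum(x[i] * Px[i] for i in range(order))
            g = [v / denom for v in Px]
            err = new_cqi - sum(w[i] * x[i] for i in range(order))
            for i in range(order):
                w[i] += g[i] * err         # weight update toward the LS fit
            # P = (P - g x' P) / lam
            xP = [sum(x[i] * P[i][j] for i in range(order)) for j in range(order)]
            for i in range(order):
                for j in range(order):
                    P[i][j] = (P[i][j] - g[i] * xP[j]) / lam
        history.append(new_cqi)
        if len(history) < order:
            return new_cqi                 # not enough history yet
        return sum(w[i] * history[-order + i] for i in range(order))

    return step

predict = rls_predictor()
series = [10, 11, 12, 13, 14, 15, 16, 17]  # steadily improving channel
pred = None
for cqi in series:
    pred = predict(cqi)
# After the ramp, the filter has learned the linear trend, so the
# prediction for the next report is close to 18.
```

A base station using such a predictor would poll the mobile only when the predicted CQI is deemed unreliable, which is what reduces the uplink feedback overhead.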
Vers les réseaux de nouvelle génération avec SDN et NFV by Andrea Tomassilli( )

1 edition published in 2019 in English and held by 1 WorldCat member library worldwide

Recent advances in networks, such as Software Defined Networking (SDN) and Network Function Virtualization (NFV), are changing the way network operators deploy and manage Internet services. On one hand, SDN introduces a logically centralized controller with a global view of the network state. On the other hand, NFV enables the complete decoupling of network functions from proprietary appliances and runs them as software applications on general-purpose servers, so that network operators can dynamically deploy Virtual Network Functions (VNFs). SDN and NFV benefit network operators by providing new opportunities for reducing costs, enhancing network flexibility and scalability, and shortening the time-to-market of new applications and services. Moreover, SDN's centralized routing model, together with the possibility of instantiating VNFs on demand, may open the way to even more efficient network operation and resource management. For instance, an SDN/NFV-enabled network may simplify Service Function Chain (SFC) deployment and provisioning by making the process easier and cheaper. In this study, we investigate how to leverage both SDN and NFV in order to exploit their potential benefits. We address the new opportunities offered in terms of network design, network resilience and energy savings, and the new problems that arise in this context, such as optimal network-function placement. We show that a symbiosis between SDN and NFV can improve network performance and significantly reduce the network's Capital Expenditure (CapEx) and Operational Expenditure (OpEx)
Réseaux virtualisés de prochaine génération basés sur SDN by Myriana Rifai( )

1 edition published in 2017 in English and held by 1 WorldCat member library worldwide

Software Defined Networking (SDN) was created to provide network programmability and to ease complex configuration. Although SDN enhances network performance, it still faces multiple limitations. In this thesis, we build solutions that form a first step towards next-generation SDN-based networks. In the first part, we present MINNIE, which scales the number of rules of SDN switches far beyond the few thousand commonly available in TCAM memory, permitting typical data-center traffic to be handled at very fine grain. To do so, MINNIE dynamically compresses the routing rules installed in the TCAM, increasing the number of rules that can be installed. In the second part, we tackle the degraded performance of short flows and present a coarse-grained scheduling prototype that leverages SDN switch statistics to decrease their end-to-end delay. We then aim at decreasing the 50 ms failure-protection interval, which is not adapted to current broadband speeds and can lead to degraded Quality of Experience. Our solution, PRoPHYS, leverages switch statistics in hybrid networks to anticipate link failures, drastically decreasing the number of packets lost. Next, we tackle the greening problem, where energy efficiency often comes at the cost of performance degradation. We present SENAtoR, our solution that leverages SDN nodes in hybrid networks to turn off network devices without hindering network performance. Finally, we present SEaMLESS, which converts idle virtual machines into virtual network functions (VNFs) to enable the administrator to further consolidate the data center by turning off more physical servers and reusing resources (e.g. RAM) that are otherwise monopolized
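To see why compressing forwarding rules saves scarce TCAM space, consider one simple compression idea in this spirit (explicitly not MINNIE's actual algorithm, which the abstract does not detail): if most destinations share one output port, that port can become a single low-priority default rule, and only the exceptions keep dedicated entries.

```python
# Hedged illustration of rule compression via a default entry.
from collections import Counter

def compress(table):
    """table: dict dest -> port. Returns (exception rules, default port).
    TCAM cost drops from len(table) entries to len(exceptions) + 1."""
    default_port, _ = Counter(table.values()).most_common(1)[0]
    exceptions = {d: p for d, p in table.items() if p != default_port}
    return exceptions, default_port

def lookup(exceptions, default_port, dest):
    """Exceptions have priority; everything else hits the default rule."""
    return exceptions.get(dest, default_port)

# 20 destinations, 18 of which share port 1: 20 rules become 3 entries
# (2 exceptions + 1 default). Addresses are made up for the example.
table = {f"10.0.0.{i}": 1 for i in range(20)}
table["10.0.0.3"] = 2
table["10.0.0.7"] = 3

exceptions, default = compress(table)
```

Production schemes compress along prefixes and priorities rather than exact destinations, but the trade-off is the same: fewer TCAM entries in exchange for a compression step on rule installation.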
Virtualisation résiliente des fonctions réseau pour les centres de données et les environnements décentralisés by Ghada Moualla( )

1 edition published in 2019 in English and held by 1 WorldCat member library worldwide

Traditional networks are based on an ever-growing variety of network functions that run on proprietary hardware devices called middleboxes. Designing these vendor-specific appliances and deploying them is complex, costly and time-consuming. Moreover, with ever-increasing and heterogeneous short-term service requirements, service providers have to scale up their physical infrastructure periodically, which results in high CAPEX and OPEX. This traditional paradigm leads to network ossification and high complexity in network management and service provisioning when addressing emerging use cases. Network Function Virtualization (NFV) has attracted notable attention as a promising paradigm to tackle such challenges by decoupling network functions from the underlying proprietary hardware and implementing them as software, named Virtual Network Functions (VNFs), able to run on inexpensive commodity hardware. These VNFs can be arranged and chained together in a predefined order, the so-called Service Function Chaining (SFC), to provide end-to-end services. Despite all the benefits associated with the new paradigm, NFV comes with the challenge of how to place the functions of the users' requested services within the physical network while providing the same resiliency as if a dedicated infrastructure were used, given that commodity hardware is less reliable than dedicated hardware. This problem becomes particularly challenging when service requests have to be fulfilled as soon as they arise (i.e., in an online manner). In light of these new challenges, we propose new solutions to tackle the problem of online SFC placement while ensuring the robustness of the placed services against physical failures in data-center (DC) topologies. Although recovery solutions exist, they still require time during which the impacted services remain unavailable, whereas smart placement decisions can avoid the need to react to simple network failures in the first place.
First, we provide a comprehensive study of how placement choices affect the overall robustness of the placed services. Based on this study, we propose a deterministic solution applicable when the service provider has full knowledge of and control over the infrastructure. Thereafter, we move from this deterministic solution to a stochastic approach for the case where SFCs are requested by tenants oblivious to the physical DC network, who only have to provide the SFC they want to place and the required availability level (e.g., five nines). We simulated several solutions, and the evaluation results show the effectiveness of our algorithms and the feasibility of our propositions in very large-scale data center topologies, which makes it possible to use them in a production environment. All these solutions work well in trusted environments with a central authority that controls the infrastructure. However, in some cases, many enterprises need to collaborate in order to run tenants' applications, e.g., MapReduce applications. In such a scenario, we move to a completely untrusted, decentralized environment with no trust guarantees, in the presence not only of Byzantine nodes but also of rational nodes. We consider the case of MapReduce applications in such an environment and present an adapted MapReduce framework called MARS, which is able to work correctly in such a context without the need for any trusted third party. Our simulations show that MARS guarantees execution integrity in MapReduce, scaling linearly with the number of Byzantine nodes in the system
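The role of replication in reaching an availability target such as "five nines" on unreliable commodity hardware can be seen with a back-of-the-envelope calculation. This sketch only shows the standard independent-failure formula 1 − (1 − a)^k; the thesis's placement algorithms are far more elaborate, and the 0.99 per-node availability used below is an assumed figure.

```python
# How many independent replicas of a VNF are needed to meet an
# availability target, assuming each node is available with probability a?

def replicas_needed(node_availability, target):
    """Smallest replica count k such that 1 - (1 - a)^k >= target."""
    k = 1
    while 1 - (1 - node_availability) ** k < target:
        k += 1
    return k

# A single 99%-available node falls short of "five nines" (0.99999);
# three replicas are enough under the independence assumption.
print(replicas_needed(0.99, 0.99999))  # → 3
```

In a real data center, failures of nodes sharing a rack or switch are correlated, which is precisely why placement (spreading replicas across failure domains) matters and not just the replica count.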
User-Centric Slicing with Functional Splits in 5G Cloud-RAN by Salma Matoussi( )

1 edition published in 2021 in English and held by 1 WorldCat member library worldwide

The 5G Radio Access Network (RAN) aims to embrace new technologies covering Cloud infrastructure, virtualization techniques, and Software Defined Networking (SDN). Advanced solutions have been introduced to distribute radio access network functions between centralized and distributed locations (functional splits) in order to improve RAN flexibility. However, a major concern is how to allocate RAN resources efficiently while taking into account the heterogeneous requirements of 5G services. In this thesis, we address the problem of user-centric Cloud RAN resource provisioning (referred to as user slicing). We adopt a flexible deployment of the functional split. Our research aims to jointly meet the needs of end users while minimizing the deployment cost. To overcome the high complexity involved, we first propose a new implementation of a Cloud RAN architecture enabling on-demand resource deployment, called AgilRAN. Second, we consider the network function placement subproblem and propose a new user-centric functional split selection strategy named SPLIT-HPSO. Third, we integrate radio resource allocation; to do so, we propose a new heuristic called E2E-USA. In the fourth step, we explore a deep-learning-based approach to propose a real-time user slice allocation scheme, called DL-USA. The results obtained demonstrate the effectiveness of our proposed strategies
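The trade-off behind functional split selection can be sketched as follows. The premise (standard in Cloud-RAN literature): the more centralized the split, the larger the pooling gain, but the tighter the fronthaul latency budget it requires. The split names, latency figures, and gain scores below are illustrative assumptions, not the thesis's SPLIT-HPSO strategy.

```python
# Toy functional-split selector: pick the most centralized split whose
# fronthaul latency requirement the available link can still satisfy.

# (split_name, max_tolerated_fronthaul_latency_ms, centralization_gain)
SPLITS = [
    ("PHY-RF split (fully centralized)", 0.25, 3),
    ("MAC-PHY split",                    2.0,  2),
    ("PDCP-RLC split (distributed)",     30.0, 1),
]

def best_split(fronthaul_latency_ms):
    """Return the name of the most centralized feasible split, or None."""
    feasible = [s for s in SPLITS if fronthaul_latency_ms <= s[1]]
    return max(feasible, key=lambda s: s[2])[0] if feasible else None

print(best_split(1.5))  # → MAC-PHY split
```

A per-user strategy such as the one the thesis proposes would make this choice jointly across users and also weigh deployment cost, rather than latency alone.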
Amélioration des performances des réseaux d'entreprise by Jinbang Chen( )

1 edition published in 2012 in English and held by 1 WorldCat member library worldwide

Over the course of the Internet's evolution, a large number of applications have appeared, with different service requirements in terms of bandwidth, delay, and so on. Yet Internet traffic still exhibits high variability. Several studies reveal that short flows are generally associated with interactive applications; for these, good user-perceived performance is expected, most often in terms of short response times. However, the classical FIFO/drop-tail scheme deployed in today's routers and switches is well known to be biased against short flows. To address this problem in a best-effort network, we propose a new and simple scheduling algorithm called EFD (Early Flow Discard). In this manuscript, we first evaluate the performance of EFD in a wired network with a single bottleneck by means of extensive simulations. We also discuss possible variants of EFD and its adaptations to 802.11 WLANs, chiefly EFDACK and PEFD, which respectively track the volumes exchanged in both directions or simply count packets in one direction, aiming to improve flow-level fairness and interactivity in WLANs. Finally, we focus on profiling enterprise traffic and develop two traffic models, one that accounts for the topological structure of the enterprise and one that incorporates the impact of applications running on top of TCP, to help evaluate and compare the performance of scheduling policies in typical enterprise networks
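The intuition behind scheduling that favors short flows can be sketched with a toy bounded queue: when the buffer is full, the flow that has already sent the most bytes pays the price, so packets of short (interactive) flows get through. This is a simplification for illustration, not the actual EFD algorithm; the class and method names are made up.

```python
# Toy queue that protects short flows: on overflow, the "heaviest" flow
# (most bytes seen so far) loses a packet instead of the newcomer.
from collections import deque

class ShortFlowFriendlyQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()   # (flow_id, size) packets in FIFO order
        self.flow_bytes = {}   # cumulative bytes observed per flow

    def enqueue(self, flow_id, size):
        """Returns True if the packet was admitted, False if dropped."""
        self.flow_bytes[flow_id] = self.flow_bytes.get(flow_id, 0) + size
        if len(self.queue) < self.capacity:
            self.queue.append((flow_id, size))
            return True
        heaviest = max(self.flow_bytes, key=self.flow_bytes.get)
        if flow_id == heaviest:
            return False  # queue full: drop the long flow's own packet
        for i, (f, _) in enumerate(self.queue):
            if f == heaviest:  # make room at the long flow's expense
                del self.queue[i]
                self.queue.append((flow_id, size))
                return True
        return False  # no packet of the heavy flow queued: drop newcomer
```

With a full queue occupied by a long flow, a packet from a new short flow still gets in, which is exactly the bias correction a FIFO/drop-tail queue lacks.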
 

Languages
English (22)

French (6)