WorldCat Identities

Girault, Alain (computer science researcher)

Overview
Works: 26 works in 30 publications in 2 languages and 50 library holdings
Roles: Other, Thesis advisor, Opponent, Author, Editor
Classifications: QA76.758, 004
Publication Timeline
Most widely held works by Alain Girault
Proceedings of the 12th International Conference on Embedded Software by Alain Girault( )

2 editions published in 2015 in English and held by 14 WorldCat member libraries worldwide

Sur la répartition de programmes synchrones by Alain Girault( Book )

3 editions published in 1994 in French and held by 3 WorldCat member libraries worldwide

Synchronous programming was proposed to ease the design and programming of reactive systems (systems whose role is to react continuously to their physical environment, the environment being unable to synchronize itself with the system). These systems are very often distributed, whether for reasons of physical implementation, performance improvement, or fault tolerance. Moreover, work on the compilation of synchronous languages has led to the use of an internal representation of programs in the form of a finite-state automaton: the OC format. This work therefore addresses the automatic distribution of OC programs. The main difficulty is to ensure functional and temporal equivalence between the initial centralized program and the distributed program, and to prove this equivalence, which is indispensable in the field of safety-critical real-time systems. We also strive to locally minimize the control structure of each distributed program. To this end we develop an original on-the-fly test-reduction algorithm using bisimulation techniques. We also fully define the execution environment of the distributed programs; here our main concern is to provide a solution as close as possible to the centralized execution. Finally, in order to explain the desynchronizations introduced by the distribution, we propose an original semantics of the synchronous language Lustre, defined in terms of partial orders
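
As a rough illustration of the automaton view mentioned above (a sketch only; the actual OC format is richer), a synchronous program compiled to a state machine reacts in discrete steps, and distribution must preserve this step-by-step input/output behavior:

    # Minimal sketch (illustrative, not the OC format): a compiled synchronous
    # program seen as a finite-state machine. Each reaction reads the inputs,
    # emits an output, and moves to the next state; distributing the program
    # must keep this reaction-by-reaction behavior observably unchanged.

    def step(state, reset, tick):
        """One reaction of a toy counter node: counts ticks, restarts on reset."""
        if reset:
            return 0, 0                          # (next state, output)
        count = state + (1 if tick else 0)
        return count, count

    state = 0
    for reset, tick in [(False, True), (False, True), (True, False), (False, True)]:
        state, out = step(state, reset, tick)    # the environment drives each reaction
        print(out)                               # prints 1, 2, 0, 1
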
Quantitative Verification and Synthesis by Christian Von Essen( )

1 edition published in 2014 in English and held by 2 WorldCat member libraries worldwide

This thesis contributes to the theoretical study and application of quantitative verification and synthesis. We first study strategies that optimize the ratio of two rewards in MDPs, with the goal of synthesizing efficient controllers in probabilistic environments. We prove that deterministic and memoryless strategies are sufficient. Based on these results we propose three algorithms to treat explicitly encoded models; our evaluation shows that one of them is clearly faster than the others. To extend its scope, we propose and implement a symbolic variant based on binary decision diagrams, and show that it copes with millions of states. Second, we study the problem of program repair from a quantitative perspective. This leads to a reformulation of program repair with the requirement that only faulty runs of the program be changed. We study the limitations of this approach and show how the new requirement can be relaxed. We devise and implement an algorithm to automatically find repairs, and show that it improves the changes made to programs. Third, we study a novel quantitative verification and synthesis framework in which verification and synthesis work in tandem to analyze the quality of a controller with respect to, e.g., robustness against modeling errors. We also include the possibility of approximating the Pareto curve that emerges from combining the model with multiple rewards. This allows us both to study the trade-offs inherent in the system and to choose a configuration to our liking. We apply our framework to several case studies, the major one being the currently proposed next-generation airborne collision avoidance system (ACAS X). We use our framework to help analyze the design space of the system and to validate the controller currently under investigation by the FAA. In particular, we contribute analysis via PCTL and stochastic model checking to add to the confidence in the controller
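
To illustrate the ratio objective (a minimal sketch with made-up numbers, not the thesis's algorithms): once a memoryless strategy is fixed, the MDP induces a Markov chain, and the strategy's value is the ratio of the two expected rewards under the stationary distribution:

    import numpy as np

    # Minimal sketch: evaluating a ratio objective for one fixed memoryless
    # strategy. For a unichain induced Markov chain with stationary
    # distribution pi, the long-run ratio value is
    # (sum_s pi(s) r1(s)) / (sum_s pi(s) r2(s)).

    P = np.array([[0.5, 0.5, 0.0],    # transition matrix of the induced chain
                  [0.2, 0.0, 0.8],
                  [0.0, 1.0, 0.0]])
    r1 = np.array([1.0, 0.0, 3.0])    # reward to maximize (e.g., useful work)
    r2 = np.array([1.0, 2.0, 1.0])    # reward to divide by (e.g., energy), > 0

    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    pi = pi / pi.sum()

    ratio = (pi @ r1) / (pi @ r2)
    print(f"long-run ratio value of this strategy: {ratio:.4f}")
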
Analyses pour l'ordonnançabilité et la flexibilité de systèmes temps-réel by Christophe Prévot( )

1 edition published in 2019 in French and held by 2 WorldCat member libraries worldwide

Real-time systems are often used for applications in the avionics or automotive domains. For those systems, timing constraints are as important as functional constraints: for each functionality it is necessary to compute the maximum duration between the acquisition of the inputs and the production of the corresponding outputs, a duration called the worst-case latency. Given the needs we identified at Thales, we consider uniprocessor systems scheduled under fixed-priority preemptive scheduling, with task chains mapped onto them. Each chain implements a functionality and each task has a fixed and unique priority; the finishing time of a task corresponds to the activation of the following one in the chain, and the processor always executes the task with the highest priority. We then use scheduling analysis to characterize the timing behavior of systems and to compute the worst-case latency of chains. If the worst-case latency of each chain is lower than or equal to its timing constraint, then the system is schedulable. To guarantee the schedulability of a system, we compute upper bounds that are higher than or equal to the worst-case latency. Depending on the precision of the analysis, these bounds may be more or less over-approximated; if the over-approximations are too large, it becomes necessary to over-dimension the system to guarantee its schedulability, which is not desirable in an industrial context. To solve this over-dimensioning problem, we compute more precise upper bounds, and on more general systems than in the state of the art. When computing these upper bounds, it is useful to measure the gap between the worst-case latency and its upper bound. To achieve this, we compute a lower bound on the worst-case latency to evaluate the precision of the upper bound. In the state of the art, lower bounds are computed using simulation; we propose instead to use schedulability analysis to compute lower bounds, defined as execution scenarios that are realizable in the system model. Our lower bounds are computed with equations similar to those established for our upper bounds. Finally, considering the very long lifetime of systems and the quick evolution of technologies, many systems have to evolve during their lifetime. A relevant evolution in an industrial context consists in adding a new chain to an existing system. To guarantee schedulability in this context, we present a methodology to compute the worst-case latency of a new chain while providing guarantees on the schedulability of the new system. This analysis can also be used to determine the sensitivity and/or robustness of a system with respect to timing-parameter changes, such as the execution times of its tasks; this consists in finding extreme values of the timing parameters such that the system remains schedulable. In this thesis, we present analyses to compute upper and lower bounds on the worst-case latency, and to add a new chain while giving guarantees on its schedulability. These results may be extended in the future to handle more complex systems, and the computation of lower bounds may be adapted to other analyses. Finally, since the developed analyses are complex, it would be interesting to certify their correctness using a proof assistant
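
The chain analyses build on classic uniprocessor schedulability theory; as a point of reference, here is the standard response-time fixed-point iteration for independent periodic tasks under fixed-priority preemptive scheduling (a textbook sketch, not the thesis's chain-latency analysis):

    from math import ceil

    # Classic response-time analysis (Joseph & Pandya style): the worst-case
    # response time of task i solves the fixed point
    #   R_i = C_i + sum over higher-priority j of ceil(R_i / T_j) * C_j

    def response_time(C, T, i):
        """Worst-case response time of task i; tasks sorted by priority (0 = highest)."""
        R = C[i]
        while True:
            R_next = C[i] + sum(ceil(R / T[j]) * C[j] for j in range(i))
            if R_next == R:
                return R              # fixed point reached
            if R_next > T[i]:
                return None           # exceeds the period (= implicit deadline)
            R = R_next

    C = [1, 2, 3]                     # worst-case execution times
    T = [4, 8, 12]                    # periods, highest priority first
    for i in range(3):
        print(f"task {i}: R = {response_time(C, T, i)}")   # 1, 3, 7
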
Modèles de calculs flot de données avec paramètres entiers et booléens. Modélisation - Analyses - Mise en oeuvre by Evangelos Bempelis( )

1 edition published in 2015 in English and held by 2 WorldCat member libraries worldwide

Streaming applications are responsible for the majority of the computation load in embedded systems (video conferencing, computer vision). Their high-performance requirements make parallel implementations necessary; hence, modern embedded systems increasingly include multi-core processors that allow massive parallelism. Implementing streaming applications on multi-cores is difficult because of their complexity, which keeps increasing, and because of their strict requirements, both qualitative (robustness, reliability) and quantitative (throughput, power consumption). This is reflected in the evolution of video codecs, which steadily increase in complexity while their performance requirements remain the same. Dataflow models of computation (MoCs) were developed to ease the design of such applications, which are typically composed of filters exchanging streams of data through communication links. These models provide an intuitive representation of streaming applications while exposing the task parallelism of the application. Moreover, they provide static analyses for liveness and bounded-memory execution. However, modern streaming applications feature filters that exchange variable amounts of data, and communication links that can be enabled or disabled. In this thesis, we present a new dataflow MoC, Boolean Parametric Data Flow (BPDF), which allows the amount of data exchanged between filters to be parameterized using integer parameters, and communication links to be activated and deactivated using Boolean parameters. In this way, BPDF can express more complex applications, such as modern video decoders. Despite the increase in expressiveness, BPDF applications remain statically analyzable for liveness and bounded-memory execution. However, the increased expressiveness greatly complicates implementation: integer parameters lead to parametric data dependencies, and Boolean parameters can disable communication links and thereby remove data dependencies. For this reason, we propose a scheduling framework that produces as-soon-as-possible (ASAP) schedules for a given static mapping. It uses scheduling constraints, coming either from the application (data dependencies) or from the user (scheduling optimizations). The constraints are analyzed for liveness and, when possible, simplified. In this way, our framework enables a large variety of scheduling policies while guaranteeing the liveness of the application. Finally, computing the throughput of an application is important both before and during execution: it verifies that the application meets its performance requirements, and it supports runtime scheduling decisions that can improve performance or power consumption. We address this problem by finding parametric expressions for the maximal throughput of a subset of BPDF graphs. Finally, we propose an algorithm that computes a buffer size sufficient for a BPDF application to run at maximal throughput
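
For reference, here is the kind of static analysis that the parametric model generalizes: computing the repetition vector of a plain (non-parametric) SDF graph, with made-up rates (a minimal sketch; BPDF's analyses additionally handle integer and Boolean parameters):

    import math
    from fractions import Fraction

    # Plain SDF rate consistency: each edge (src, dst, p, c) says src produces
    # p tokens and dst consumes c tokens per firing; consistency requires
    # q[src] * p == q[dst] * c for every edge, with q the repetition vector.

    edges = [("A", "B", 2, 3),        # A produces 2 tokens/firing, B consumes 3
             ("B", "C", 1, 2)]

    q = {"A": Fraction(1)}            # seed an arbitrary actor with rate 1
    changed = True
    while changed:                    # propagate rates along the edges
        changed = False
        for src, dst, p, c in edges:
            if src in q and dst not in q:
                q[dst] = q[src] * p / c
                changed = True
            elif dst in q and src not in q:
                q[src] = q[dst] * c / p
                changed = True
            elif src in q and dst in q and q[src] * p != q[dst] * c:
                raise ValueError("inconsistent rates: no bounded-memory schedule")

    # Scale to the smallest integer vector.
    lcm_den = 1
    for f in q.values():
        lcm_den = lcm_den * f.denominator // math.gcd(lcm_den, f.denominator)
    print({a: int(f * lcm_den) for a, f in q.items()})   # {'A': 3, 'B': 2, 'C': 1}
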
Tradeoff exploration between reliability, power consumption, and execution time for embedded systems: the TSH tricriteria scheduling heuristic by Ismail Assayad( )

1 edition published in 2012 in English and held by 2 WorldCat member libraries worldwide

Logico-Numerical Verification Methods for Discrete and Hybrid Systems by Peter Schrammel( )

1 edition published in 2012 in English and held by 2 WorldCat member libraries worldwide

This thesis studies the automatic verification of safety properties of logico-numerical discrete and hybrid systems. These systems have Boolean and numerical variables and exhibit discrete and continuous behavior. Our approach is based on static analysis using abstract interpretation. We address the following issues: numerical abstract interpretation methods require the enumeration of the Boolean states, and hence suffer from the state-space explosion problem; moreover, there is a precision loss due to the widening operators used to guarantee termination of the analysis. Furthermore, we want to make abstract-interpretation-based analysis methods accessible to simulation languages for hybrid systems. In this thesis, we first generalize abstract acceleration, a method that improves the precision of the inferred numerical invariants. Then, we show how to extend abstract acceleration and max-strategy iteration to logico-numerical programs while improving the trade-off between efficiency and precision. Concerning hybrid systems, we translate the Zelus hybrid synchronous programming language to logico-numerical hybrid automata and extend logico-numerical analysis methods to hybrid systems. Finally, we implemented the proposed methods in ReaVer, a REActive System VERification tool, and provide experimental results. In conclusion, this thesis proposes a unified approach to the verification of discrete and hybrid logico-numerical systems based on abstract interpretation, which is capable of integrating sophisticated numerical abstract interpretation methods while successfully trading precision for efficiency
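
To see why widening both guarantees termination and loses precision (the trade-off that abstract acceleration improves), here is a textbook interval-domain analysis of a simple counting loop (illustrative only, not ReaVer's implementation):

    # Abstract interpretation of `x = 0; while x < 100: x += 1` in the
    # interval domain. Without widening, the ascending chain [0,0], [0,1],
    # [0,2], ... never stabilizes; widening jumps unstable bounds to infinity.

    INF = float("inf")

    def join(a, b):                      # least upper bound of two intervals
        return (min(a[0], b[0]), max(a[1], b[1]))

    def widen(old, new):                 # standard interval widening
        lo = old[0] if new[0] >= old[0] else -INF
        hi = old[1] if new[1] <= old[1] else INF
        return (lo, hi)

    x = (0, 0)                           # abstract value at the loop head
    while True:
        body = (x[0] + 1, x[1] + 1)      # effect of one iteration: x += 1
        new = join((0, 0), body)         # loop head = entry U back edge
        if new == x:
            break
        x = widen(x, new)                # accelerate convergence
    print("invariant at loop head:", x)  # (0, inf): widening lost the upper
                                         # bound 100 that acceleration recovers
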
Génération de code pour un many-core avec des contraintes temps réel fortes by Amaury Graillat( )

1 edition published in 2018 in French and held by 2 WorldCat member libraries worldwide

Most critical systems are subject to hard real-time requirements. These systems are more and more complex, and the computational power of predictable single-core processors is no longer sufficient. Multi- or many-core architectures are good alternatives, but interference on shared resources must be taken into account to avoid unpredictable timing effects. For many-cores, the Network-on-Chip (NoC) must be configured such that deadlocks are avoided and a tight Worst-Case Traversal Time (WCTT) of the communications can be computed. The Kalray MPPA2 is a many-core architecture with good timing properties. Synchronous dataflow languages such as Lustre or Scade are widely used for avionics critical software. In these languages, programs are described by networks of computational nodes. We introduce a method to extract parallel tasks from synchronous programs. Then, we generate parallel code to deploy tasks on the chip and implement NoC and shared-memory communications. The generated code enables traceability. It is based on a time-triggered execution model that relies on a static schedule and minimizes memory interference thanks to the use of memory banks. The code enables the computation of a worst-case execution time bound accounting for memory interference and the WCTT of NoC transmissions. We generate a configuration of the platform that ensures fair bandwidth allocation on the NoC, bounded transmissions through the NoC, and clock synchronization. Finally, we apply this toolchain to avionics case studies and synthetic benchmarks running on 64 cores
Génération automatique de distributions/ordonnancements temps réel, fiables et tolérants aux fautes by Hamoudi Kalla( Book )

2 editions published in 2004 in French and held by 2 WorldCat member libraries worldwide

Reactive systems are increasingly used in fields such as automotive, telecommunications, and aeronautics. These systems carry out complex tasks that are often critical. Given the catastrophic consequences that a fault in these systems could entail, due to hardware faults (processor and communication media), it is essential to take fault tolerance into account in their design. Moreover, several fields require a quantitative evaluation of system behavior with respect to fault occurrence and fault activation. In order to design dependable systems, I propose in this thesis three design methodologies, based on scheduling theory and on active and passive software redundancy. These three methodologies solve the problem of the automatic generation of fault-tolerant, reliable real-time schedules. This problem is NP-hard, so the three methodologies are based on list-scheduling heuristics. More precisely, the first two deal with hardware fault tolerance (faults of processors and communication media), for architectures with point-to-point and bus communication links respectively. The third deals with the quantitative evaluation of schedules in terms of reliability, through an original bi-criteria heuristic. These methodologies offer good performance on randomly generated algorithm and architecture graphs
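
The methodologies are built on list scheduling; as background, a generic list-scheduling heuristic looks as follows (a minimal sketch with made-up tasks, without the fault-tolerance and reliability extensions of the thesis):

    # Generic list scheduling of a task DAG on identical processors: always
    # dispatch the ready task with the longest remaining path ("bottom level")
    # to the earliest-available processor.

    tasks = {"A": 2, "B": 3, "C": 2, "D": 4}          # execution times
    deps  = {"B": ["A"], "C": ["A"], "D": ["B", "C"]} # task -> predecessors

    def bottom_level(t):
        succs = [s for s, ps in deps.items() if t in ps]
        return tasks[t] + max((bottom_level(s) for s in succs), default=0)

    n_procs = 2
    proc_free = [0] * n_procs             # time at which each processor is free
    finish, done = {}, set()
    while len(done) < len(tasks):
        ready = [t for t in tasks if t not in done
                 and all(p in done for p in deps.get(t, []))]
        t = max(ready, key=bottom_level)  # highest-priority ready task
        p = min(range(n_procs), key=lambda i: proc_free[i])
        start = max([proc_free[p]] + [finish[q] for q in deps.get(t, [])])
        finish[t] = start + tasks[t]
        proc_free[p] = finish[t]
        done.add(t)
        print(f"{t} on P{p}: [{start}, {finish[t]})")   # makespan 9 here
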
Design, Optimization, and Formal Verification of Circuit Fault-Tolerance Techniques by Dmitry Burlyaev( )

1 edition published in 2015 in English and held by 2 WorldCat member libraries worldwide

Technology shrinking and voltage scaling increase the risk of fault occurrences in digital circuits. To address this challenge, engineers use fault-tolerance techniques to mask or, at least, detect faults. These techniques are especially needed in safety-critical domains (e.g., aerospace, medical, nuclear), where ensuring circuit functionality and fault tolerance is crucial. However, the verification of functional and fault-tolerance properties is a complex problem that cannot be solved with simulation-based methodologies, due to the need to check a huge number of executions and fault-occurrence scenarios. Optimizing the overheads imposed by fault-tolerance techniques also requires a proof that the circuit keeps its fault-tolerance properties after the optimization. In this work, we propose a verification-based optimization of existing fault-tolerance techniques, as well as the design of new techniques and their formal verification using theorem proving. We first investigate how some majority voters can be removed from Triple-Modular Redundant (TMR) circuits without violating their fault-tolerance properties. The developed methodology clarifies how to take into account a circuit's native error-masking capabilities, which may exist due to the structure of the combinational part or to the way the circuit is used and communicates with the surrounding device. Second, we propose a family of time-redundant fault-tolerance techniques expressed as automatic circuit transformations. They require fewer hardware resources than TMR alternatives and can be easily integrated in EDA tools. The transformations are based on the novel idea of dynamic time redundancy, which allows the redundancy level to be changed on the fly without interrupting the computation. Therefore, time redundancy can be used only in critical situations (e.g., above the Earth's poles, where the radiation level is higher), during the processing of crucial data (e.g., the encryption of selected data), or during critical processes (e.g., a satellite computer reboot). Third, by merging dynamic time redundancy with a micro-checkpointing mechanism, we have created a double-time-redundancy transformation capable of masking transient faults. Our technique makes the recovery procedure transparent, and the circuit's input/output behavior remains unchanged even under faults. Due to the complexity of this method and the need to provide full assurance of its fault-tolerance capabilities, we have formally certified the technique using the Coq proof assistant. The developed proof methodology can be applied to certify other fault-tolerance techniques implemented through circuit transformations at the netlist level
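
The TMR building block in question is the majority voter; a short sketch shows why any single faulty redundant copy is masked:

    # Textbook TMR voter: three redundant copies of a value feed a majority
    # vote, so each output bit follows at least two of the three inputs and
    # any single corrupted copy is masked.

    def majority(a: int, b: int, c: int) -> int:
        """Bitwise majority vote of three redundant copies."""
        return (a & b) | (b & c) | (a & c)

    correct = 0b1011
    assert majority(correct, correct, correct) == correct   # fault-free run
    assert majority(correct, correct, 0b0000) == correct    # one faulty copy masked
    assert majority(correct, 0b1111, correct) == correct    # any single fault masked
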
Mapping and scheduling on multi-core processors using SMT solvers by Pranav Tendulkar( )

1 edition published in 2014 in English and held by 2 WorldCat member libraries worldwide

In order to achieve performance gains, computers have evolved into multi-core and many-core platforms with numerous processor cores. However, finding efficient ways to execute parallel software on them is hard. With a large number of processor cores available, the software must orchestrate communication and synchronization along with code execution. Communication corresponds to the transport of data between different processors, handled transparently by the hardware or explicitly by the software. Models that represent algorithms in a structured and formal way expose the available parallelism. Deploying the software algorithms represented by such models requires specifying which processor executes each task (mapping) and when to execute it (scheduling). Mapping and scheduling is a hard combinatorial problem with an exponential number of solutions. In addition, a solution has multiple costs that need to be optimized, such as memory consumption, execution time, and resources used. Such a problem is called a multi-criteria optimization problem; its solution is a set of incomparable, so-called Pareto solutions, which require special algorithms to approximate. We target a class of applications called streaming applications, which process a continuous stream of data. These applications apply similar computations to different data items and can be conveniently expressed by a class of models called dataflow models. We encode the mapping and scheduling problem in the form of logical constraints and present it to satisfiability modulo theories (SMT) solvers. SMT solvers solve the encoded problem by using a combination of search techniques and constraint propagation to find an assignment of the problem variables satisfying the given cost constraints. In dataflow applications, the design space explodes as the number of tasks and processors increases. In this thesis, we tackle this problem by introducing symmetry-reduction techniques and demonstrate that symmetry breaking accelerates search in the SMT solver, increasing the size of the problems that can be solved. Our design-space exploration algorithm approximates the Pareto front of the problem and produces solutions with different cost trade-offs. Further, we extend the scheduling problem to many-core platforms, which are groups of multi-core platforms connected by a network-on-chip. We provide a design flow that maps applications onto such platforms and automatically inserts additional elements to model communication using bounded memory. We provide experimental results obtained on the 256-processor Kalray and Tilera TILE-64 platforms. Multi-core processors typically have a small amount of memory close to the processor, generally insufficient for all application data to fit. We study a class of parallel applications having a regular data-access pattern and a large amount of data to be processed by a uniform computation. The data must be brought from main memory to local memory, processed, and then the results written back to main memory, all in batches. Selecting the proper granularity of the data brought into local memory is an optimization problem. We formalize this problem and provide a way to determine the optimal transfer granularity depending on the characteristics of the application and the hardware platform. In addition to the scheduling problems and local memory management, we study part of the problem of runtime management of applications. Applications in modern embedded systems can start and stop dynamically. In order to execute all applications efficiently and to optimize global costs such as power consumption and execution time, applications must be reconfigured dynamically at runtime. We present a predictable and composable (executing independently without affecting others) way of migrating tasks according to the reconfiguration decision
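
As a toy version of the SMT encoding described above (illustrative only; the task durations, the single dependency, and the horizon are made up), mapping and scheduling can be phrased as constraints for the z3 solver:

    from z3 import Int, Solver, Or, sat

    # Encode mapping + scheduling of 3 dependent tasks on 2 processors and ask
    # z3 for a schedule whose makespan fits in a horizon of 5 time units.

    dur = [2, 3, 2]
    n_tasks, n_procs, horizon = 3, 2, 5

    start = [Int(f"start_{i}") for i in range(n_tasks)]
    proc  = [Int(f"proc_{i}")  for i in range(n_tasks)]

    s = Solver()
    for i in range(n_tasks):
        s.add(start[i] >= 0, start[i] + dur[i] <= horizon)   # fits in horizon
        s.add(proc[i] >= 0, proc[i] < n_procs)               # valid processor
    s.add(start[0] + dur[0] <= start[2])                     # dependency t0 -> t2
    for i in range(n_tasks):                                 # no overlap on a core
        for j in range(i + 1, n_tasks):
            s.add(Or(proc[i] != proc[j],
                     start[i] + dur[i] <= start[j],
                     start[j] + dur[j] <= start[i]))

    if s.check() == sat:
        m = s.model()
        for i in range(n_tasks):
            print(f"t{i}: proc {m[proc[i]]}, start {m[start[i]]}")

Symmetry breaking, as used in the thesis, would add constraints such as fixing the processor of the first task, pruning mappings that are identical up to a renaming of the (identical) processors.
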
SLAP 2003 : synchronous languages, applications, and programming : proceedings by International Workshop on Synchronous Languages, Applications, and Programming( Book )

1 edition published in 2003 in English and held by 2 WorldCat member libraries worldwide

Des réseaux de processus cyclo-statiques à la génération de code pour le pipeline multi-dimensionnel by Mohammed Fellahi( )

1 edition published in 2011 in English and held by 1 WorldCat member library worldwide

Dataflow applications are important targets of program optimization because of their high computational demands and the diversity of their application domains: communications, embedded systems, multimedia, etc. One of the most important and difficult problems in the design of programming languages for this kind of application is how to schedule them at a fine grain so as to exploit the available machine resources. In this thesis we propose a framework for fine-grain scheduling of dataflow applications, and of nested loops in general. First, we try to parallelize the maximum number of loops by applying software pipelining. Then, we merge the prologue and epilogue of each pipelined loop (phase) to avoid code-size growth. This process is a multidimensional pipeline: some occurrences (or instructions) are shifted by iterations of the inner loop and others by iterations of the outer loop. Experiments show that applying this technique improves performance and extracts parallelism without increasing code size, both for dataflow applications and for nested loops in general
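
A minimal sketch of software pipelining itself (illustrative only, not the thesis's multidimensional framework): stages of successive iterations are overlapped, which is exactly what creates the prologue and epilogue code whose merging the thesis addresses:

    # A loop whose body has three dependent stages (load, compute, store) is
    # rewritten so that stage k of iteration i overlaps stage k-1 of iteration
    # i+1: a prologue fills the pipeline, a kernel runs in steady state, and
    # an epilogue drains it.

    data = [1, 2, 3, 4, 5]
    n = len(data)

    loaded = computed = None
    out = []
    for i in range(n + 2):                       # n iterations + 2 to drain
        if 2 <= i:       out.append(computed)    # store stage, iteration i-2
        if 1 <= i <= n:  computed = loaded * 2   # compute stage, iteration i-1
        if i < n:        loaded = data[i]        # load stage, iteration i
    # i = 0..1 act as the prologue, i = n..n+1 as the epilogue
    print(out)                                   # [2, 4, 6, 8, 10]
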
Compilation optimisante et formellement prouvée pour un processeur VLIW by Cyril Six( )

1 edition published in 2021 in English and held by 1 WorldCat member library worldwide

Software programs are used for many critical roles. A bug in those can have a devastating cost, possibly leading to the loss of human lives. Such bugs are usually found at the source level (where they can be ruled out with source-level verification methods), but they can also be inserted by the compiler unknowingly. CompCert is the first compiler with a formal proof of correctness: compiled programs are proven to behave the same as their source programs. However, because of the challenges involved in proving compiler optimizations, CompCert only has a limited number of them. As such, CompCert usually generates low-performance code compared to classical compilers such as GCC. While this may not significantly impact out-of-order architectures such as x86, on in-order architectures, particularly on VLIW processors, the slowness is significant (code running half as fast as GCC -O2). On VLIW processors, the instruction-level parallelism is explicit and specified in the assembly code through "bundles" of instructions: the compiler must bundle instructions to achieve good performance. In this thesis, we identify, investigate, implement and formally verify several classical optimizations missing in CompCert. We start by introducing a formal model for VLIW bundle execution on an interlocked core, and generate those bundles through a postpass (after register allocation) scheduling. Then, we introduce a prepass (before register allocation) superblock scheduling, implementing static branch prediction and tail duplication along the way. Finally, we further increase the performance of our generated code by implementing loop unrolling, loop rotation and loop peeling, the latter being used for loop-invariant code motion. These transformations are verified by translation validation, some of them with hash-consing to achieve reasonable compilation time. We evaluate each introduced optimization on benchmarks, including Polybench and TACleBench, on the KV3 VLIW core, the ARM Cortex A53, and the RISC-V "Rocket" core. Thanks to this work, our version of CompCert is now only 16% slower (respectively 12% and 30% slower) than GCC -O2 on the KV3 (respectively ARM and RISC-V), instead of 50% (respectively 38% and 45%)
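
A toy version of the translation-validation idea used for these schedulers (illustrative only; CompCert's actual validators are verified Coq developments): symbolically execute the original and the scheduled basic block, then compare the resulting symbolic states:

    # Instructions are (dest, op, src1, src2) over virtual registers. To
    # validate a postpass scheduler on a basic block, execute both sequences
    # on a symbolic state and check that they denote the same final state.

    def sym_exec(block, state):
        state = dict(state)                      # symbolic state: reg -> term
        for dest, op, a, b in block:
            state[dest] = (op, state.get(a, a), state.get(b, b))
        return state

    original  = [("r1", "add", "r0", "4"),
                 ("r2", "mul", "r0", "r0"),
                 ("r3", "add", "r1", "r2")]
    scheduled = [("r2", "mul", "r0", "r0"),      # independent ops reordered
                 ("r1", "add", "r0", "4"),
                 ("r3", "add", "r1", "r2")]

    init = {"r0": "r0_0"}                        # symbolic initial value
    assert sym_exec(original, init) == sym_exec(scheduled, init)
    print("schedule validated: same symbolic final state")

Hash-consing, mentioned above, makes this practical by sharing structurally equal symbolic terms so the comparison stays cheap on large blocks.
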
Ordonnancement d'applications à flux de données pour les MPSoC embarqués hybrides comprenant des unités de calcul programmables et des accélérateurs matériels by Paul-Antoine Arras( )

1 edition published in 2015 in French and held by 1 WorldCat member library worldwide

Although many digital devices today are able to play video content in real time with high-quality rendering, video decoding in embedded systems has not thereby become a trivial operation. Indeed, recent codecs such as H.264 and HEVC are so complex that resorting to mixed software/hardware architectures is almost unavoidable, yet platforms of this type are notoriously difficult to program efficiently. This thesis takes up the challenge of developing dataflow applications for hybrid embedded targets and of executing them efficiently, and makes several contributions. The first is an extension of list-scheduling heuristics to take memory constraints into account. The second is a dataflow execution model compatible with most existing models and with a large class of hardware platforms, together with a dynamic scheduler. Finally, extensive development was carried out on a real STMicroelectronics architecture to demonstrate the feasibility of the approach
Algorithmes d'ordonnancement et schémas de résilience pour les pannes et les erreurs silencieuses by Aurélien Cavelan( )

1 edition published in 2017 in English and held by 1 WorldCat member library worldwide

This thesis is concerned with resilience for high-performance applications at very large scale. On such platforms, comprising up to several million computing units, errors are the norm rather than the exception. Two main types of errors are distinguished: fail-stop failures (typically, an application crash following the failure of a compute node) and silent data corruptions (generally manifesting as erroneous results). The latter raise numerous problems because they are as hard to detect as to correct. In this thesis we first study several detection mechanisms for silent errors. We model the impact of detectors on the execution of scientific applications, making it possible in particular to decide which one to use when several choices are available. Second, we combine fail-stop failures and silent errors within a single resilience scheme: periodically, the application verifies its results and then takes a checkpoint, so that in case of error it is not necessary to re-execute everything. The objective is then to minimize execution time or energy consumption. In this context, we extend several results from the literature by characterizing the optimal resilience scheme for different types of applications. We also provide several exact polynomial-time scheduling algorithms, as well as heuristics for task graphs. Finally, the models are validated through extensive simulations, comparing the results obtained with the state of the art whenever possible
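
A toy Monte Carlo version of the periodic verify-then-checkpoint scheme described above (illustrative only; the parameter values are made up, and the thesis derives such trade-offs analytically rather than by simulation):

    import random
    from math import exp

    # Silent errors strike at rate lam; every T units of work the run verifies
    # (cost V) and, if clean, checkpoints (cost C); a detected corruption rolls
    # back and redoes the current chunk.

    def simulate(total_work, T, V, C, lam, rng):
        time = done = 0.0
        while done < total_work:
            chunk = min(T, total_work - done)
            time += chunk + V                        # do the work, then verify
            if rng.random() < 1.0 - exp(-lam * chunk):
                continue                             # error detected: redo chunk
            time += C                                # clean: take a checkpoint
            done += chunk
        return time

    rng = random.Random(42)
    runs = [simulate(10_000, T=100, V=1.0, C=5.0, lam=1e-3, rng=rng)
            for _ in range(200)]
    print("average makespan over 200 runs:", round(sum(runs) / len(runs), 1))

Sweeping T in such a simulation exhibits the waste trade-off that the optimal-period analyses in the thesis resolve in closed form.
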
RDF : un modèle de calcul flot de données reconfigurable by Arash Shafiei( )

1 edition published in 2021 in English and held by 1 WorldCat member library worldwide

Dataflow models of computation are used to model stream-processing applications in a parallel way. These models are widely used in signal processing, multimedia, telecommunication, and control applications. A dataflow model represents an application as a graph of actors connected by communication channels. Actors are computation units that consume and produce data on their input and output channels. Actors execute in parallel and can be scheduled according to different policies. The best-known model is SDF (Synchronous DataFlow). It enables static analyses that guarantee that SDF applications execute in bounded memory and without deadlock. An SDF graph is static and is specified once and for all at compile time. Extensions have been proposed to allow some dynamism at runtime; however, all these models must anticipate the set of possible graphs, so the number of different topologies of an application must remain small. We address this problem by proposing a new model of computation called RDF (for Reconfigurable DataFlow). RDF extends SDF with programs specifying how and when to change the topology at runtime. A program is made of graph transformation rules describing how the graph is modified, and of conditions specifying when these reconfigurations must take place. Starting from an initial graph and a small number of transformation rules, an arbitrary number of graphs can be generated at runtime. An RDF application can still be analyzed to guarantee that all dynamically generated graphs will execute in bounded memory and without deadlock. The impact of transformation rules on throughput and latency can also be estimated. In this thesis, we introduce the RDF model and describe the associated static safety and performance analyses. We also present our implementation of RDF, which allowed us to evaluate reconfiguration costs and to carry out a case study
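
As an illustration of what a transformation rule might do (a made-up example, not RDF's actual rule syntax): a rule that, when its condition fires, replaces an overloaded actor by two data-parallel copies behind a split/merge pair:

    # The dataflow graph is a set of directed edges between actor names.

    graph = {("src", "F"), ("F", "sink")}

    def data_parallelize(g, actor):
        """Rewrite pred -> actor -> succ into pred -> split -> {copies} -> merge -> succ."""
        preds = [a for a, b in g if b == actor]
        succs = [b for a, b in g if a == actor]
        g = {e for e in g if actor not in e}          # drop the old actor's edges
        for p in preds:
            g.add((p, "split"))
        for i in (1, 2):                              # two data-parallel copies
            g |= {("split", f"{actor}{i}"), (f"{actor}{i}", "merge")}
        for s in succs:
            g.add(("merge", s))
        return g

    # A runtime condition (e.g., "observed load on F above threshold") triggers:
    graph = data_parallelize(graph, "F")
    print(sorted(graph))
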
SLAP '03 - Synchronous languages, applications, and programming : proceedings by SLAP, Satellite Workshop on ETAPS (2nd ; 1st July 2003 ; Grenoble)( Book )

1 edition published in 2003 in English and held by 1 WorldCat member library worldwide

Models, Analysis and Execution of Audio Graphs in Interactive Multimedia Systems by Pierre Donat-Bouillud( )

1 edition published in 2019 in English and held by 1 WorldCat member library worldwide

Interactive Multimedia Systems (IMSs) are used in concert for interactive performances, which combine in real time acoustic instruments, electronic instruments, data from various sensors (gestures, MIDI interfaces, etc.) and the control of different media (video, light, etc.). This thesis presents a formal model of audio graphs, via a type system and a denotational semantics, with multirate timestamped buffered data streams that make it possible to represent, with more or less precision, the interleaving of control (for example a low-frequency oscillator, or velocities from an accelerometer) and audio processing in an IMS. An audio extension of Antescofo, an IMS that acts as a score follower and includes a dedicated synchronous timed language, motivated the development of this model. This extension makes it possible to connect Faust effects and native effects on the fly, safely. The approach has been validated on a mixed-music piece and on an example of audio and video interactions. Finally, this thesis proposes offline optimizations based on the automatic resampling of parts of the audio graph to be executed. A model of quality and execution time in the graph has been defined; its experimental study was carried out using a prototype IMS based on the automatic generation of audio graphs, which also made it possible to characterize resampling strategies proposed for the online, real-time case
Resilient and energy-efficient scheduling algorithms at scale by Guillaume Aupy( )

1 edition published in 2014 in English and held by 1 WorldCat member library worldwide

This thesis deals with two issues for future Exascale platforms, namely resilience and energy. In the first part of this thesis, we focus on the optimal placement of periodic coordinated checkpoints to minimize execution time. We consider fault predictors, software used by system administrators that tries to predict (through the study of past events) where and when faults will strike. In this context, we propose efficient algorithms and give a first-order optimal formula for the amount of work that should be done between two checkpoints. We then focus on silent data corruption errors. Contrary to fail-stop failures, such latent errors cannot be detected immediately, and a mechanism to detect them must be provided. We compute the optimal period in order to minimize the waste. In the second part of the thesis we address the energy-consumption challenge. The speed-scaling technique consists in diminishing the voltage of the processor, hence diminishing its execution speed. Unfortunately, it was pointed out that DVFS increases the probability of failures. In this context, we consider the speed-scaling technique coupled with reliability-increasing techniques such as re-execution, replication, or checkpointing. For these different problems, we propose various algorithms whose efficiency is shown either through thorough simulations or through approximation results relative to the optimal solution. Finally, we consider the different energetic costs involved in periodic coordinated checkpointing and compute the optimal period to minimize energy consumption, as we did for execution time
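
For reference, the "first-order optimal formula" in the baseline setting without predictors is the classical Young/Daly period, which the predictor-aware and energy-aware analyses above refine:

    % With checkpoint cost C and platform MTBF \mu, the waste-minimizing
    % amount of work between two coordinated checkpoints is, to first order,
    \[
      W_{\mathrm{opt}} \approx \sqrt{2\, C\, \mu}.
    \]
    % Example: C = 60 s and \mu = 86400 s (24 h) give
    % W_opt ~ sqrt(2 * 60 * 86400) ~ 3220 s, i.e. about 54 minutes of work
    % between checkpoints.
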
 

Languages
English (16)

French (8)