WorldCat Identities

Linköpings universitet Institutionen för datavetenskap

Works: 591 works in 660 publications in 2 languages and 665 library holdings
Roles: Publisher, pub, Other, Editor
Publication Timeline
Most widely held works by Linköpings universitet
Optimistic replication with forward conflict resolution in distributed real-time databases by Sanny Syberfeldt( Book )

2 editions published in 2007 in English and held by 3 WorldCat member libraries worldwide

In this thesis a replication protocol - PRiDe - is presented, which supports optimistic replication in distributed real-time databases with deterministic detection and forward resolution of transaction conflicts. The protocol is designed to emphasize node autonomy, allowing individual applications to proceed without being affected by distributed operation. For conflict management, PRiDe groups distributed operations into generations of logically concurrent and potentially conflicting operations. Conflicts between operations in a generation can be resolved with no need for coordination among nodes, and it is shown that nodes eventually converge to mutually consistent states. A generic framework for conflict resolution is presented that allows semantics-based conflict resolution policies and application-specific compensation procedures to be plugged in by the database designer and application developer. It is explained how transaction semantics are supported by the protocol, and how applications can tolerate exposure to temporary database inconsistencies. Transactions can detect inconsistent reads and compensate for inconsistencies through callbacks to application-specific compensation procedures. A tool - VADer - has been constructed, which allows database designers and application programmers to quickly construct prototype applications, conflict resolution policies and compensation procedures. VADer can be used to simulate application and database behavior, and supports run-time visualization of relationships between concurrent transactions. Thus, VADer assists the application programmer in conquering the complexity inherent in optimistic replication and forward conflict resolution
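The generation-based conflict management described above can be illustrated with a small sketch: writes that are logically concurrent form a generation, and each node resolves the generation with the same deterministic policy, so all replicas converge without coordinating. All names and the example policy here are invented for illustration and are not taken from the thesis.

```python
# Hypothetical sketch of generation-based conflict resolution in the
# spirit of PRiDe: each node groups logically concurrent, conflicting
# write operations into a "generation" and applies a deterministic
# resolution policy to it, so every node converges to the same value
# without inter-node coordination.

def resolve_generation(ops, policy):
    """Deterministically pick one winning operation from a generation.

    ops    -- list of (node_id, value) writes that logically conflict
    policy -- deterministic choice function over the sorted operations
    """
    # Sorting first makes the outcome independent of the order in
    # which operations happened to arrive at this particular node.
    return policy(sorted(ops))

# Example policy: the write from the highest node id wins; any
# deterministic rule yields convergence.
highest_node_wins = lambda ops: max(ops, key=lambda op: op[0])

generation = [(2, "b"), (1, "a"), (3, "c")]
print(resolve_generation(generation, highest_node_wins))                   # (3, 'c')
print(resolve_generation(list(reversed(generation)), highest_node_wins))  # (3, 'c')
```

Because the policy is a function of the generation's contents only, two nodes that saw the operations in different orders still agree on the winner, which is the convergence property the abstract refers to.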
Towards an approach for efficiency evaluation of enterprise modeling methods by Banafsheh Khademhosseinieh( )

3 editions published in 2013 in English and held by 3 WorldCat member libraries worldwide

Nowadays, there is a belief that organizations should keep improving different aspects of their enterprise to remain competitive in their business segment. For this purpose, it is required to understand the current state of the enterprise, and to analyze and evaluate it in order to figure out suitable change measures. To perform such a process in a systematic and structured way, support from powerful tools is indispensable. Enterprise Modeling is a field that can support improvement processes by developing models that show different aspects of an enterprise. An Enterprise Modeling Method is an important support for Enterprise Modeling. A method is comprised of different conceptual parts: Perspective, Framework, Method Component (which itself contains Procedure, Notation and Concepts), and Cooperation Principles. In an ideal modeling process, both the process and the results are of high quality. One dimension of quality in focus in this thesis is efficiency. The issue of efficiency evaluation in Enterprise Modeling still seems to be a rather unexplored research area. The thesis investigates three aspects of Enterprise Modeling Methods: what efficiency means in this context, how efficiency can be evaluated, and in what phases of a modeling process efficiency could be evaluated. The contribution of the thesis is an approach for evaluating the efficiency of Enterprise Modeling Methods, based also on several case studies. The evaluation approach consists of efficiency criteria that should be met by (different parts of) a method. While a subset of these criteria always needs to be fulfilled in a congruent way, fulfillment of the remaining criteria depends on the application case. To help the user in an initial evaluation of a method, a structure of driving questions is presented.
Anomaly detection and its adaptation : studies on cyber-physical systems by Massimiliano Raciti( )

3 editions published in 2013 in English and held by 3 WorldCat member libraries worldwide

Cyber-Physical Systems (CPS) are complex systems where physical operations are supported and coordinated by Information and Communication Technology (ICT). From the point of view of security, ICT offers new opportunities to increase vigilance and real-time responsiveness to physical security faults. On the other hand, the cyber domain carries all the security vulnerabilities typical of information systems, making security a major new challenge in critical systems. This thesis addresses anomaly detection as a security measure in CPS. Anomaly detection consists of modelling the good behaviour of a system using machine learning and data mining algorithms, and detecting anomalies when deviations from the normality model occur at runtime. Its main feature is the ability to discover kinds of attack not seen before, making it suitable as a second line of defence.
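The principle described above can be reduced to a minimal sketch: learn a model of normal behaviour from clean observations, then flag runtime values that deviate too far from it. A per-feature mean/standard-deviation model with a z-score threshold stands in here for the machine learning and data mining algorithms studied in the thesis; it is purely illustrative.

```python
# Minimal anomaly detection sketch: train a normality model, then
# flag runtime deviations. A z-score test substitutes for the more
# sophisticated algorithms the thesis investigates.
import statistics

def train(normal_samples):
    # The "normality model" is just the mean and spread of the
    # observed good behaviour.
    return statistics.mean(normal_samples), statistics.stdev(normal_samples)

def is_anomalous(value, model, threshold=3.0):
    mean, std = model
    return abs(value - mean) / std > threshold

# Sensor readings gathered during known-good operation:
model = train([10.1, 9.8, 10.0, 10.3, 9.9, 10.2])
print(is_anomalous(10.1, model))  # ordinary reading -> False
print(is_anomalous(25.0, model))  # large deviation  -> True
```

Note the property the abstract highlights: nothing in the model describes any specific attack, so previously unseen attacks are still detectable as long as they perturb the monitored behaviour.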
Exploiting Structure in CSP-related Problems by Tommy Färnqvist( )

3 editions published in 2013 in English and held by 3 WorldCat member libraries worldwide

In this thesis we investigate the computational complexity and approximability of computational problems from the constraint satisfaction framework. An instance of a constraint satisfaction problem (CSP) has three components: a set V of variables, a set D of domain values, and a set C of constraints. The constraints specify a set of variables and associated local conditions on the domain values allowed for each variable, and the objective of a CSP is to assign domain values to the variables subject to these constraints. The first main part of the thesis is concerned with studying restrictions on the structure induced by the constraints on the variables for different computational problems related to the CSP. In particular, we examine how to exploit various graph, and hypergraph, acyclicity measures from the literature to find classes of relational structures for which our computational problems become efficiently solvable. Among the problems studied are such where, in addition to the constraints of a CSP, lists of allowed domain values for each variable are specified (LHom). We also study variants of the CSP where the objective is changed to: counting the number of possible assignments of domain values to the variables given the constraints of a CSP (#CSP), minimising or maximising the cost of an assignment satisfying all constraints given various different ways of assigning costs to assignments (MinHom, Max Sol, and VCSP), or maximising the number of satisfied constraints (Max CSP). In several cases, our investigations uncover the largest known (or possible) classes of relational structures for which our problems are efficiently solvable.
Moreover, we take a different view on our optimisation problems MinHom and VCSP; instead of considering fixed arbitrary values for some (hyper)graph acyclicity measure associated with the underlying CSP, we consider the problems parameterised by such measures in combination with other basic parameters such as domain size and maximum arity of constraints. In this way, we identify numerous combinations of the considered parameters which make these optimisation problems admit fixed-parameter algorithms. In the second part of the thesis, we explore the approximability properties of the (weighted) Max CSP problem for graphs. This is a problem which is known to be approximable within some constant ratio, but not believed to be approximable within an arbitrarily small constant ratio. Thus it is of interest to determine the best ratio within which the problem can be approximated, or at least give some bound on this constant. We introduce a novel method for studying approximation ratios which, in the context of Max CSP for graphs, takes the form of a new binary parameter on the space of all graphs. This parameter may, informally, be thought of as a sort of distance between two graphs; knowing the distance between two graphs, we can bound the approximation ratio of one of them, given a bound for the other
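The CSP components described above (a variable set V, a domain D, and a constraint set C) can be made concrete with a toy instance and a naive backtracking search. The sketch below is illustrative only; it has no connection to the structural restrictions or complexity results of the thesis.

```python
# A CSP instance as (V, D, C) plus a tiny backtracking solver. The
# example instance (2-colouring a 3-vertex path) is invented for
# brevity.

def solve_csp(variables, domain, constraints, assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domain:
        candidate = {**assignment, var: value}
        # A constraint is (scope, predicate); check only constraints
        # whose scope is fully assigned so far.
        if all(pred(*(candidate[v] for v in scope))
               for scope, pred in constraints
               if all(v in candidate for v in scope)):
            result = solve_csp(variables, domain, constraints, candidate)
            if result:
                return result
    return None  # no satisfying assignment exists

neq = lambda a, b: a != b                   # binary "not equal" relation
V, D = ["x", "y", "z"], ["red", "blue"]
C = [(("x", "y"), neq), (("y", "z"), neq)]  # x-y and y-z must differ
print(solve_csp(V, D, C))  # {'x': 'red', 'y': 'blue', 'z': 'red'}
```

In this encoding, LHom-style per-variable lists would simply replace the shared domain `D` with one candidate list per variable, and the counting variant #CSP would enumerate all solutions instead of returning the first.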
Towards an ontology design pattern quality model by Karl Hammar( )

3 editions published in 2013 in English and held by 3 WorldCat member libraries worldwide

The use of semantic technologies, and Semantic Web ontologies in particular, has enabled many recent developments in information integration, search engines, and reasoning over formalised knowledge. Ontology Design Patterns have been proposed as a means of simplifying the development of Semantic Web ontologies by codifying and reusing modelling best practices. This thesis investigates the quality of Ontology Design Patterns. The main contribution of the thesis is a theoretically grounded and partially empirically evaluated quality model for such patterns, including a set of quality characteristics, indicators, measurement methods and recommendations. The quality model is based on established theory on information system quality, conceptual model quality, and ontology evaluation. It has been tested in a case study setting and in two experiments. The main findings of this thesis are that the quality of Ontology Design Patterns can be identified, formalised and measured, and furthermore, that these qualities interact in such a way that ontology engineers using patterns need to make trade-offs regarding which qualities they wish to prioritise. The developed model may aid them in making these choices. This work has been supported by Jönköping University.
Resilience in high risk work : analysing adaptive performance by Amy Rankin( )

3 editions published in 2013 in English and held by 3 WorldCat member libraries worldwide

In today’s complex socio-technical systems it is not possible to foresee and prepare for all future events. To cope with the intricacy and coupling between people, technical systems and the dynamic environment, people are required to continuously adapt. To design resilient systems, a deepened understanding of what supports and enables adaptive performance is needed. In this thesis two studies are presented that investigate how adaptive abilities can be identified and analysed in complex work settings across domains. The studies focus on understanding adaptive performance, what enables successful adaptation, and how contextual factors affect the performance. The first study examines how a crisis command team adapts as they lose important functions of their team during a response operation. The second study presents a framework to analyse adaptive behaviour in everyday work where systems are working near the margins of safety. The examples that underlie the framework are based on findings from focus group discussions with representatives from different organisations, including health care, nuclear, transportation and emergency services. Main contributions of this thesis include the examination of adaptive performance and of how it can be analysed as a means to learn about and strengthen resilience. By using contextual analysis, enablers of adaptive performance and its effects on the overall system are identified. The analysis further demonstrates that resilience is not a system property but a result of situational circumstances and organisational structures. The framework supports practitioners and researchers in reporting findings, structuring cases and making sense of sharp-end adaptations. The analysis method can be used to better understand system adaptive capacities, monitor adaptive patterns and enhance current methods for safety management.
Performance-aware component composition for GPU-based systems by Usman Dastgeer( Book )

3 editions published in 2014 in English and held by 3 WorldCat member libraries worldwide

This thesis addresses issues associated with efficiently programming modern heterogeneous GPU-based systems, containing multicore CPUs and one or more programmable Graphics Processing Units (GPUs). We use ideas from component-based programming to address programming, performance and portability issues of these heterogeneous systems. Specifically, we present three approaches that all use the idea of having multiple implementations for each computation; performance is achieved or retained either a) by selecting a suitable implementation for each computation on a given platform or b) by dividing the computation work across different implementations running on CPU and GPU devices in parallel. In the first approach, we work on a skeleton programming library (SkePU) that provides high-level abstraction while making intelligent implementation selection decisions underneath, either before or during the actual program execution. In the second approach, we develop a composition tool that parses extra information (metadata) from XML files, makes certain decisions online, and, in the end, generates code for making the final decisions at runtime. The third approach is a framework that uses source-code annotations and program analysis to generate code for the runtime library to make the selection decision at runtime. With a generic performance modeling API alongside program analysis capabilities, it supports online tuning as well as complex program transformations. These approaches differ in terms of genericity, intrusiveness, capabilities and knowledge about the program source code; however, they all demonstrate the usefulness of component-based programming techniques for programming GPU-based systems. Through experimental evaluation, we demonstrate how all three approaches, although different in their own way, provide good performance on different GPU-based systems for a variety of applications.
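The shared idea of the three approaches, keeping several implementations of one computation and selecting among them by expected performance, can be sketched in a few lines. This is not how SkePU or the thesis tools work internally; the two "implementations" and the measure-once selection strategy are stand-ins chosen purely to illustrate the principle.

```python
# Illustrative implementation-selection sketch: several variants of
# the same computation, with the fastest one chosen per call from
# measured runtimes. Real systems would use trained performance
# models or online tuning instead of a single measurement.
import time

def cpu_sum(xs):               # stand-in for a CPU implementation
    return sum(xs)

def blocked_sum(xs, block=4):  # stand-in for an accelerator-style variant
    return sum(sum(xs[i:i + block]) for i in range(0, len(xs), block))

def select_and_run(implementations, xs):
    timings = {}
    for name, fn in implementations.items():
        start = time.perf_counter()
        fn(xs)
        timings[name] = time.perf_counter() - start
    best = min(timings, key=timings.get)   # pick the fastest variant
    return best, implementations[best](xs)

impls = {"cpu": cpu_sum, "blocked": blocked_sum}
best, value = select_and_run(impls, list(range(1000)))
print(best, value)  # value is 499500 whichever implementation wins
```

The key property, as in the abstract, is that every implementation computes the same result, so the selection mechanism affects only performance, never correctness.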
Metodisk systemstrukturering : att skapa samstämmighet mellan informationssystemarkitektur och verksamhet by Karin Axelsson( Book )

1 edition published in 1998 in English and held by 2 WorldCat member libraries worldwide

Mission experience : how to model and capture it to enable vicarious learning by Dennis Andersson( Book )

2 editions published in 2013 in English and held by 2 WorldCat member libraries worldwide

Hardware/Software Codesign of Embedded Systems with Reconfigurable and Heterogeneous Platforms by Adrian Alin Lifa( Book )

2 editions published in 2015 in English and held by 2 WorldCat member libraries worldwide

Compound processing for phrase-based statistical machine translation by Sara Stymne( Book )

2 editions published in 2009 in English and held by 2 WorldCat member libraries worldwide

"In this thesis I explore how compound processing can be used to improve phrase-based statistical machine translation (PBSMT) between English and German/Swedish. Both German and Swedish generally use closed compounds, which are written as one word without spaces or other indicators of word boundaries. Compounding is both common and productive, which makes it problematic for PBSMT, mainly due to sparse data problems."
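The compound-splitting preprocessing the abstract alludes to can be sketched with a dictionary-based splitter: a closed compound is decomposed if it can be divided into parts that all occur in a vocabulary. The thesis itself uses corpus statistics for this; the word list and the handling of the Swedish linking "s" below are simplifications invented for illustration.

```python
# Dictionary-based compound splitting sketch for closed compounds
# (German/Swedish style, written without spaces). Illustrative only.

def split_compound(word, vocab, min_part=3):
    """Return a list of vocabulary parts covering `word`, or None."""
    if word in vocab:
        return [word]
    for i in range(min_part, len(word) - min_part + 1):
        head, tail = word[:i], word[i:]
        if head in vocab:
            rest = split_compound(tail, vocab, min_part)
            if rest:
                return [head] + rest
    return None

# Swedish toy example: "riksdagshuset" ("the parliament building").
# The genitive form "riksdags" is listed directly so the linking "s"
# needs no special treatment in this sketch.
vocab = {"riksdag", "riksdags", "huset"}
print(split_compound("riksdagshuset", vocab))  # ['riksdags', 'huset']
```

Splitting compounds into known parts before training and translation reduces exactly the data sparseness the abstract mentions, since the parts occur far more often in a corpus than each full compound does.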
Complexity dichotomies for CSP-related problems by Gustav Nordh( Book )

2 editions published in 2007 in English and held by 2 WorldCat member libraries worldwide

Discrete and Continuous Shape Writing for Text Entry and Control by Per Ola Kristensson( Book )

2 editions published in 2007 in English and held by 2 WorldCat member libraries worldwide

Mobile devices gain increasing computational power and storage capabilities, and there are already mobile phones that can show movies, act as digital music players and offer full-scale web browsing. The information flow is, however, constrained by the inefficient communication channel between the user and the small device. The small mobile phone form factor has proven surprisingly difficult to overcome, and limited text entry capabilities are in effect crippling the user experience of mobile devices. The desktop keyboard is too large for mobile phones, and the keypad too limited. In recent years, advanced mobile phones have come equipped with touch-screens that enable new text entry solutions. This dissertation explores how software keyboards on touch-screens can be improved to provide an efficient and practical text and command entry experience on mobile devices. The central hypothesis is that it is possible to combine three elements: software keyboard, language redundancy and pattern recognition, to create new effective interfaces for text entry and control. These are collectively called "shape writing" interfaces. Words form shapes on the software keyboard layout. Users write words by articulating the shapes for words on the software keyboard. Two classes of shape writing interfaces are developed and analyzed: discrete and continuous shape writing. The former recognizes users' pen or finger tapping motion as discrete patterns on the touch-screen. The latter recognizes users' continuous motion patterns. Experimental results show that novice users can write text with an average entry rate of 25 wpm and an error rate of 1% after 35 minutes of practice. An accelerated novice learning experiment shows that users can exactly copy a single well-practiced phrase with an average entry rate of 46.5 wpm, with individual phrase entry rate measurements up to 99 wpm.
When used as a control interface, users can send commands to applications 1.6 times faster than with de-facto standard linear pull-down menus. Visual command preview leads to significantly fewer errors and shorter gestures for unpracticed commands. Taken together, the quantitative results show that shape writing is among the fastest mobile interfaces for text entry and control currently known, both initially and after practice.
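The core of shape writing, matching an input trace against word shapes defined by the keyboard layout, can be illustrated with a toy recogniser. The keyboard coordinates, the point-wise distance measure, and the tiny lexicon below are all invented for this sketch; the dissertation's recognisers are far more elaborate.

```python
# Toy shape-writing recogniser: a word's "shape" is the sequence of
# key-centre coordinates on a software keyboard, and an input trace
# is recognised as the word whose shape it most closely matches.
import math

# Invented key-centre coordinates on a hypothetical keyboard layout:
KEY_POS = {"c": (3, 2), "a": (1, 1), "t": (5, 0), "o": (9, 0), "n": (6, 2)}

def shape(word):
    return [KEY_POS[ch] for ch in word]

def trace_distance(trace, template):
    # Point-wise Euclidean distance between equally long traces; a
    # real recogniser would use elastic matching over resampled paths.
    return sum(math.dist(p, q) for p, q in zip(trace, template))

def recognise(trace, lexicon):
    templates = {w: shape(w) for w in lexicon if len(w) == len(trace)}
    return min(templates, key=lambda w: trace_distance(trace, templates[w]))

# A slightly noisy pen trace over the keys c-a-t:
trace = [(3.2, 1.9), (1.1, 1.2), (4.8, 0.1)]
print(recognise(trace, ["cat", "can", "tan"]))  # cat
```

This also shows why language redundancy matters: the trace need not hit the keys exactly, because it only has to be closer to the intended word's shape than to any other word in the lexicon.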
Representing Future Situations of Service : Prototyping in Service Design by Johan Blomkvist( Book )

2 editions published in 2014 in English and held by 2 WorldCat member libraries worldwide

Code Generation and Global Optimization Techniques for a Reconfigurable PRAM-NUMA Multicore Architecture by Erik Hansson( Book )

2 editions published in 2014 in English and held by 2 WorldCat member libraries worldwide

Tools and Methods for Analysis, Debugging, and Performance Improvement of Equation-Based Models by Martin Sjölund( Book )

2 editions published in 2015 in English and held by 2 WorldCat member libraries worldwide

Troubleshooting Trucks : Automated Planning and Diagnosis by Håkan Warnquist( Book )

2 editions published in 2015 in English and held by 2 WorldCat member libraries worldwide

Content ontology design patterns : qualities, methods, and tools by Karl Hammar( Book )

2 editions published in 2017 in English and held by 2 WorldCat member libraries worldwide

Ontologies are formal knowledge models that describe concepts and relationships and enable data integration, information search, and reasoning over formalised knowledge. Ontology Design Patterns (ODPs) are reusable solutions intended to simplify ontology development and support the use of semantic technologies by ontology engineers. ODPs document and package good modelling practices for reuse, ideally enabling inexperienced ontologists to construct high-quality ontologies. Although ODPs are already used for development, there are still remaining challenges that have not been addressed in the literature. These research gaps include a lack of knowledge about (1) which ODP features are important for ontology engineering, (2) less experienced developers' preferences and barriers for employing ODP tooling, and (3) the suitability of the eXtreme Design (XD) ODP usage methodology in non-academic contexts. This dissertation aims to close these gaps by combining quantitative and qualitative methods, primarily based on five ontology engineering projects involving inexperienced ontologists. A series of ontology engineering workshops and surveys provided data about developer preferences regarding ODP features, ODP usage methodology, and ODP tooling needs. Other data sources are ontologies and ODPs published on the web, which have been studied in detail. To evaluate tooling improvements, experimental approaches provide data from comparisons of new tools and techniques against established alternatives. The analysis of the gathered data resulted in a set of measurable quality indicators that cover aspects of ODP documentation, formal representation or axiomatisation, and usage by ontologists. These indicators highlight quality trade-offs: for instance, between ODP Learnability and Reusability, or between Functional Suitability and Performance Efficiency.
Furthermore, the results demonstrate a need for ODP tools that support three novel property specialisation strategies, and highlight the preference of inexperienced developers for template-based ODP instantiation, neither of which is supported in prior tooling. The studies also resulted in improvements to ODP search engines based on ODP-specific attributes. Finally, the analysis shows that XD should include guidance on developer roles and responsibilities in ontology engineering projects, suggestions on how to reuse existing ontology resources, and approaches for adapting XD to project-specific contexts.
Yrke: polis : yrkeskunskap, motivation, IT-system och andra förutsättningar för polisarbete by Stefan Holgersson( Book )

1 edition published in 2005 in Swedish and held by 2 WorldCat member libraries worldwide

Management of Real-Time Data Consistency and Transient Overloads in Embedded Systems by Thomas Gustafsson( Book )

2 editions published in 2007 in English and held by 2 WorldCat member libraries worldwide

This thesis addresses the issues of data management in embedded systems' software. The complexity of developing and maintaining software has increased over the years due to the increased availability of resources, e.g., more powerful CPUs and larger memories, as more functionality can be accommodated using these resources. In this thesis, it is proposed that part of the increasing complexity can be addressed by using a real-time database, since data management is one constituent of software in embedded systems. This thesis investigates which functionality a real-time database should have in order to be suitable for embedded software that controls an external environment. We use engine control software as a case study of an embedded system. The findings are that a real-time database should have support for keeping data items up-to-date, providing snapshots of values, i.e., values derived from the same system state, and overload handling. Algorithms are developed for each of these functionalities and implemented in a real-time database for embedded systems. Performance evaluations are conducted using the database implementation. The evaluations show that real-time performance is improved by utilizing the added functionality. Moreover, two algorithms for examining whether the system may become overloaded are also outlined: one for off-line use and one for on-line use. Evaluations show that the algorithms are accurate and fast and can be used in embedded systems.
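Two of the functionalities named above, keeping data items up-to-date and serving snapshots derived from one system state, can be sketched with a freshness-based read. The validity-interval mechanism and all names below are illustrative assumptions, not the thesis's actual algorithms.

```python
# Minimal sketch of freshness-driven data management: a data item is
# recomputed only when its value has outlived its validity interval,
# and a snapshot refreshes all items against one common timestamp so
# the returned values reflect a single system state.
import time

class DataItem:
    def __init__(self, compute, validity):
        self.compute = compute        # function deriving the value
        self.validity = validity      # seconds the value stays fresh
        self.value, self.updated_at = None, float("-inf")

    def read(self, now):
        if now - self.updated_at > self.validity:  # stale -> recompute
            self.value, self.updated_at = self.compute(), now
        return self.value

def snapshot(items, now):
    # Every read uses the same `now`, so all returned values are
    # fresh relative to one point in time.
    return {name: item.read(now) for name, item in items.items()}

db = {"rpm": DataItem(lambda: 3000, validity=0.1),
      "temp": DataItem(lambda: 90, validity=1.0)}
print(snapshot(db, time.monotonic()))  # {'rpm': 3000, 'temp': 90}
```

Recomputing on demand rather than on every update is also what creates the slack that overload-handling algorithms, like those the abstract mentions, can exploit under transient overloads.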
Audience Level
Audience level: 0.81 (from 0.22 for Tools and ... to 0.96 for Performanc ...)

Alternative Names

Linköping Institute of Technology. Department of Computer and Information Science

Linköping University. Department of computer and Information Science