WorldCat Identities

Granier, Xavier (1975-....).

Overview
Works: 20 works in 22 publications in 2 languages and 23 library holdings
Roles: Thesis advisor, Opponent, Author, Other
Publication Timeline
Most widely held works by Xavier Granier
Contrôle automatique de qualité pour l'éclairage global by Xavier Granier( Book )

3 editions published between 2001 and 2010 in French and held by 3 WorldCat member libraries worldwide

In this document, we present a new approach that, by integrating a hierarchical radiosity method with clustering together with a particle-tracing method, efficiently simulates the complete set of light paths.
Représentations efficaces pour les réflectances mesurées : modélisation et édition by Alban Fichet( )

1 edition published in 2019 in English and held by 2 WorldCat member libraries worldwide

Computer graphics has proven its usefulness in many domains, from industrial prototyping to entertainment (film, video games) and the archiving of historical heritage. It enables the creation of images from virtual scenes. One of its goals is the generation of photo-realistic renderings, which depends to a large extent on the fidelity of material models; the study of materials and reflectance is therefore essential.

Measured materials have gained importance in computer graphics. They are effective for representing homogeneous materials. However, for materials with spatial structural variation, their digitization, use for rendering, storage, and editing raise many difficulties. This thesis presents methods for processing and editing such materials.

We propose a technique for the fast acquisition of spatially varying anisotropic materials from a small number of illuminations and a fixed viewpoint. This method is suited to measurements that do not require critical fidelity. Whereas other techniques require a costly measurement setup and long acquisition times, ours provides a starting point for editing, from parameters derived from real materials.

Next, we propose a procedure for approximating measured data with an analytic function. It significantly reduces the memory footprint required to store the measured data while allowing easy editing of the represented material. It is also possible to modify the spatial distribution of the materials and thus replace the materials of one surface with those of another.

Finally, we propose a method for taking part of the reflectance properties of one material and merging them with those of a second. This method also allows the two characteristics to be edited independently of each other in an intuitive way
Real-time 2D manipulation of plausible 3D appearance using shading and geometry buffers by Carlos Jorge Zubiaga Pena( )

1 edition published in 2016 in English and held by 1 WorldCat member library worldwide

Traditional artists paint directly on a canvas and create plausible appearances of scenes that resemble the real world. In contrast, computer graphics artists define objects in a virtual scene (3D meshes, materials, and light sources) and use complex algorithms (rendering) to reproduce their appearance. On the one hand, painting techniques allow appearance to be defined freely. On the other hand, rendering techniques allow the different elements that define appearance to be modified separately and dynamically. In this thesis, we present an intermediate approach to appearance manipulation that permits certain 3D manipulations while working in 2D space. We first study the impact of materials on shading, treating materials as band-pass filters of the incident lighting. We then present a small set of local statistical relationships between materials/lighting and shading. These relationships are used to mimic modifications of the material or lighting of an artist-created image of a sphere. Techniques known as LitSpheres/MatCaps use this kind of image to transfer its appearance to objects of arbitrary shape. Our technique demonstrates the possibility of mimicking 3D modifications of light and material from a 2D image. We present a different technique for modifying the third element involved in an object's visual appearance: its geometry. In this case, we use renderings as input images, together with auxiliary images containing 3D information about the scene. We recover a geometry-independent shading for each surface, which requires assuming that there are no spatial variations of lighting across each surface. The recovered shading can then be used to arbitrarily modify the local shape of the object interactively, without re-rendering the scene
Image structures : from augmented reality to image stylization by Jiazhou Chen( )

1 edition published in 2012 in English and held by 1 WorldCat member library worldwide

In this thesis, we are interested in image structures in general, and in gradients and contours in particular. These have proven very important in recent years for many computer graphics applications, such as augmented reality and image and video stylization. The goal of any analysis of image structures is to describe, at a high level, the understanding one can have of the image content, and to provide the foundations needed to improve the quality of the applications mentioned above, notably legibility, precision, and spatial and temporal coherence.

We first demonstrate the important role these structures play in "Focus+Context" compositing applications. Such an approach is used in augmented reality to visualize parts of a scene that are normally behind what can be observed in a video stream. Segmentation and feature lines are used to highlight and/or reveal the ordering relations between the different objects of the scene. For importance-driven image synthesis, multiple rendering styles are combined coherently using a gradient map and a saliency map.

We then introduce a new technique for continuously reconstructing a gradient field without over-smoothing the original details contained in the image. To this end, we develop a new local, higher-degree approximation method for discrete, unoriented gradient fields, based on the moving least squares (MLS) formalism. We show that our isotropic, linear approximation outperforms the classical structure tensor: details are better preserved and instabilities are significantly reduced. We also show that our new gradient field improves many stylization techniques.

Finally, we show that analyzing feature profiles by polynomial approximation makes it possible to distinguish smooth variations from hard ones. The profile parameters are then used as stylization parameters, such as brush stroke orientation, size, and opacity, enabling the creation of a wide variety of line styles
Tools for the paraxial optical design of light field imaging systems by Lois Mignard-Debise( )

1 edition published in 2018 in English and held by 1 WorldCat member library worldwide

Light field imaging is often presented as a revolution in standard imaging. Indeed, it gives the user more control over the final image, as the spatio-angular dimensions of the light field make it possible to change the viewpoint and the focus after the shot, and to compute the scene depth map. However, it complicates the work of the optical designer for two reasons. The first is that there exists a multitude of different light field acquisition devices, each with its own specific design. The second is that there is no model relating the camera design to its optical acquisition properties that would guide the designer in this task. This thesis addresses these observations by proposing a first-order optical model to represent any light field acquisition device. This model abstracts a light field camera as an equivalent array of virtual cameras that exists in object space and performs the same sampling of the scene. The model is used to study and compare several light field cameras, as well as a light field microscope setup, which yields guidelines for the design of light field optical systems. The predictions of the model are also validated through experiments with a light field camera and a light field microscope built in our laboratory
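
As background, a minimal sketch of the kind of first-order relation such a paraxial model rests on: each lens is described by the Gaussian thin-lens equation, which is what allows the main lens and each microlens to be imaged into object space as a virtual camera,

\[
\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}, \qquad m = -\frac{d_i}{d_o},
\]

where \(d_o\) and \(d_i\) are the object and image distances, \(f\) the focal length, and \(m\) the transverse magnification (sign conventions vary; this is the standard textbook form, not a formula specific to the thesis).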
Optical and software tools for the design of a new transparent 3D display. by Thomas Crespel( )

1 edition published in 2019 in English and held by 1 WorldCat member library worldwide

We live in exciting times in which new types of displays become possible, and current challenges focus on enhancing the user experience. We are witnessing the emergence of curved, volumetric, head-mounted, autostereoscopic, and transparent displays, among others, with ever more complex sensors and algorithms enabling sophisticated interactions.

This thesis aims to contribute to the creation of such novel displays. In three concrete projects, we combine optical and software tools to address specific applications, with the ultimate goal of designing a three-dimensional display. Each of these projects led to the development of a working prototype based on picoprojectors, cameras, optical elements, and custom software.

In the first project, we investigated spherical displays: they are better suited to visualizing spherical data than regular flat 2D displays, but existing solutions are costly and difficult to build because they require tailored optics. We propose a low-cost multitouch spherical display that uses only off-the-shelf, low-cost, and 3D-printed elements, making it more accessible and reproducible. Our solution uses a focus-free projector and an optical system to cover a sphere from the inside, infrared finger tracking for multitouch interaction, and custom software to link the two. We compensate for the low-cost hardware with software calibrations and corrections.

We then extensively studied wedge-shaped light guides, in which we see great potential and which became the central component of the rest of our work. Such light guides were initially devised for flat, compact projection-based displays, but in this project we exploit them in an acquisition context. We seek to image constrained locations that are not easily accessible with regular cameras due to the lack of space in front of the object of interest. Our idea is to fold the imaging distance into a wedge guide using prismatic elements. With our prototype, we validated various applications in the archaeological field.

The skills and expertise acquired during both projects allowed us to design a new transparent autostereoscopic display. Our solution overcomes some limitations of augmented reality displays by allowing a user to see both a direct view of the real world and a stereoscopic, view-dependent augmentation without any wearable or tracking device. The principal idea is to use a wedge light guide, a holographic optical element, and several projectors, each generating a different viewpoint. Our current prototype has five viewpoints, and more can be added. This new display has a wide range of potential applications in the augmented reality field
Techniques d'interaction, affichage personnalisé et reconstruction de surfaces pour la réalité augmentée spatiale by Brett Ridel( )

1 edition published in 2016 in French and held by 1 WorldCat member library worldwide

This thesis extends the field of spatial augmented reality (SAR). Spatial augmented reality improves or modifies the perception of reality with virtual information displayed directly in the real world by video projection. Many fields such as tourism, entertainment, education, medicine, industry, and cultural heritage can benefit from it. Recent computing techniques make it possible to measure, analyse, and visualise the surface geometry of real objects, for instance archaeological artefacts. We propose a SAR interaction and visualisation technique that combines the advantages of studying both real and 3D archaeological artefacts. We superimpose on the object an expressive, curvature-based rendering using SAR, making it possible, for example, to reveal the details of engravings. We then simulate the use of a flashlight with the help of a 6-degree-of-freedom controller: the user can specify the area of the object to be augmented and adjust the various parameters of the expressive rendering. One of the main characteristics of SAR is that multiple users can simultaneously take part in the same experience; depending on the target application, however, this can be a drawback. We propose a new display device that allows the creation of SAR experiences that are both multi-user and personalised, by taking the user's point of view into account. To do so, the projection surface, set in front of the object to augment, is made from a material that is both retro-reflective and semi-transparent. We suggest two different uses of this new device, as well as two application scenarios. Most recent tracking solutions, even with prior knowledge of the object's geometry, fail when applied to the augmentation of deformable objects. To address this issue, we propose a reconstruction technique for developable surfaces, such as cloth or fabric, using parabolic cylinders in an MLS framework. We show a solution addressing approximation issues in areas where the problem becomes ambiguous
Acquisition opto-numérique de vêtements asiatiques anciens by Antoine Lucat( )

1 edition published in 2020 in French and held by 1 WorldCat member library worldwide

Digitization is a major challenge in the heritage field: it enables both the long-term preservation of collection pieces and their presentation in a new light. In collaboration with the Musée d'Ethnographie de Bordeaux (MEB), this thesis aims to offer an innovative digitization solution by answering the following question: how can the appearance of a collection piece be reproduced so faithfully that the digital version is, to the eye, indistinguishable from the real object? The size of the objects to digitize, and the resolution needed to reach the desired quality, imply an enormous number of measurements, which poses a real technical and scientific challenge. This thesis responds with an innovative image-based acquisition prototype: a dome covered with 1080 LEDs, inside which a camera moves on a robotic arm. The work is first supported by a set of preliminary studies addressing the theoretical and practical issues involved in such a measurement. In particular, these highlight how important a role diffraction plays in BRDF measurement, well beyond the usual criteria. Accordingly, a new, metrologically sound data-processing algorithm is proposed. Building on these results, the digitization prototype was designed, built, calibrated, and finally used successfully for heritage preservation. This new instrument, which is constantly evolving, lays the foundation for many future research directions, concerning both the optimization of the measurement process and the exploitation of the generated data
Imagerie plénoptique : de la lumière visible aux rayons X by Charlotte Herzog( )

1 edition published in 2020 in English and held by 1 WorldCat member library worldwide

Plenoptic imaging is a technique that acquires spatial and angular information about the light rays coming from a scene. After a single acquisition, numerical data processing allows image manipulations such as synthetic aperture, viewpoint changes, refocusing at different depths, and consequently 3D reconstruction of the scene. Plenoptic imaging in the visible range has been widely studied; transposing it from visible light to X-rays, however, has never been done and remains challenging. X-ray plenoptic imaging would benefit the X-ray imaging landscape: a single acquisition should suffice to reconstruct a volume, compared with thousands of acquisitions for X-ray tomography, today's reference in 3D X-ray imaging.

In this thesis, we consider a plenoptic camera composed of a main lens, a microlens array, and a detector. So far, two configurations have been developed: the traditional and the focused plenoptic setups. Although these configurations are usually studied separately, they differ only in the distances between the optical elements. Both were studied in detail to choose the most suitable for X-ray imaging, given the constraints of X-ray optics, and we observed full continuity between the two systems. We therefore extended previous work to more general formulas for the optical configuration and the theoretical resolutions. The theory of resolution along the depth axis was refined, as depth reconstruction and extraction are the main interest of X-ray plenoptic imaging. A specific study was devoted to the evolution of contrast along depth, a key parameter for depth reconstruction: contrast decreases when moving away from a privileged depth, which is important to consider as it can affect image reconstruction and the quality of depth extraction.

We also worked on refocusing algorithms. These are usually developed for each configuration separately; we went beyond this separation and developed a new algorithm valid for any configuration. Moreover, our algorithm is based on the real distances between the optical elements, allowing images to be generated at any distance from the plenoptic camera. We defined a new parameterization between object and image spaces and, using geometrical optics, computed the matrix transformation between the two. This allows data from the acquired raw image to be back-projected into object space, reconstructing the object pixel by pixel. With this algorithm, we were able to simulate the image acquisition process and create synthetic plenoptic data, whose reconstruction was used to quantify the accuracy of the new algorithm and prove its consistency.

The refocusing algorithm reconstructs depth planes one by one. Each refocused plane contains information about the whole 3D scene that has to be disentangled: elements physically present at the refocused depth are intrinsically sharp, whereas those located at other depths are blurred. We used this contrast property to extract depth from the refocused images, testing several existing depth-from-focus methods and studying their efficiency when applied to our images.

In collaboration with European teams, we realized the first X-ray plenoptic camera, which was tested at the P05 beamline of the PETRA III synchrotron. Based on the theoretical work developed in this thesis, we defined the best optical configuration, mounted the plenoptic camera, acquired X-ray plenoptic images, numerically refocused them using the new algorithm, and verified the experimental resolutions and contrasts. Depth-from-focus techniques applied to the refocused stack retrieve the expected depth plane. These are the first images acquired with an X-ray plenoptic camera
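
To make the refocusing idea concrete, here is a minimal sketch of the classic shift-and-add scheme over sub-aperture views; this is the textbook baseline, not the thesis's back-projection algorithm, and the function names, units, and wrap-around shifting are illustrative assumptions.

```python
import numpy as np

def refocus_shift_and_add(views, baselines, alpha):
    """Average sub-aperture views after disparity-proportional shifts.

    views:     list of 2D numpy arrays, one image per viewpoint
    baselines: list of (dx, dy) viewpoint offsets, in pixels of disparity
               per unit of the refocus parameter (illustrative units)
    alpha:     refocus parameter selecting the depth plane
    """
    out = np.zeros(views[0].shape, dtype=np.float64)
    for img, (dx, dy) in zip(views, baselines):
        # np.roll wraps around at image borders; acceptable for a sketch
        out += np.roll(np.roll(img, int(round(alpha * dy)), axis=0),
                       int(round(alpha * dx)), axis=1)
    return out / len(views)
```

Objects at the depth selected by `alpha` line up across views and stay sharp after averaging, while objects at other depths are averaged over different positions and blur out, which is exactly the contrast cue exploited for depth extraction above.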
High quality adaptive rendering of complex photometry virtual environments by Arthur Dufay( )

1 edition published in 2017 in English and held by 1 WorldCat member library worldwide

Image synthesis for movie production has never stopped evolving over the last decades, and may seem to have reached a level of realism that cannot be surpassed. However, the software tools available to visual effects (VFX) artists still need to progress: too much time is still wasted waiting for the results of long computations, especially when previewing VFX. The slowness and poor quality of previsualization software pose a real problem for artists. The evolution of graphics processing units (GPUs) in recent years suggests a potential improvement of these tools, in particular by implementing hybrid rasterization/ray-tracing algorithms that take advantage of the computing power of these processors and their massively parallel architecture. This thesis explores the different software building blocks needed to set up a complex rendering pipeline on the GPU that enables better previsualization of VFX. Several contributions were made during this thesis. First, a hybrid rendering pipeline was developed (cf. Chapter 2). Subsequently, various implementation schemes of the path tracing algorithm were tested (cf. Chapter 3) to increase the performance of the rendering pipeline on the GPU. A spatial acceleration structure was implemented (cf. Chapter 4), and an improvement of the GPU traversal algorithm for this structure was proposed (cf. Section 4.3.2). Then, a new sample decorrelation method in the context of random number generation was proposed (cf. Section 5.4) and resulted in a publication [Dufay et al., 2016]. Finally, we combined the path tracing algorithm with the many-lights method, again with the aim of improving the preview of global illumination. This thesis also led to the submission of three patents and to the development of two software tools presented in Appendix A
Modes de représentation pour l'éclairage en synthèse d'images by Romain Pacanowski( )

1 edition published in 2009 in French and held by 1 WorldCat member library worldwide

In image synthesis, the main computation involved in generating an image is characterized by the rendering equation [Kajiya1986]. This equation expresses the conservation of energy: the light emanating from scene objects is the sum of the emitted energy and the reflected energy, and the reflected energy at a surface point is the convolution of the incoming lighting with a reflectance function. The reflectance function models the object's material and acts, in the rendering equation, as a directional and energetic filter describing how the surface reflects light. In this thesis, we introduce new representations for the reflectance function and the incoming lighting. In the first part of this thesis, we propose two new models for the reflectance function. The first model is targeted at artists, to help them create and edit highlights: the user paints and sketches highlight characteristics (shape, color, gradient, and texture) in a plane parametrized by the incident lighting direction. The second model is designed to represent isotropic material data efficiently; to achieve this, we introduce a new representation of the reflectance function based on rational polynomials, whose coefficients are computed by a fitting process that guarantees an optimal solution with respect to convergence. In the second part of this thesis, we introduce a new volumetric structure for indirect illumination whose directional content is represented with irradiance vectors. We show that our representation is compact and robust to geometric variations, that it can be used as a caching system for interactive and offline rendering, and that it can also be transmitted with streaming techniques. Finally, we introduce two modifications of the incoming lighting to improve the depiction of surface shape: the first warps the incoming light directions, whereas the second scales the intensity of each light source
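
For reference, the rendering equation the abstract refers to is commonly written as

\[
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
+ \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, d\omega_i,
\]

where \(L_e\) is the emitted radiance, \(f_r\) the reflectance function (BRDF), \(L_i\) the incoming radiance, and \(\mathbf{n}\) the surface normal; the integral is the "convolution of the incoming lighting with a reflectance function" described above.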
Communication expressive de la forme au travers de l'éclairement et du rendu au trait by Romain Vergne( )

1 edition published in 2010 in French and held by 1 WorldCat member library worldwide

Expressive rendering aims at designing algorithms that give users the ability to create artistic images. It makes it possible to reproduce traditional styles, but also to convey a specific message with a corresponding style. In this thesis, we propose new solutions for enhancing shape, which is often hidden in realistic images. We first show how to extract relevant surface features from dynamic 3D scenes, taking the human visual system into account, in order to control the level of detail. In a second step, we integrate this information into a variety of styles: minimalist black-and-white, realistic, or line-based renderings
Echantillonage d'importance des sources de lumières réalistes by Heqi Lu( )

1 edition published in 2014 in English and held by 1 WorldCat member library worldwide

Realistic images can be rendered by simulating light transport with Monte Carlo techniques. The possibility of using realistic light sources for synthesizing images greatly contributes to their physical realism. Among existing models, those based on environment maps and light fields are attractive for their ability to capture far-field and near-field effects faithfully, and because they can be acquired directly. Since acquired light sources have arbitrary frequencies and possibly high dimension (4D), using such light sources for realistic rendering leads to performance problems.

In this thesis, we focus on how to balance the accuracy of the representation and the efficiency of the simulation. Our work relies on generating high-quality samples from the input light sources for unbiased Monte Carlo estimation. We introduce three novel methods.

The first generates high-quality samples efficiently from dynamic environment maps that change over time. We achieve this with a GPU approach that generates light samples according to an approximation of the form factor and combines them with samples from BRDF sampling for each pixel of a frame. Our method is accurate and efficient: with only 256 samples per pixel, we achieve high-quality results in real time at 1024 × 768 resolution.

The second is an adaptive sampling strategy for light field light sources (4D): we generate high-quality samples efficiently by conservatively restricting the sampling area without reducing accuracy. With a GPU implementation and without any visibility computation, we achieve high-quality results with 200 samples per pixel in real time at 1024 × 768 resolution; performance remains interactive as long as visibility is computed using our shadow-map technique. We also provide a fully unbiased approach by replacing the visibility test with an offline CPU computation.

Since light-based importance sampling is not very effective when the underlying material is specular, we introduce a new balancing technique for multiple importance sampling, which allows us to combine other sampling techniques with our light-based importance sampling. By minimizing the variance based on a second-order approximation, we find a good balance between the different sampling techniques without any prior knowledge. Our method is effective, since it reduces the variance on average for all of our test scenes across different light sources, visibility complexities, and materials. It is also efficient, since the overhead of our "black-box" approach is constant and represents 1% of the whole rendering process
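
To illustrate the kind of combination the last paragraph describes, here is a minimal, self-contained sketch of multiple importance sampling on a 1D toy integrand. It uses the standard balance heuristic rather than the thesis's second-order balancing, and the integrand, densities, and constants are all illustrative assumptions.

```python
import random

def f(x):                       # toy integrand: broad base plus a sharp peak
    return 1.0 + (100.0 if 0.79 < x < 0.81 else 0.0)

def sample_light():             # broad strategy: uniform on [0, 1]
    return random.random()
def pdf_light(x):
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def sample_brdf():              # peaked strategy: uniform on [0.7, 0.9]
    return 0.7 + 0.2 * random.random()
def pdf_brdf(x):
    return 5.0 if 0.7 <= x <= 0.9 else 0.0

def mis_estimate(n=100000):
    total = 0.0
    for _ in range(n):
        # one sample from each strategy, weighted by the balance heuristic
        x = sample_light()
        w = pdf_light(x) / (pdf_light(x) + pdf_brdf(x))
        total += w * f(x) / pdf_light(x)
        x = sample_brdf()
        w = pdf_brdf(x) / (pdf_light(x) + pdf_brdf(x))
        total += w * f(x) / pdf_brdf(x)
    return total / n

print(mis_estimate())           # converges to 3.0 = 1 + 100 * 0.02
```

The balance weights of the two strategies sum to one at every point, so the combined estimator stays unbiased while each strategy handles the region where its density matches the integrand best.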
Level-Of-Details Rendering with Hardware Tessellation by Thibaud Lambert( )

1 edition published in 2017 in English and held by 1 WorldCat member library worldwide

Over the last two decades, real-time applications have exhibited colossal improvements in the generation of photo-realistic images, mainly owing to the availability of 3D models with an ever-increasing amount of detail. The traditional approach to representing and visualizing highly detailed 3D objects is to decompose them into a low-frequency mesh and a displacement map encoding the details; hardware tessellation is the ideal support for rendering this representation efficiently. In this context, we propose a general framework for generating and rendering multi-resolution feature-aware meshes compatible with hardware tessellation. First, we introduce a view-dependent metric capturing both geometric and parametric distortions, allowing the appropriate resolution to be selected at render time. Second, we present a novel hierarchical representation enabling, on the one hand, smooth temporal and spatial transitions between levels and, on the other hand, non-uniform hardware tessellation. Last, we devise a simplification process that generates our hierarchical representation while minimizing our error metric. Our framework brings large improvements in both triangle count and rendering time compared with alternative methods
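
As a rough illustration of view-dependent level selection, here is a generic screen-space error test; it is not the thesis's metric (which also accounts for parametric distortion), and the per-level errors, camera parameters, and pixel budget are illustrative assumptions.

```python
import math

def select_level(world_error_per_level, distance, fov_y, screen_height_px,
                 budget_px=1.0):
    """Pick the coarsest level whose projected error fits the pixel budget.

    world_error_per_level: geometric error per level, coarse to fine,
    in world units (errors shrink as the level gets finer).
    """
    # pixels per world unit at this distance, symmetric perspective projection
    px_per_unit = screen_height_px / (2.0 * distance * math.tan(fov_y / 2.0))
    for level, err in enumerate(world_error_per_level):
        if err * px_per_unit <= budget_px:
            return level
    return len(world_error_per_level) - 1   # fall back to the finest level

# e.g. errors halving each level: a distant object gets a coarser level
print(select_level([0.4, 0.2, 0.1, 0.05], distance=100.0,
                   fov_y=math.radians(60), screen_height_px=1080))  # -> 2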
Interactive generation and rendering of massive models : a parallel procedural approach by Cyprien Buron( )

1 edition published in 2014 in English and held by 1 WorldCat member library worldwide

With the increasing computing and storage capabilities of recent hardware, the movie and video game industries demand ever larger realistic environments. Modeling such sceneries by hand, however, is highly time-consuming and costly. Procedural modeling, on the other hand, provides methods to easily generate a wide variety of elements such as vegetation and architecture. While grammar rules are a powerful high-level modeling tool, using them is often tedious and requires a frustrating trial-and-error process. Moreover, as no existing solution offers real-time generation and rendering of massive environments, artists have to work on separate parts before integrating the whole and seeing the result.

In this research, we aim to provide interactive generation and rendering of very large sceneries, while offering artist-friendly methods for controlling grammar behavior. We first introduce a GPU-based pipeline providing parallel procedural generation at render time. To this end, we propose a segment-based expansion method that works on independent elements, allowing parallel amplification. We then extend this pipeline to permit the construction of models relying on internal contexts, such as roofs, and we present external contexts to control grammars with surface and texture data. Finally, we integrate a level-of-detail system and optimization techniques into our pipeline, providing interactive generation, editing, and visualization of massive environments. We demonstrate the efficiency of our pipeline on a scene comprising a hundred thousand trees and a hundred thousand buildings, representing 2 terabytes of data
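
For readers unfamiliar with grammar-based modeling, here is a toy L-system expansion; the rules and axiom are hypothetical, and this is illustrative only, not the thesis's segment-based GPU scheme. Each symbol rewrites independently of the others, which is precisely the property that makes expansion amenable to parallel amplification.

```python
# Hypothetical branching rule: every 'F' (a segment) spawns two side branches.
rules = {"F": "F[+F]F[-F]F"}

def expand(axiom, depth):
    """Rewrite the string 'depth' times; symbols without a rule pass through."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(c, c) for c in s)
    return s

# Each 'F' expands without consulting its neighbors, so every element of one
# generation could be rewritten by an independent GPU thread.
print(expand("F", 2))
```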
Etude en vue de la multirésolution de l'apparence by Julien Hadim( )

1 edition published in 2009 in French and held by 1 WorldCat member library worldwide

Bidirectional Texture Functions (BTFs) have met with some success in recent years, notably for real-time rendering of synthetic images, thanks to both the realism they bring and their low computational cost. However, a drawback of this approach remains the enormous size of the data, and many compression methods have been proposed. In this document, we propose a new BTF representation that improves data coherence and thus allows more effective compression. First, we review BTF acquisition and generation methods, and in particular the compression methods suited to graphics hardware. Using our software "BTFInspect", we then conduct a study to determine which of the visual phenomena present in BTFs contribute most to per-texel data coherence. Second, we propose a new representation for BTFs, called Flat Bidirectional Texture Functions (Flat-BTFs), which improves the coherence of BTF data and therefore its compression. In the analysis of the results obtained, we show statistically and visually the coherence gain achieved, as well as the absence of significant quality loss compared with the original representation. Third and finally, we demonstrate the use of our new representation in real-time rendering applications on graphics hardware. We then propose an appearance compression method based on GPU quantization, presented in the context of streaming 3D data between a server hosting 3D models and a client wishing to visualize them
Legible Visualization of Semi-Transparent Objects using Light Transport by David Murray( )

1 edition published in 2018 in English and held by 1 WorldCat member library worldwide

Exploring and understanding volumetric or surface data is one of the challenges of computer graphics. The appearance of such data can be modeled and visualized using light transport theory, and transparent materials are widely used to make these visualizations understandable. While solutions exist to simulate light propagation correctly and display semi-transparent objects, offering an understandable visualization remains an open research topic. The goal of this thesis is twofold. First, an in-depth analysis of the optical model for light transport and its implications for computer-generated images is performed. Second, this knowledge is used to tackle the problem of providing efficient and reliable solutions for visualizing transparent and semi-transparent media. In this manuscript, we first introduce the general optical model for light transport in participating media, its simplification to surfaces, and how it is used in computer graphics to generate images. Second, we present a solution to improve shape depiction in the special case of surfaces; the proposed technique uses light transport as a basis to change the lighting process and modify material appearance and opacity. Third, we focus on the problem of using full volumetric data instead of the simplified case of surfaces: in this case, changing only the material properties has a limited impact, so we study how light transport can be used to provide useful information for participating media. Last, we present our light transport model for participating media, which aims at exploring the parts of interest of a volume
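
As background, the central quantity in the participating-media model the abstract mentions is the transmittance given by the Beer-Lambert law,

\[
T(a, b) = \exp\!\left(-\int_a^b \sigma_t(\mathbf{x}(s))\, ds\right),
\]

where \(\sigma_t\) is the extinction coefficient along the ray \(\mathbf{x}(s)\). It governs how much radiance survives between two points, and hence how semi-transparent a medium appears; this is the standard radiative-transfer formulation, not a result specific to the thesis.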
Acquisition légère de matériaux par apprentissage profond by Valentin Deschaintre( )

1 edition published in 2019 in English and held by 1 WorldCat member library worldwide

Whether used for entertainment or industrial design, computer graphics is ever more present in our everyday life. Yet reproducing a real scene's appearance in a virtual environment remains a challenging task, requiring long hours from trained artists. A good solution is the acquisition of geometry and materials directly from real-world examples, but this often comes at the cost of complex hardware and calibration processes. In this thesis, we focus on lightweight material appearance capture to simplify and accelerate the acquisition process and solve industrial challenges such as result image resolution and calibration. Texture, highlights, and shading are some of the many visual cues that allow humans to perceive material appearance in pictures. Designing algorithms able to leverage these cues to recover spatially varying bidirectional reflectance distribution functions (SVBRDFs) from a few images has challenged computer graphics researchers for decades. We explore the use of deep learning to tackle lightweight appearance capture and make sense of these visual cues. Once trained, our networks are capable of recovering per-pixel normals, diffuse albedo, specular albedo, and specular roughness from as little as one picture of a flat surface lit by the environment or a hand-held flash. We show how our method improves its prediction with the number of input pictures, reaching high-quality reconstructions with up to 10 images -- a sweet spot between existing single-image and complex multi-image approaches -- and allowing the capture of large-scale, HD materials. We achieve this goal by introducing several innovations in training data acquisition and network design, bringing clear improvements over the state of the art in lightweight material capture
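
As a rough sketch of the single-image setting, here is a deliberately tiny fully convolutional network that maps one photo to per-pixel SVBRDF maps. The architecture, channel counts, and map layout are illustrative assumptions, not the thesis's network.

```python
import torch
import torch.nn as nn

class SVBRDFNet(nn.Module):
    """Toy per-pixel SVBRDF predictor: one RGB image in, four maps out."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            # 3 normal + 3 diffuse + 3 specular + 1 roughness channels
            nn.Conv2d(64, 10, 3, padding=1),
        )

    def forward(self, img):
        maps = self.body(img)
        normals = nn.functional.normalize(maps[:, 0:3], dim=1)  # unit vectors
        diffuse, specular = maps[:, 3:6], maps[:, 6:9]
        roughness = maps[:, 9:10]
        return normals, diffuse, specular, roughness

net = SVBRDFNet()
photo = torch.rand(1, 3, 256, 256)        # stand-in for one flash-lit picture
n, d, s, r = net(photo)                    # four per-pixel maps
```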
Contrôle de l'apparence des matériaux anisotropes by Boris Raymond( )

1 edition published in 2016 in French and held by 1 WorldCat member library worldwide

In computer graphics, material appearance is a fundamental component of final image quality, and many models have contributed to improving it. Today, some materials remain hard to represent because of their complexity; among them, anisotropic materials are especially complex and little studied. In this thesis, we contribute to a better understanding of anisotropic materials by providing a representation model and an editing tool to control their appearance. Our scratched-material model is based on a light transport simulation within the micro-geometry of a scratch; it preserves all the details while keeping rendering time interactive. Our anisotropic reflection editing tool uses BRDF orientation fields to give the user the impression of drawing or deforming reflections directly on the surface
Multiscale methods in signal processing for adaptive optics by Suman Kumar Maji( )

1 edition published in 2013 in English and held by 1 WorldCat member library worldwide

In this thesis, we introduce a new approach to wavefront phase reconstruction in adaptive optics (AO) from the low-resolution gradient measurements provided by a wavefront sensor, using a non-linear approach derived from the Microcanonical Multiscale Formalism (MMF). MMF builds on established concepts in statistical physics and is naturally suited to the study of the multiscale properties of complex natural signals, mainly through precise numerical estimates of geometrically localized critical exponents, called singularity exponents. These exponents quantify the local degree of predictability at each point of the signal domain and provide information on the dynamics of the associated system. We show that multiresolution analysis carried out on the singularity exponents of a high-resolution turbulent phase (obtained from a model or from data) allows the low-resolution gradients obtained from the wavefront sensor to be propagated along scales up to a higher resolution. We compare our results with those obtained by linear approaches, offering an innovative alternative for wavefront phase reconstruction in adaptive optics
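
For contrast, here is a toy 1D version of the classic linear reconstruction the abstract compares against: recovering a phase profile from noisy finite-difference gradients by least squares. The signal, grid size, and noise level are arbitrary assumptions.

```python
import numpy as np

n = 64
phi_true = np.sin(np.linspace(0.0, 2.0 * np.pi, n))    # toy phase profile

# Forward-difference operator D, so that (D @ phi)[i] = phi[i+1] - phi[i]
D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]
g = D @ phi_true + 0.01 * np.random.randn(n - 1)        # noisy sensor gradients

# Linear least-squares reconstruction: solve D phi ~= g
phi, *_ = np.linalg.lstsq(D, g, rcond=None)

# The gradient determines phi only up to a constant offset (piston mode)
phi += phi_true.mean() - phi.mean()
print(np.max(np.abs(phi - phi_true)))                   # small residual error
```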
 
moreShow More Titles
fewerShow Fewer Titles
Audience Level
Audience level: 0.96 (from 0.93 for Contrôle ... to 0.96 for Représent ...)

Alternative Names
Xavier Granier investigador

Xavier Granier onderzoeker

Languages
English (13)

French (9)