**Fredrik Nysjö - CG research and graphics programming**

This page presents computer graphics and graphics programming work from recent personal hobby projects, as well as from my time as a PhD student and research engineer at Uppsala University.

# CG research

The computer graphics research that I did during my PhD at the Division of Visual Information and Interaction (Vi2) at the Department of Information Technology at Uppsala University mainly focused on developing efficient point-based (splatting) methods for real-time rendering of isosurfaces in volume data, CPU-based methods for collision detection and haptic rendering, and interactive tools for signed distance field (SDF) based modelling of 3D-printable surgical implants and guides in our in-house virtual surgery planning software HASP. Other research that I have been involved in can be found in the publication lists on my old (and no longer maintained) [UU website](https://user.it.uu.se/~freny907) and on my [Google Scholar profile](https://scholar.google.se/citations?user=SI0wHoEAAAAJ&hl=en).

## PhD thesis

Fredrik Nysjö, [*"Modeling and Visualization for Virtual Interaction with Medical Image Data"*](http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-403104), Uppsala University (2020)

## Clustered grid cells for isosurface rendering

An isosurface rendering method proposed in the paper *"Clustered Grid Cell Data Structure for Isosurface Rendering"*, published in the Journal of WSCG (2020), which I also presented at the WSCG 2020 conference (virtual session).

Abstract: "Active grid cells in scalar volume data are typically identified by many isosurface rendering methods when extracting another representation of the data for rendering. However, the use of grid cells themselves as rendering primitives is not extensively explored in the literature. In this paper, we propose a cluster-based data structure for storing the data of active grid cells for fast cell rasterisation via billboard splatting. Compared to previous cell rasterisation approaches, eight corner scalar values are stored with each active grid cell, so that the full volume data is not required during rendering. The grid cells can be quickly extracted and use about 37 percent of the memory compared to a typical efficient mesh-based representation, while supporting large grid sizes. We present further improvements such as a visibility buffer for cluster culling and EWA-based interpolation of attributes such as normals. We also show that our data structure can be used for hybrid ray tracing or path tracing to compute global illumination."

[[Source code](https://bitbucket.org/FredrikNysjo/grid_cells)] [[Paper (2020)](http://wscg.zcu.cz/wscg2020/journal/E07.pdf)] [[Video presentation from WSCG 2020](https://drive.google.com/file/d/10OL6Q6qku_tnhKftL4kBT8j0MVGV6IdT/view?usp=sharing) (demo starts at 13:30)]
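To make the cell data structure a bit more concrete, below is a minimal C++ sketch of what a clustered layout of active grid cells along these lines could look like. The struct and field names, the 8-bit quantisation of the corner values, and the coordinate packing are my own illustration here, and may well differ from the layout used in the paper and its source code.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical layout of one active grid cell: a packed integer grid
// coordinate plus the eight corner scalar values, quantised to 8 bits,
// so that the full volume data is not needed during rendering.
struct GridCell {
    uint32_t packed_coord;  // e.g. 10+10+10 bits for (x, y, z)
    uint8_t corners[8];     // scalar values at the cell's eight corners
};

// Cells are grouped into fixed-size clusters, so that whole clusters can
// be culled (e.g. against a visibility buffer) before the per-cell
// billboards are splatted.
struct CellCluster {
    float bounds_min[3];
    float bounds_max[3];
    uint32_t first_cell;  // range into ClusteredGrid::cells
    uint32_t cell_count;
};

struct ClusteredGrid {
    std::vector<GridCell> cells;
    std::vector<CellCluster> clusters;
};
```

With this kind of layout each cell occupies only a few bytes, which is the intuition behind the memory savings over an indexed triangle mesh mentioned in the abstract.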
## Ray-caching: amortized isosurface rendering for VR

An isosurface rendering method proposed in the paper *"RayCaching: Amortized Isosurface Rendering for Virtual Reality"*, published in Computer Graphics Forum (2019). I also presented this work at the joint Eurographics/EuroVis 2020 conference, during the virtual session "Ray Tracing and Global Illumination".

Abstract: "Real-time virtual reality requires efficient rendering methods to deal with high-resolution stereoscopic displays and low latency head-tracking. Our proposed RayCaching method renders isosurfaces of large volume datasets by amortizing raycasting over several frames and caching primary rays as small bricks that can be efficiently rasterized. An occupancy map in the form of a clipmap provides level of detail and ensures that only bricks corresponding to visible points on the isosurface are being cached and rendered. Hard shadows and ambient occlusion from secondary rays are also accumulated and stored in the cache. Our method supports real-time isosurface rendering with dynamic isovalue and allows stereoscopic visualization and exploration of large volume datasets at framerates suitable for virtual reality applications."

[[Source code](https://bitbucket.org/FredrikNysjo/raycaching)] [[Paper (2019)](https://onlinelibrary.wiley.com/doi/full/10.1111/cgf.13762)] [[Video presentation from Eurographics 2020](https://youtu.be/AoAbvd540I4?t=3320) (demo starts [here](https://youtu.be/AoAbvd540I4?t=4262))]

## Using anti-aliased signed distance fields for generating surgical guides and plates from CT images

This paper, *"Using anti-aliased signed distance fields for generating surgical guides and plates from CT images"*, was published in the Journal of WSCG (2017). I also presented this work at the WSCG 2017 conference in Pilsen, Czech Republic. The method and modeling tools were implemented in HASP, the C++ and OpenGL based software described in the section Software development below, which I also worked on developing.

Abstract: "We present a method for generating shell-like objects such as surgical guides and plates from segmented computed tomography (CT) images, using signed distance fields and constructive solid geometry (CSG). We develop a user-friendly modeling tool which allows a user to quickly design such models with the help of stereo graphics, six degrees-of-freedom input, and haptic feedback, in our existing software for virtual cranio-maxillofacial surgery planning, HASP. To improve the accuracy and precision of the modeling, we use an anti-aliased distance transform to compute signed distance field values from fuzzy coverage representations of the bone. The models can be generated within a few minutes, with only a few interaction steps, and are 3D printable. The tool has potential to be used by the surgeons themselves, as an alternative to traditional surgery planning services."

[[Paper (2017)](http://wscg.zcu.cz/WSCG2017/!_2017_Journal_WSCG-No-1.pdf) (pages 11-20)] [[Video (demo)](https://drive.google.com/file/d/1mrRfR_P3DMLA5u_wWXUEs-HD4GvRXCDB/view?usp=sharing)]
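As a rough illustration of the CSG part of this approach: with signed distances (negative inside a solid), the basic operators and the shell construction used for plate-like objects can be expressed in a few lines of C++. This is a generic sketch with my own function names, not code from HASP, and it leaves out the anti-aliased distance transform that the paper uses to compute the distance values in the first place.

```cpp
#include <algorithm>
#include <cmath>

// Standard CSG operators on signed distance values (negative = inside).
float sdf_union(float a, float b)     { return std::min(a, b); }
float sdf_intersect(float a, float b) { return std::max(a, b); }
float sdf_subtract(float a, float b)  { return std::max(a, -b); }

// A shell of the given thickness around the zero level set of d: points
// within 'thickness' of the original surface are inside the shell.
float sdf_shell(float d, float thickness) {
    return std::fabs(d) - thickness;
}

// A plate- or guide-like object could then be sketched as the part of a
// shell around the bone surface that lies inside a user-placed solid.
float sdf_guide(float d_bone, float d_user_region, float thickness) {
    return sdf_intersect(sdf_shell(d_bone, thickness), d_user_region);
}
```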
## Vectorised high-fidelity haptic rendering with dynamic pointshell

A six-degrees-of-freedom (6DOF) haptic rendering method proposed in the paper *"Vectorised High-Fidelity Haptic Rendering with Dynamic Pointshell"*, which we submitted to a conference in 2020. The paper was rejected, so this work has not yet appeared in a peer-reviewed publication. The main idea in the paper was to utilise SPMD vectorisation (via the [ISPC](https://ispc.github.io/) language and compiler), in combination with a dynamic and adaptive point cloud, to speed up the collision detection and collision response between voxmap-pointshell models on the CPU. An abstract from the version of the manuscript included in my PhD thesis can be found on the Bitbucket page linked below, together with the source code for a demo implementation based on the [CHAI3D](https://www.chai3d.org/) open-source haptics framework.

[[Source code](https://bitbucket.org/FredrikNysjo/vector_haptics)] [[Abstract](https://bitbucket.org/FredrikNysjo/vector_haptics)]
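For readers unfamiliar with voxmap-pointshell haptics, the sketch below shows in plain C++ roughly what one frame of collision response looks like, and why it vectorises so well: every shell point is tested independently against the distance volume. This is only my simplified illustration, with hypothetical names; it omits the object-to-world transforms, the dynamic/adaptive point cloud, and the force clamping needed for stable haptic rendering.

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator*(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// One haptic frame of voxmap-pointshell style collision response:
// sample a signed distance volume at each shell point and accumulate a
// penalty force and torque for the points that are in contact. Each
// iteration is independent, which makes the loop a good fit for SPMD
// vectorisation with ISPC.
void collide(const std::vector<Vec3>& points,   // shell points (world space)
             const std::vector<Vec3>& normals,  // inward normals at the points
             float (*sample_distance)(Vec3),    // trilinear voxmap lookup
             float stiffness, Vec3& force, Vec3& torque) {
    force = Vec3{};
    torque = Vec3{};
    for (size_t i = 0; i < points.size(); ++i) {
        float d = sample_distance(points[i]);
        if (d < 0.0f) {  // negative distance = penetration
            Vec3 f = (-stiffness * d) * normals[i];
            force = force + f;
            torque = torque + cross(points[i], f);
        }
    }
}
```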
# Personal projects

Source code for these personal projects can be found on my [Bitbucket page](https://bitbucket.org/FredrikNysjo/workspace/repositories) or via links in the project descriptions.

## Atomic rasterisation of SDF models

Work-in-progress demo (in C++ and OpenGL) of rendering signed distance field (SDF) models with atomic rasterisation, automatic level of detail (LOD), cluster-based visibility culling, and a large number of instances! Basically a continuation of some of the work from the WSCG paper "Clustered Grid Cell Data Structure for Isosurface Rendering" (see the separate description under the section CG research).

[[Source code](https://bitbucket.org/FredrikNysjo/sdf_cells)] [[Video (demo)](https://drive.google.com/file/d/1r0xEVwJXUZFL84Ug4F0uqj93D7EG3r9s/view?usp=sharing)]
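For context, the usual trick behind this kind of atomic (software) rasterisation is to pack depth and a per-fragment payload into a single 64-bit value, so that one atomic min both depth-tests and writes the fragment. The sketch below shows the idea on the CPU with std::atomic, although in practice it would live in a GLSL compute shader operating on a GPU buffer. This is a generic sketch of the technique, not code from the demo, and the exact packing used there may differ.

```cpp
#include <atomic>
#include <cstdint>
#include <vector>

// Depth goes in the high 32 bits and the fragment payload (for example a
// packed normal or an instance/cluster ID) in the low 32 bits. Because
// depth occupies the most significant bits, comparing the packed 64-bit
// values compares depth first, so the closest fragment's payload survives.
struct AtomicFramebuffer {
    std::vector<std::atomic<uint64_t>> pixels;
    int width = 0;

    AtomicFramebuffer(int w, int h) : pixels(size_t(w) * h), width(w) {
        for (auto& p : pixels) p.store(UINT64_MAX);  // "far plane"
    }

    void splat(int x, int y, float depth01, uint32_t payload) {
        uint64_t z = uint64_t(depth01 * double(0xFFFFFFFFu));
        uint64_t value = (z << 32) | payload;  // closer fragment = smaller value
        std::atomic<uint64_t>& pixel = pixels[size_t(y) * width + x];
        uint64_t prev = pixel.load(std::memory_order_relaxed);
        // Emulate an atomic min with a CAS loop (GLSL has atomicMin directly);
        // on CAS failure 'prev' is refreshed, and we retry while still closer.
        while (value < prev && !pixel.compare_exchange_weak(prev, value)) {}
    }
};
```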
## glTF viewer (OpenGL 4.6 version)

A glTF model viewer implemented in C++, based on a [code skeleton](https://github.com/cg-uu/gltf_viewer) I wrote in 2021 while teaching the course Computer Graphics 1TD388/1MD150 at Uppsala University. This version of the viewer supports more complex scenes (with node hierarchies), physically based rendering of glTF materials, image-based lighting from HDR environment maps, shadows and screen-space ambient occlusion, and a few post-processing effects. It uses OpenGL 4.6 for rendering, instead of the older (but more suitable for teaching) OpenGL 3.3 used in the course. Shaders are also distributed as binary SPIR-V modules instead of plain GLSL code (the GLSL source is available in a separate repository).

[[Source code (including more screenshots)](https://bitbucket.org/FredrikNysjo/gltf_viewer_gl4)]

## glTF viewer (Vulkan version)

A Vulkan port of my OpenGL-based glTF model viewer. It includes all features and render passes from the OpenGL version, but with some additional restrictions on supported vertex formats. The goal here is mainly to learn Vulkan (after many years of using higher-level APIs like OpenGL) and to experiment with sharing GLSL shader code between Vulkan and OpenGL code bases. Like the OpenGL-based viewer, shaders are distributed as SPIR-V compiled from GLSL code stored in a shared separate repository. For Vulkan abstractions, the code from my [Vulkan testbed](https://bitbucket.org/FredrikNysjo/vulkan_testbed) is used.

[[Source code (including more screenshots)](https://bitbucket.org/FredrikNysjo/gltf_viewer_vulkan)]

## Vulkan testbed

A simple Vulkan testbed (implemented in C++) used for testing things and for providing some abstractions over the API in personal projects. I wrote it partly to learn how the Vulkan API works, so the code is probably not suitable (or safe!) for anything other than hobby stuff...

[[Source code](https://bitbucket.org/FredrikNysjo/vulkan_testbed)]

## ISPC-based voxelization

A small command-line tool (implemented in C++ and ISPC) for converting OBJ meshes into VTK volumes. It outputs an 8-bit grayscale coverage representation of the voxelized mesh, by first performing solid voxelization at a higher resolution and then downsampling the result. ISPC is used for vectorisation to speed up the CPU-based voxelization. This tool was also used to generate some of the test volume datasets in the isosurface rendering papers listed under the section CG research.

[[Source code](https://bitbucket.org/FredrikNysjo/voxelize_ispc)]
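The downsampling step is straightforward to sketch: assuming solid voxelization has already produced a binary volume at k times the target resolution, each output voxel simply stores the fraction of its k×k×k subvoxels that were inside the mesh, quantised to 8 bits. A minimal scalar (non-ISPC) C++ version could look as follows; the actual tool's memory layout and function names may differ.

```cpp
#include <cstdint>
#include <vector>

// Downsample a binary volume of size (nx*k, ny*k, nz*k) to an 8-bit
// coverage volume of size (nx, ny, nz): each output voxel stores the
// fraction of its k*k*k binary subvoxels that were inside the mesh.
std::vector<uint8_t> downsample_coverage(const std::vector<uint8_t>& fine,
                                         int nx, int ny, int nz, int k) {
    std::vector<uint8_t> coarse(size_t(nx) * ny * nz);
    auto fine_at = [&](int x, int y, int z) {
        return fine[(size_t(z) * ny * k + y) * nx * k + x];
    };
    for (int z = 0; z < nz; ++z)
    for (int y = 0; y < ny; ++y)
    for (int x = 0; x < nx; ++x) {
        int inside = 0;
        for (int dz = 0; dz < k; ++dz)
        for (int dy = 0; dy < k; ++dy)
        for (int dx = 0; dx < k; ++dx)
            inside += fine_at(x * k + dx, y * k + dy, z * k + dz) ? 1 : 0;
        coarse[(size_t(z) * ny + y) * nx + x] =
            uint8_t((inside * 255) / (k * k * k));
    }
    return coarse;
}
```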
# Software development

These are tools and software for virtual surgery planning, visualization, and image analysis that I have been involved in developing, and where I also did some graphics programming in OpenGL and WebGL.

## HASP

HASP (haptics-assisted surgery planning) is a software system that Pontus Olsson and I developed in a research project on haptics at the Centre for Image Analysis at Uppsala University. It uses a voxmap-pointshell algorithm for 6DOF haptic rendering, in combination with stereo graphics, to let a surgeon feel contacts between virtual bone fragments via a haptic input device and work with the data in 3D when planning a reconstruction. The first prototype from 2013 was built on top of the open-source H3DAPI from SenseGraphics AB, which we later replaced with a custom framework (based on C++ and OpenGL 3.2) that I developed. On the architecture and rendering side, HASP has some nice features, including a render queue, assets backed by JSON files, and a stencil-routed A-buffer for order-independent transparency! The source code for the framework part is public, while the rest of the HASP code is private because of a previous effort to commercialize the software. A version of the system is also installed at Uppsala University Hospital.

[[Source code (framework only)](https://bitbucket.org/FredrikNysjo/hasp_framework)] [[Most recent paper (2021)](https://link.springer.com/content/pdf/10.1007/s11548-021-02353-w.pdf)] [[Old video](https://drive.google.com/file/d/1vpAOeexyk1Uvi-A_NDajYYKtKDHPPb70/view?usp=sharing)] [[Newer video](https://drive.google.com/file/d/1mrRfR_P3DMLA5u_wWXUEs-HD4GvRXCDB/view?usp=sharing)]

## TissUUmaps

TissUUmaps is a web-based tool that biologists and pathologists can use to visualize point data and polygonal regions on top of large whole-slide images obtained from microscopes. It is developed in the Wählby Lab research group at Uppsala University. My contribution to this software has mainly been the implementation of the WebGL-based marker rendering in the JavaScript/HTML-based [core version](https://github.com/TissUUmaps/TissUUmapsCore) of the tool, to enable interactive rendering of datasets with up to tens of millions of points in the browser. I also worked on adding WebGL-based path rendering of polygonal regions to the more actively developed [standalone version](https://github.com/TissUUmaps/TissUUmaps) of the tool. For more information and some cool interactive demos, check out the [TissUUmaps website](https://tissuumaps.github.io) and its [project gallery](https://tissuumaps.github.io/gallery)!

## ichseg

A tool I developed for annotation and semi-automatic segmentation of intracranial hemorrhages in medical CT images, as part of a technical fellowship grant from AIDA in 2020. It is implemented in Python (with NumPy used for most of the processing), uses OpenGL for volume visualization and other rendering, and supports DICOM and a few other input file formats. A goal of the project was also to make it possible to connect the tool to deep learning models trained for automatic segmentation, so that radiologists can go in and draw corrections if they are not happy with the automatic result. For more details, see the source code and my [presentation slides](https://drive.google.com/file/d/1c_dhk7SCbARQWq47V2PdFekFEQ-sviRn/view?usp=sharing) from the AIDA Days 2021.

Note: people a bit familiar with graphics programming might recognise, in the screenshot, Omar Cornut's popular [Dear ImGui library](https://github.com/ocornut/imgui) being used for the GUI in the tool :)

[[Source code](https://github.com/FredrikNysjo/ichseg)]

# CV

My full CV can be found [here](cv.html).

# Contact

Fredrik Nysjö