New Book: Virtual participation, real involvement – Transformative technologies for a more inclusive society
Virtual reality (VR) and augmented reality (AR) are among the fastest-growing technologies of the 21st century. They also open up enormous opportunities for social integration: through cultural and educational offerings, through networked digital interaction spaces, or as a means of promoting citizen participation. The contributions in this volume present a wide range of application scenarios for VR and AR in the fields of education, health and public space. They demonstrate in a practical way how society, but also companies, can benefit from expanding their technological skills and taking diversity aspects into account. After all, genuine participation is only possible through a sustainable, transdisciplinary and citizen science approach.

Our book chapter ‘Human-Centred Design of Mixed Reality Applications in Medical Education – GreifbAR’ is now available in open access. Authors are Robert Luzsa, Moritz Queisner, Christopher Remde, Igor Sauer, Nadia Robertini and Susanne Mayr.

As part of the BMBF-funded project ‘Tangible Reality – Skilful Interaction of User Hands and Fingers with Real Tools in Mixed Reality Worlds’, we investigated how XR technology can be integrated into medical education. The chapter presents an interdisciplinary, XR-based training system for surgical knot tying. It describes key design principles and experiences from development and evaluation. In addition, it proposes a model for the human-centred design of comparable training applications that can also support other projects.

Opening Exhibition | »Vessels. Infrastructures of Life«

We warmly invite you to the opening of »Vessels. Infrastructures of Life« at the Berlin Museum of Medical History at the Charité (bmm), a group exhibition curated by Igor M. Sauer and Navena Widulin with contributions by Assal Daneshgar, Emile de Visscher, Frédéric Eyl, Karl Hillebrandt, Eriselda Keshi, Dietrich Polenz, Moritz Queisner, Iva Rešetar and Igor M. Sauer.

Vernissage
Wed, 4 June 2025, 7:00 - 10:00 pm

Exhibition
5 June – 12 October 2025
Tue, Thu, Fri, Sun: 10:00 am – 5:00 pm
Wed, Sat: 10:00 am – 7:00 pm
Closed on Mondays

Venue
Berliner Medizinhistorisches Museum der Charité (bmm)
Virchowweg 17
10117 Berlin

What do plants, animals, humans and cities have in common? They all have vascular systems and, therefore, an infrastructure without which they would not be able to survive.

In the human body, arteries and veins, together with the heart, move the blood. Plants have a finely branched vascular system for the transport of water and nutrients. And cities rely on an underground network of pipelines that supplies clean water and removes wastewater. The temporary exhibition, co-curated by Igor Sauer and Navena Widulin, shows how these vessels function and how they can be visualized, used and reproduced.

What can medicine learn from these natural and technical supply systems? What role does the interdisciplinary view – between biology, design, materials research and medical technology – play for regenerative medicine? And what innovative approaches can be derived from this for the development of artificial and bioartificial donor organs?

»Vessels. Infrastructures of Life« provides insights into the work of designers, material scientists and surgical researchers who are working together on solutions for the future – inspired by nature, technology and the logic of living systems. From exhibits on transplantation and regenerative medicine to examples of architecture and design, the exhibition offers exciting insights into these often-hidden structures. The objects on display correspond with those in Rudolf Virchow’s historical collection of specimens. A particular focus lies on the connections between natural vessels and human-made networks, such as the regulation of temperature in buildings or the water and wastewater supply in cities.

The temporary exhibition »Vessels. Infrastructures of Life« is a collaboration between the Berlin Museum of Medical History, the Experimental Surgery at the Charité, and the Cluster of Excellence »Matters of Activity« of Humboldt-Universität zu Berlin as part of the _matter Festival 2025.
90-Day Mortality Prediction in Elective Visceral Surgery Using Machine Learning
Our paper "90-Day Mortality Prediction in Elective Visceral Surgery Using Machine Learning: A Retrospective Multicenter Development, Validation, and Comparison Study" has been published online ahead of print in the International Journal of Surgery.
Authors are C. Riepe, R. van de Water, A. Winter, B. Pfitzner, L. Faraj, R. Ahlborn, M. Schulze, D. Zuluaga, C. Schineis, K. Beyer, J. Pratschke, B. Arnrich, I.M. Sauer, and M.M. Maurer.

Machine learning (ML) is increasingly being adopted in biomedical research; however, its potential for outcome prediction in visceral surgery remains uncertain. This study compares ML-based preoperative 90-day mortality (90DM) prediction using an aggregated multi-organ approach with conventional scoring systems and with individual organ models.

This retrospective cohort study enrolled patients undergoing major elective visceral surgery between 2014 and 2022 across two tertiary centers. Multiple ML models for preoperative 90DM prediction were trained, externally validated, and benchmarked against the American Society of Anesthesiologists (ASA) score and the revised Charlson Comorbidity Index (rCCI). Areas under the receiver operating characteristic curve (AUROC) and the precision-recall curve (AUPRC), including standard deviations, were calculated. Additionally, individual models for esophageal, gastric, intestinal, liver, and pancreatic surgery were developed and compared to an aggregated approach.

A total of 7,711 cases encompassing 78 features were included. Overall 90DM was 4% (n = 309). An XGBoost classifier demonstrated the best performance and high robustness following external validation (AUROC: 0.86 [0.01]; AUPRC: 0.2 [0.04]). All models outperformed the ASA score (AUROC: 0.72; AUPRC: 0.08) and the rCCI (AUROC: 0.81; AUPRC: 0.11). The rCCI, patient age, and C-reactive protein emerged as the most decisive model weights. Models for gastric (AUROC: 0.88 [0.13]; AUPRC: 0.24 [0.26]) and intestinal surgery (AUROC: 0.87 [0.05]; AUPRC: 0.17 [0.09]) revealed the highest organ-specific performances, while pancreatic surgery yielded the lowest results (AUROC: 0.66 [0.08]; AUPRC: 0.22 [0.12]). A combined multi-organ approach (AUROC: 0.84 [0.04]; AUPRC: 0.21 [0.06]) outperformed the weighted average across all organ-specific models (AUROC: 0.82 [0.07]; AUPRC: 0.2 [0.13]).
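The benchmarking workflow described in the study (train a gradient-boosted classifier on a heavily imbalanced cohort, then report AUROC and AUPRC) can be sketched in a few lines. The sketch below uses scikit-learn's GradientBoostingClassifier on synthetic data as a stand-in for the study's XGBoost model and clinical features; the sample sizes and parameters are illustrative, not the study's.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, average_precision_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a preoperative feature table:
# 78 features, ~4% positive class (mirroring the reported 90DM rate)
X, y = make_classification(
    n_samples=4000, n_features=78, weights=[0.96], random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.25, random_state=0
)

clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
proba = clf.predict_proba(X_test)[:, 1]  # predicted 90DM risk

auroc = roc_auc_score(y_test, proba)
auprc = average_precision_score(y_test, proba)  # area under the PR curve
print(f"AUROC={auroc:.2f}  AUPRC={auprc:.2f}")
```

With a ~4% event rate, AUPRC sits far below AUROC even for a strong model, which is why abstracts like this one report both metrics.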

ML offers robust preoperative risk stratification for 90DM in elective visceral surgery. Leveraging training across multi-organ cohorts may improve accuracy and robustness compared to organ-specific models. Prospective studies are needed to confirm the potential of ML in surgical outcome prediction.
Sparse camera volumetric video applications
The paper "Sparse camera volumetric video applications. A comparison of visual fidelity, user experience, and adaptability" is available open access in Frontiers in Signal Processing.
Authors are Christopher Remde, Igor M. Sauer, and Moritz Queisner.

Volumetric video in commercial studios is predominantly produced using a multi-view stereo process that relies on dozens of cameras to capture a scene. Due to the hardware requirements and associated processing costs, this workflow is resource-intensive and expensive, putting it out of reach for creators and researchers with smaller budgets. Low-cost volumetric video systems using RGBD cameras offer an affordable alternative. As these small, mobile systems are a relatively new technology, the available software applications vary in workflow and image quality. In this paper, we provide an overview of the technical capabilities of sparse camera volumetric video capture applications and assess their visual fidelity and workflow.

We selected volumetric video applications that are publicly available, support capture with multiple Microsoft Azure Kinect cameras, and run on consumer-grade computer hardware. We compared the features, usability, and workflow of each application and benchmarked them in five different scenarios. Based on the benchmark footage, we analyzed spatial calibration accuracy and artifact occurrence, and conducted a subjective perception study with 19 participants from a game design study program to assess the visual fidelity of the captures.
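Spatial calibration accuracy of a multi-camera rig is commonly summarized as the residual distance between corresponding points after the cameras' point clouds have been aligned into a common frame. The sketch below computes such a residual as an RMSE; the marker coordinates are made up for illustration and are not our benchmark data.

```python
import numpy as np

def calibration_rmse(ref_points: np.ndarray, cam_points: np.ndarray) -> float:
    """RMSE of per-marker distances between a reference camera's view of
    the markers and a second camera's view after calibration/alignment."""
    residuals = np.linalg.norm(ref_points - cam_points, axis=1)
    return float(np.sqrt(np.mean(residuals ** 2)))

# Hypothetical marker positions (metres) seen by two depth cameras
ref = np.array([[0.00, 0.00, 1.00],
                [0.10, 0.00, 1.00],
                [0.00, 0.10, 1.10]])
cam = ref + np.array([[ 0.002, -0.001,  0.003],
                      [ 0.001,  0.002, -0.002],
                      [-0.003,  0.001,  0.002]])

print(f"calibration RMSE: {calibration_rmse(ref, cam) * 1000:.1f} mm")
```

A few millimetres of residual error is enough to produce visible seams where the per-camera point clouds overlap, which is why calibration quality directly affects perceived fidelity.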

We evaluated three applications: Depthkit Studio, LiveScan3D, and VolumetricCapture. We found that Depthkit Studio provides the best experience for novice users, while LiveScan3D and VolumetricCapture require advanced technical knowledge to operate. The footage captured by Depthkit Studio showed the fewest artifacts by a large margin, followed by LiveScan3D and VolumetricCapture. These findings were confirmed by the study participants, who preferred Depthkit Studio over LiveScan3D and VolumetricCapture. Based on the results, we recommend Depthkit Studio for the highest-fidelity captures. LiveScan3D produces footage of only acceptable fidelity but is the only candidate available as open-source software; we therefore recommend it as a platform for research and experimentation. Due to its lower fidelity and high setup complexity, we recommend VolumetricCapture only for specific use cases where its ability to handle a high number of sensors in a large capture volume is required.
Deutschlandfunk: AI in the operating theater
How artificial intelligence supports surgeons: whether planning surgical procedures, monitoring patients, or predicting complications, research already offers many useful applications for AI in the operating theater. In hospitals, the technology is still the exception. This is likely to change soon.
A Deutschlandfunk radio show/podcast by Carina Schroeder and Friederike Walch-Nasseri reports on the work in our Digital Surgery Lab (in German).
 
Artificial intelligence is revolutionizing our everyday lives. It translates texts, filters news, analyzes X-ray images and decides who gets a job. In the “KI verstehen (Understanding AI)” podcast, Deutschlandfunk provides answers to questions about dealing with AI every week.

Surgical planning in virtual reality: a systematic review
We have just published a review on surgical planning in VR in the Journal of Medical Imaging. The systematic review examines how virtual reality (VR) is transforming surgical planning: with VR, physicians can assess patient-specific image data in 3D, enhancing surgical decision-making and the spatial localization of pathologies. We found that the benefits of VR are becoming more evident. However, its application in surgical planning remains experimental, with a need for refined study designs, improved technical reporting, and more usable VR software for effective clinical implementation. The authors of "Surgical planning in virtual reality: a systematic review" are Prof. Dr. Moritz Queisner and Karl Eisenträger.

Virtual reality (VR) technology has emerged as a promising tool for physicians, offering the ability to assess anatomical data in 3D with visuospatial interaction qualities. This systematic review aims to provide an up-to-date overview of the latest research on VR in the field of surgical planning.
A comprehensive literature search was conducted based on the preferred reporting items for systematic reviews and meta-analyses (PRISMA), covering the period from April 1, 2021 to May 10, 2023. The review summarizes the current state of research in this field, identifying key findings, technologies, study designs, methods, and potential directions for future research.

Results show that the application of VR for surgical planning is still in an experimental stage but is gradually advancing toward clinical use. The diverse study designs and methodologies and the varying quality of reporting hinder a comprehensive analysis. Some findings lack statistical evidence and rely on subjective assumptions. To strengthen evaluation, future research should focus on refining study designs, improving technical reporting, defining visual and technical proficiency requirements, and enhancing VR software usability and design. Addressing these areas could pave the way for an effective implementation of VR in clinical settings.
Spatial computing in the OR

We tested the Apple Vision Pro in the operating theatre, where it performed impressively: excellent images even in challenging lighting situations and stable interaction with the device. Still, the limited peripheral vision and awareness inherent to video-based devices are a considerable downside in surgery.

We are also looking forward to bringing our first software solutions for improved hand-eye coordination in visceral surgery to this device!
AI-based intra- and postoperative measurement from stereoimages
The publication "Redefining the Laparoscopic Spatial Sense: AI-based Intra- and Postoperative Measurement from Stereoimages“ has been accepted for the 38th AAAI Conference on Artificial Intelligence and is available via https://doi.org/10.48550/arXiv.2311.09744. The publication is the result of a fruitful collaboration between Karlsruhe Institute of Technology (KIT), Fraunhofer FIT, University of Bayreuth, and Charité – Universitätsmedizin Berlin. Authors are Leopold Müller, Patrick Hemmer, Moritz Queisner, Igor Sauer, Simeon Allmendinger, Johannes Jakubik, Michael Vössing, and Niklas Kühl.

A significant challenge in image-guided surgery is the accurate measurement of relevant structures such as vessel segments, resection margins, or bowel lengths. Although such measurements are an essential component of many surgeries, they involve substantial human effort and are prone to inaccuracies. In this paper, we develop a novel human-AI-based method for laparoscopic measurements utilizing stereo vision, guided by practicing surgeons. Based on a holistic qualitative requirements analysis, this work proposes a comprehensive measurement method that comprises state-of-the-art machine learning architectures such as RAFT-Stereo and YOLOv8. The method is assessed in various realistic experimental evaluation environments. Our results show that the method achieves high accuracy in distance measurements, with errors below 1 mm. Furthermore, on-surface measurements prove robust in challenging environments with textureless regions. Overall, by addressing the inherent challenges of image-guided surgery, we lay the foundation for a more robust and accurate solution for intra- and postoperative measurements, enabling more precise, safe, and efficient surgical procedures.
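The geometric step behind such stereo measurements can be sketched independently of the learning components: in a rectified stereo rig, a disparity estimate (e.g. from a model like RAFT-Stereo) back-projects a pixel to a 3D point via Z = f·B/d, and a length is then the Euclidean distance between two such points. The intrinsics, baseline, and disparities below are hypothetical, not the paper's setup.

```python
import numpy as np

def triangulate(u, v, disparity, f, cx, cy, baseline):
    """Back-project pixel (u, v) with stereo disparity into camera space
    using the rectified-stereo model Z = f * B / d (pinhole camera)."""
    Z = f * baseline / disparity          # depth in metres
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])

# Hypothetical intrinsics of a rectified laparoscopic stereo rig
f, cx, cy = 1000.0, 640.0, 360.0          # focal length and principal point (px)
baseline = 0.004                           # 4 mm stereo baseline (m)

# Two annotated pixels, e.g. endpoints of a resection margin
p1 = triangulate(600, 350, disparity=40.0, f=f, cx=cx, cy=cy, baseline=baseline)
p2 = triangulate(700, 355, disparity=38.0, f=f, cx=cx, cy=cy, baseline=baseline)

distance_mm = np.linalg.norm(p1 - p2) * 1000.0
print(f"measured length: {distance_mm:.1f} mm")
```

In the paper's pipeline, the disparities come from dense stereo matching and the measurement endpoints from detection/segmentation models; this sketch only shows the measurement geometry that follows.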

Priv.-Doz. Dr. med. Simon Moosburner
Today Simon Moosburner gave his inaugural lecture on "Liver Transplantation in Germany - Opportunities and Solutions for the Future". He is now – at the age of 28 (!) – a private lecturer (Privatdozent) at the Charité – Universitätsmedizin Berlin and habilitated in the field of "Experimental Surgery".

He is being honored for his achievements in the field of extracorporeal organ perfusion and organ transplantation. His postdoctoral thesis is entitled "Challenges and solutions in adults and children after liver transplantation".

Congratulations!

Dr. Zeynep Akbal
We are delighted to welcome Dr. Zeynep Akbal as a new member of the team!
Before joining the Digital Surgery Lab as a postdoctoral researcher, she studied communication sciences, media sciences, and philosophy, and developed an interdisciplinary method centred on virtual reality (VR) technology. She received her doctorate in philosophy from Universität Potsdam. Her dissertation was recently published as a monograph titled "Lived-Body Experiences in Virtual Reality: A Phenomenology of the Virtual Body."
Her research focuses on the intersection of the philosophy of perception, the cognitive sciences, and VR. In her recent research project "Tactile Stimulation in VR" at the Max Planck Institute for Human Cognitive and Brain Sciences, she investigated the behavioral consequences of haptic feedback in a VR task.

Welcome to the team!
science x media Tandem Program: "From Slices to Spaces"
Prof. Dr. Moritz Queisner and Frédéric Eyl (Designer and Managing Director of TheGreenEyl) successfully applied to the Stiftung Charité for funding as a "science x media tandem".
The science x media tandems are the first programme in the new funding priority "Open Life Science". With this funding priority, the Stiftung Charité aims to make the life sciences in Berlin more comprehensible and accessible to a broader public and to strengthen trust in medical professionals.

Under the title "From Slices to Spaces", the tandem of Moritz Queisner and Frédéric Eyl is implementing a science parcours in which spatially complex research data from surgery and biomedicine is made accessible to a broad audience in a multisensory way through new visualization techniques. Building on Moritz Queisner's research on new imaging techniques, the tandem employs extended reality techniques. Thanks to their unique ability to link digital objects with the viewers' real environment, the 4D images they generate are particularly suited to representing and conveying spatial information.

This is where the tandem's project comes in: 4D images are not only useful for researchers seeking to understand complex research data, but can also give laypeople an insight into research data and processes that requires little prior knowledge. Frédéric Eyl's media expertise will be used to make the specific visual knowledge from research comprehensible and experiential for non-experts. The science parcours is to be integrated as a digital extension into the architecture of the new research building "Der Simulierte Mensch" on the premises of Charité. The parcours will include the facade, the inter-floor airspace, and the central glass surfaces within the building as its stations. By enabling users to explore 4D research data within the architecture and to investigate it with their own smartphones in an AR application, concrete practices and deployment locations of new image-based technologies become tangible and comprehensible. The project not only enhances the visibility of Charité and of Berlin as a location of science but also opens up places of knowledge creation to the public, making the practices and techniques of the life sciences more visible.


New DFG project "4D Imaging"
The DFG Schwerpunktprogramm „Das Digitale Bild“ (SPP 2172) funds the new project “4D Imaging: From Image Theory to Imaging Practice” (2023-2026). Principal investigators are Prof. Dr. Kathrin Friedrich (Universität Bonn) and Prof. Dr. Moritz Queisner.

The term 4D imaging refers to a new form of digital visuality in which image, action and space are inextricably interwoven. 4D technologies capture, process and transmit information about physical space and make it computable in real time. Changes due to movements and actions become calculable in real time, making 4D images particularly important in aesthetic and operational contexts where they reconceptualize various forms of human-computer interaction. The 4D Imaging project responds to the growing need in medicine to understand, use, and design these complex imaging techniques. It transfers critical reflexive knowledge from research into clinical practices to enable surgeons to use and apply 4D Imaging techniques. Especially in surgical planning, 4D Imaging techniques may improve the understanding and accessibility of spatially complex anatomical structures. To this end, the project is developing approaches to how 4D imaging can complement and transform established topographic ("2D") imaging practices.

Work with us | PhD position

We are hiring: 3-year #PhD position @Charité – Universitätsmedizin Berlin.
  • Join our interdisciplinary team for a PhD on new #imaging technologies at the intersection of digital health, surgery and biomedicine
  • Explore new ways to understand and/or visualize anatomical structures in #4D using extended reality #XR #digitaltransformation
  • Connect theory and practice in an interdisciplinary research group
  • Open call: open to all disciplines! Yes, that’s right – design, computer science, computer visualistics, digital health, psychology, media studies, workplace studies, game design…
  • What counts is a convincing idea for your doctoral project in the field of "4D imaging“

Sounds interesting? Apply now or reach out to Moritz Queisner (moritz.queisner@charite.de) if you have any questions.

More information:
German: https://karriere.charite.de/stellenangebote/detail/wissenschaftliche-mitarbeiterin-wissenschaftlicher-mitarbeiter-dwm-technologietransfer-chirurgie-dm27222a
English: https://karriere.charite.de/stellenangebote/detail/scientific-researcher-phd-position-dfm-dm27222b

Prof. Dr. Moritz Queisner
Today Moritz Queisner received his appointment certificate for the professorship (W1) for Interdisciplinary Technology Transfer and Digitization in Surgery!
The professorship is associated with the DFG-funded Cluster of Excellence
»Matters of Activity«.

Congratulations!

On behalf of the Dean, Vice Dean Prof. Susanne Michl awarded the certificate.
BMBF funds KIARA
With the programme "AI-based assistance systems for process-accompanying health applications", the Federal Ministry of Education and Research (BMBF) is funding innovative research and development work on interactive assistance systems that support processes in clinical health care using artificial intelligence methods.

Together with the partners Gebrüder Martin GmbH & Co. KG, Tuttlingen, HFC Human-Factors-Consult GmbH, Berlin and the Fraunhofer Institute for Telecommunications Heinrich-Hertz-Institut (HHI), Berlin, we successfully applied with the project "AI-based recording of work processes in the operating theatre for the automated compilation of the operating theatre report" (KIARA).




Operating theatre reports document all relevant information during surgical interventions. They serve to ensure therapeutic safety and accountability and as proof of performance. The preparation of the OR report is time-consuming and ties up valuable working time – time that is then not available for the treatment of patients.

In the KIARA project, we are working on a system that automatically drafts operating theatre reports. The KIARA system is intended to relieve medical staff: it documents operating theatre activities and creates a draft of the report, which then only needs to be checked, completed and approved. The system works via cameras integrated into operating theatre lamps. Their image data is then analysed with the help of artificial intelligence to recognise and record objects, people and all operating theatre activities. The ambitious system is to be developed and tested in a user-centred manner for procedures in the abdominal cavity and in oral and maxillofacial surgery.

KIARA is intended to continuously learn through human feedback and to simplify clinical processes for the benefit of medical staff by automating the creation of operating theatre reports. The system can also be applied to other operating theatre areas in the future.

The project has a financial volume of € 2.16 million.
The kick-off meeting took place on 16.09.2022 at the Charité.
„Si-M-Day“ | November 24th, 2022
Join us – at our online networking event.
We, the Si-M spokespersons and coordinators, are pleased to invite you to our first symposium, the "Si-M-Day", on 24 November from 9 am to 2 pm (online).
It is dedicated to networking and initiation of projects between investigators of both partner institutions.
Click here to register until November 18th (abstract submission deadline: October 17th).
Active Matter in Robotic-Assisted Surgery
Tuesday, 12.09.2022 | Cluster Retreat | Matters of Activity

2:30 – 2:45 pm Welcome & Intro
2:45 – 4:15 pm Panel 1
Rasa Weber Product Design (20 Minutes)
Felix Rasehorn Product Design (20 Minutes)
Binru Yang Engineering (20 Minutes)
Panel Discussion (30 Minutes)

4:15 – 4:45 pm Coffee Break
4:45 – 6:15 pm Panel 2
Jakub Rondomanski Mathematics (20 Minutes)
Babette Werner Art and Visual History (20 Minutes)
Anna Schäffner & Dominic Eger Domingos Product Design (20 Minutes)
Panel Discussion (30 Minutes)

6:15–7:30 pm Opening Exhibition and Aperitivo
Si-M | Topping-out Ceremony
Today, representatives of Charité – Universitätsmedizin Berlin and Technische Universität Berlin celebrated the topping-out ceremony for the research building "Der Simulierte Mensch" (Si-M, "The Simulated Human") together with political representatives. Guests included the Governing Mayor Franziska Giffey, Senator for Health and Science and Charité Supervisory Board Chair Ulrike Gote and Finance Senator Daniel Wesener.

We are very excited: this will be a great building with even greater content.

© 2025 Prof. Dr. Igor M. Sauer | Charité - Universitätsmedizin Berlin | Disclaimer
