90-Day Mortality Prediction in Elective Visceral Surgery Using Machine Learning
Our paper "90-Day Mortality Prediction in Elective Visceral Surgery Using Machine Learning: A Retrospective Multicenter Development, Validation, and Comparison Study" has been published online ahead of print in the International Journal of Surgery.
Authors are C. Riepe, R. van de Water, A. Winter, B. Pfitzner, L. Faraj, R. Ahlborn, M. Schulze, D. Zuluaga, C. Schineis, K. Beyer, J. Pratschke, B. Arnrich, I.M. Sauer, and M.M. Maurer

Machine learning (ML) is increasingly being adopted in biomedical research; however, its potential for outcome prediction in visceral surgery remains uncertain. This study compares an aggregated multi-organ ML approach for preoperative 90-day mortality (90DM) prediction with conventional scoring systems and individual organ-specific models.

This retrospective cohort study enrolled patients undergoing major elective visceral surgery between 2014 and 2022 across two tertiary centers. Multiple ML models for preoperative 90DM prediction were trained, externally validated, and benchmarked against the American Society of Anesthesiologists (ASA) score and the revised Charlson Comorbidity Index (rCCI). Areas under the receiver operating characteristic curve (AUROC) and the precision-recall curve (AUPRC), including standard deviations, were calculated. Additionally, individual models for esophageal, gastric, intestinal, liver, and pancreatic surgery were developed and compared to an aggregated approach. A total of 7,711 cases encompassing 78 features were included. Overall 90DM was 4% (n = 309). An XGBoost classifier demonstrated the best performance and high robustness following external validation (AUROC: 0.86 [0.01]; AUPRC: 0.2 [0.04]). All models outperformed the ASA score (AUROC: 0.72; AUPRC: 0.08) and the rCCI (AUROC: 0.81; AUPRC: 0.11). rCCI, patient age, and C-reactive protein emerged as the most decisive model features. Models for gastric (AUROC: 0.88 [0.13]; AUPRC: 0.24 [0.26]) and intestinal surgery (AUROC: 0.87 [0.05]; AUPRC: 0.17 [0.09]) achieved the highest organ-specific performances, while pancreatic surgery yielded the lowest results (AUROC: 0.66 [0.08]; AUPRC: 0.22 [0.12]). A combined multi-organ approach (AUROC: 0.84 [0.04]; AUPRC: 0.21 [0.06]) demonstrated superiority over the weighted average across all organ-specific models (AUROC: 0.82 [0.07]; AUPRC: 0.2 [0.13]).
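For readers who want to see the benchmarking idea in code, below is a minimal sketch of training a gradient-boosted classifier on preoperative features and scoring it with AUROC and AUPRC. The synthetic X and y are placeholders for illustration only (78 features and a roughly 4% event rate, as in the cohort), not the study's actual data pipeline or hyperparameters.

```python
# Minimal sketch of the benchmarking setup: train a gradient-boosted
# classifier on preoperative features and report AUROC / AUPRC.
# X and y are synthetic placeholders, not the study's actual pipeline.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
X = rng.normal(size=(7711, 78))            # stand-in for preoperative features
y = (rng.random(7711) < 0.04).astype(int)  # stand-in 90-day mortality label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = XGBClassifier(n_estimators=300, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)

proba = model.predict_proba(X_test)[:, 1]
print(f"AUROC: {roc_auc_score(y_test, proba):.2f}")
print(f"AUPRC: {average_precision_score(y_test, proba):.2f}")
```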

ML offers robust preoperative risk stratification for 90DM in elective visceral surgery. Leveraging training across multi-organ cohorts may improve accuracy and robustness compared to organ-specific models. Prospective studies are needed to confirm the potential of ML in surgical outcome prediction.
Sparse camera volumetric video applications
The paper "Sparse camera volumetric video applications. A comparison of visual fidelity, user experience, and adaptability" is available open access in Frontiers in Signal Processing.
Authors are Christopher Remde, Igor M. Sauer, and Moritz Queisner.

Volumetric video in commercial studios is predominantly produced using a multi-view stereo process that relies on dozens of cameras to capture a scene. Given the hardware requirements and associated processing costs, this workflow is resource-intensive and expensive, putting it out of reach for creators and researchers with smaller budgets. Low-cost volumetric video systems using RGBD cameras offer an affordable alternative. As these small, mobile systems are a relatively new technology, the available software applications vary in terms of workflow and image quality. In this paper we provide an overview of the technical capabilities of sparse camera volumetric video capture applications and assess their visual fidelity and workflow.

We selected volumetric video applications that are publicly available, support capture with multiple Microsoft Azure Kinect cameras, and run on consumer-grade computer hardware. We compared the features, usability, and workflow of each application and benchmarked them in five different scenarios. Based on the benchmark footage, we analyzed spatial calibration accuracy and artifact occurrence, and conducted a subjective perception study with 19 participants from a game design study program to assess the visual fidelity of the captures.
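To illustrate the core step that all such sparse-camera applications share: each camera's point cloud must be transformed by its extrinsic calibration into a common world frame and merged. Below is a minimal Open3D sketch of that step; the file names and identity extrinsics are placeholder assumptions, and each evaluated application implements its own, more sophisticated variant.

```python
# Minimal sketch of sparse-camera fusion: apply each camera's extrinsic
# calibration and merge the clouds into one. File names and the identity
# extrinsics are placeholder assumptions for illustration.
import numpy as np
import open3d as o3d

# Per-camera point clouds, e.g. exported from an Azure Kinect capture.
clouds = [o3d.io.read_point_cloud(f"camera_{i}.ply") for i in range(3)]

# 4x4 rigid transforms from each camera frame into a common world frame,
# produced by spatial calibration (identity matrices as placeholders).
extrinsics = [np.eye(4) for _ in range(3)]

merged = o3d.geometry.PointCloud()
for cloud, T in zip(clouds, extrinsics):
    cloud.transform(T)  # apply the extrinsic calibration in place
    merged += cloud

# Downsample to thin out duplicated points where camera views overlap.
merged = merged.voxel_down_sample(voxel_size=0.005)
o3d.visualization.draw_geometries([merged])
```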

We evaluated three applications: Depthkit Studio, LiveScan3D, and VolumetricCapture. We found Depthkit Studio to provide the best experience for novice users, while LiveScan3D and VolumetricCapture require advanced technical knowledge to operate. The footage captured by Depthkit Studio showed the fewest artifacts by a large margin, followed by LiveScan3D and VolumetricCapture. These findings were confirmed by the participants, who preferred Depthkit Studio over LiveScan3D and VolumetricCapture. Based on the results, we recommend Depthkit Studio for the highest-fidelity captures. LiveScan3D produces footage of only acceptable fidelity but is the only candidate available as open-source software; we therefore recommend it as a platform for research and experimentation. Due to its lower fidelity and high setup complexity, we recommend VolumetricCapture only for specific use cases where its ability to handle a high number of sensors in a large capture volume is required.
Deutschlandfunk: AI in the operating theater
How artificial intelligence supports surgeons: planning surgical procedures, monitoring patients, and predicting complications. Research has already produced many useful applications for AI in the operating theater, yet in hospitals the technology is still the exception. This is likely to change soon.
A Deutschlandfunk radio show/podcast by Carina Schroeder and Friederike Walch-Nasseri reports on the work in our Digital Surgery Lab (in German).
 
Artificial intelligence is revolutionizing our everyday lives. It translates texts, filters news, analyzes X-ray images and decides who gets a job. In the “KI verstehen (Understanding AI)” podcast, Deutschlandfunk provides answers to questions about dealing with AI every week.

Surgical planning in virtual reality: a systematic review
Stacks Image 27662
We just published a review on surgical planning in VR in the Journal of Medical Imaging. In this systematic review we examine how virtual reality (VR) is transforming surgical planning. With VR, physicians can assess patient-specific image data in 3D, enhancing surgical decision-making and the spatial localization of pathologies. We found that the benefits of VR are becoming more evident. However, its application in surgical planning remains experimental, with a need for refined study designs, improved technical reporting, and enhanced VR software usability for effective clinical implementation. Authors of "Surgical planning in virtual reality: a systematic review" are Prof. Dr. Moritz Queisner and Karl Eisenträger.

Virtual reality (VR) technology has emerged as a promising tool for physicians, offering the ability to assess anatomical data in 3D with visuospatial interaction qualities. This systematic review aims to provide an up-to-date overview of the latest research on VR in the field of surgical planning.
A comprehensive literature search was conducted based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), covering the period from April 1, 2021 to May 10, 2023. The review summarizes the current state of research in this field, identifying key findings, technologies, study designs, methods, and potential directions for future research. Results show that the application of VR for surgical planning is still at an experimental stage but is gradually advancing toward clinical use. The diverse study designs and methodologies and the varying quality of reporting hinder a comprehensive analysis. Some findings lack statistical evidence and rely on subjective assumptions. To strengthen evaluation, future research should focus on refining study designs, improving technical reporting, defining visual and technical proficiency requirements, and enhancing VR software usability and design. Addressing these areas could pave the way for an effective implementation of VR in clinical settings.
Spatial computing in the OR

We tested the Apple Vision Pro in the operating theatre, and it cut an excellent figure: great images even in challenging lighting situations and stable interaction with the device, even though the limited peripheral vision and awareness inherent to video-based devices are a considerable downside in surgery.

We are looking forward to bringing our first software solutions for improved hand-eye coordination in visceral surgery to this device as well!
AI-based intra- and postoperative measurement from stereoimages
The publication "Redefining the Laparoscopic Spatial Sense: AI-based Intra- and Postoperative Measurement from Stereoimages" has been accepted for the 38th AAAI Conference on Artificial Intelligence and is available via https://doi.org/10.48550/arXiv.2311.09744. The publication is the result of a fruitful collaboration between Karlsruhe Institute of Technology (KIT), Fraunhofer FIT, University of Bayreuth, and Charité – Universitätsmedizin Berlin. Authors are Leopold Müller, Patrick Hemmer, Moritz Queisner, Igor Sauer, Simeon Allmendinger, Johannes Jakubik, Michael Vössing, and Niklas Kühl.

A significant challenge in image-guided surgery is the accurate measurement of relevant structures such as vessel segments, resection margins, or bowel lengths. While this task is an essential component of many surgeries, it involves substantial human effort and is prone to inaccuracies. In this paper, we develop a novel human-AI-based method for laparoscopic measurements utilizing stereo vision, guided by practicing surgeons. Based on a holistic qualitative requirements analysis, this work proposes a comprehensive measurement method that comprises state-of-the-art machine learning architectures such as RAFT-Stereo and YOLOv8. The developed method is assessed in various realistic experimental evaluation environments. Our results demonstrate the potential of the method to achieve high accuracy in distance measurements, with errors below 1 mm. Furthermore, on-surface measurements demonstrate robustness when applied in challenging environments with textureless regions. Overall, by addressing the inherent challenges of image-guided surgery, we lay the foundation for a more robust and accurate solution for intra- and postoperative measurements, enabling more precise, safe, and efficient surgical procedures.
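To make the measurement principle concrete, here is a minimal sketch of stereo-based distance measurement. It uses OpenCV's classical StereoSGBM matcher as a stand-in for the RAFT-Stereo network used in the paper, and the focal length, baseline, and pixel coordinates are illustrative assumptions rather than values from the publication.

```python
# Sketch of the measurement principle: disparity between a rectified
# stereo pair yields depth, from which 3D distances can be computed.
# StereoSGBM stands in for RAFT-Stereo; all parameters are assumed.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM is fixed-point

f_px = 1000.0       # focal length in pixels (assumed)
baseline_m = 0.004  # stereo baseline in metres (assumed, laparoscope scale)

def point_3d(u, v):
    """Back-project pixel (u, v) into camera coordinates via its disparity."""
    z = f_px * baseline_m / disparity[v, u]  # assumes a valid disparity here
    x = (u - left.shape[1] / 2) * z / f_px
    y = (v - left.shape[0] / 2) * z / f_px
    return np.array([x, y, z])

# Euclidean distance between two annotated landmarks, e.g. resection margins.
p, q = point_3d(320, 240), point_3d(400, 250)
print(f"distance: {np.linalg.norm(p - q) * 1000:.2f} mm")
```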

Priv.-Doz. Dr. med. Simon Moosburner
Today Simon Moosburner gave his inaugural lecture on "Liver Transplantation in Germany - Opportunities and Solutions for the Future". He is now – at the age of 28 (!) – a private lecturer (Privatdozent) at the Charité – Universitätsmedizin Berlin and habilitated in the field of "Experimental Surgery".

He is being honored for his achievements in the field of extracorporeal organ perfusion and organ transplantation. His postdoctoral thesis is entitled "Challenges and solutions in adults and children after liver transplantation".

Congratulations!

Dr. Zeynep Akbal
We are delighted to welcome Dr. Zeynep Akbal as a new member of the team!
Before joining the Digital Surgery Lab as a post-doctoral researcher, she studied communication sciences, media sciences, and philosophy, and worked on developing her interdisciplinary method around virtual reality (VR) technology. She completed her doctorate in philosophy at Universität Potsdam. Her dissertation was recently published as a monograph titled "Lived-Body Experiences in Virtual Reality: A Phenomenology of the Virtual Body."
Her research focuses on the intersection of the philosophy of perception, the cognitive sciences, and VR. In her recent research project "Tactile Stimulation in VR" at the Max Planck Institute for Human Cognitive and Brain Sciences, she focused on the behavioral consequences of haptic feedback in a VR task.

Welcome to the team!
science x media Tandem Program: "From Slices to Spaces"
Prof. Dr. Moritz Queisner and Frédéric Eyl (Designer and Managing Director of TheGreenEyl) successfully applied to the Stiftung Charité for funding as a "science x media tandem".
The science x media tandems are the first programme in the new funding priority "Open Life Science". With this funding priority, the Stiftung Charité is working to make the life sciences in Berlin more comprehensible and accessible to a broader public and to strengthen trust in medical professionals.

Under the title "From Slices to Spaces", the tandem of Moritz Queisner and Frédéric Eyl is implementing a science parcours in which spatially complex research data from surgery and biomedicine will be made multisensorially accessible to a broad audience through new visualization techniques. Building on Moritz Queisner's research on new imaging techniques, they employ extended reality (XR) techniques. Due to their unique ability to link digital objects with the viewers' real environment, the 4D images they generate are particularly suited to representing and conveying spatial information.

This is where the tandem's project comes in: 4D images are not only interesting for researchers trying to understand complex research data but can also give laypeople a more accessible insight into research data and processes. Frédéric Eyl's media expertise will be used to make the specific visual knowledge from research comprehensible and experiential for non-experts. The science parcours is intended to be integrated as a digital extension into the architecture of the new research building, "Der Simulierte Mensch", located on the premises of Charité. The parcours will include the facade, the inter-floor airspace, and the central glass surfaces within the building as its stations. By enabling users to explore 4D research data within the architecture and investigate it with their own smartphones in an AR application, concrete practices and deployment locations of new image-based technologies become experiential and comprehensible. The project not only enhances the perception of Charité and Berlin as a place of science but also opens up places of knowledge creation to the public, making the practices and techniques of the life sciences more visible.


New DFG project "4D Imaging"
The DFG Schwerpunktprogramm "Das Digitale Bild" (SPP 2172) funds the new project "4D Imaging: From Image Theory to Imaging Practice" (2023-2026). Principal investigators are Prof. Dr. Kathrin Friedrich (Universität Bonn) and Prof. Dr. Moritz Queisner.

The term 4D imaging refers to a new form of digital visuality in which image, action, and space are inextricably interwoven. 4D technologies capture, process, and transmit information about physical space and make it computable in real time. Changes due to movements and actions become calculable as they happen, making 4D images particularly important in aesthetic and operational contexts, where they reconceptualize various forms of human-computer interaction. The 4D Imaging project responds to the growing need in medicine to understand, use, and design these complex imaging techniques. It transfers critical, reflexive knowledge from research into clinical practice to enable surgeons to use and apply 4D imaging techniques. Especially in surgical planning, 4D imaging techniques may improve the understanding and accessibility of spatially complex anatomical structures. To this end, the project is developing approaches to how 4D imaging can complement and transform established topographic ("2D") imaging practices.

Work with us | PhD position

We are hiring: 3-year #PhD position @Charité – Universitätsmedizin Berlin.
  • Join our interdisciplinary team for a PhD on new #imaging technologies at the intersection of digital health, surgery and biomedicine
  • Explore new ways to understand and/or visualize anatomical structures in #4D using extended reality #XR #digitaltransformation
  • Connect theory and practice in an interdisciplinary research group
  • Open call: open to all disciplines! Yes, that’s right – design, computer science, computer visualistics, digital health, psychology, media studies, workplace studies, game design…
  • What counts is a convincing idea for your doctoral project in the field of "4D imaging"

Sounds interesting? Apply now or reach out to Moritz Queisner (moritz.queisner@charite.de) if you have any questions.

More information:
German: https://karriere.charite.de/stellenangebote/detail/wissenschaftliche-mitarbeiterin-wissenschaftlicher-mitarbeiter-dwm-technologietransfer-chirurgie-dm27222a
English: https://karriere.charite.de/stellenangebote/detail/scientific-researcher-phd-position-dfm-dm27222b

Prof. Dr. Moritz Queisner
Today Moritz Queisner received his appointment certificate for the professorship (W1) for Interdisciplinary Technology Transfer and Digitization in Surgery!
The professorship is associated with the DFG-funded Cluster of Excellence »Matters of Activity«.

Congratulations!

On behalf of the Dean, Vice Dean Prof. Susanne Michl awarded the certificate.
BMBF funds KIARA
With the programme "AI-based assistance systems for process-accompanying health applications", the Federal Ministry of Education and Research (BMBF) is funding innovative research and development work on interactive assistance systems that support processes in clinical health care using artificial intelligence methods.

Together with the partners Gebrüder Martin GmbH & Co. KG, Tuttlingen, HFC Human-Factors-Consult GmbH, Berlin and the Fraunhofer Institute for Telecommunications Heinrich-Hertz-Institut (HHI), Berlin, we successfully applied with the project "AI-based recording of work processes in the operating theatre for the automated compilation of the operating theatre report" (KIARA).




Operating theatre reports document all relevant information during surgical interventions. They ensure therapeutic safety and accountability and serve as proof of performance. Preparing the OR report is time-consuming and ties up valuable working time that is then not available for the treatment of patients.

In the KIARA project, we are working on a system that automatically drafts operating theatre reports. The KIARA system is intended to relieve medical staff: it documents operating theatre activities and creates a draft of the report, which then only needs to be checked, completed and approved. The system works via cameras integrated into operating theatre lamps. Their image data is then analysed with the help of artificial intelligence to recognise and record objects, people and all operating theatre activities. The ambitious system is to be developed and tested in a user-centred manner for procedures in the abdominal cavity and in oral and maxillofacial surgery.
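As a purely hypothetical illustration of the final step, the sketch below turns a stream of recognized, timestamped operating theatre events into a chronological report draft for staff to review. All names and structures here are invented for illustration and do not reflect the actual KIARA implementation.

```python
# Hypothetical sketch of the KIARA idea: timestamped activities recognized
# by the vision models are assembled into an OR report draft that staff
# then check, complete and approve. Names are invented for illustration
# and do not reflect the actual KIARA system.
from dataclasses import dataclass

@dataclass
class OREvent:
    time: str      # wall-clock time of the recognized activity
    actor: str     # e.g. "surgeon", "scrub nurse"
    activity: str  # e.g. "skin incision", "instrument handover"

def draft_report(events: list[OREvent]) -> str:
    """Assemble the recognized event stream into a chronological draft."""
    lines = ["OPERATIVE REPORT (DRAFT: requires review and approval)"]
    lines += [f"{e.time}  {e.actor}: {e.activity}" for e in events]
    return "\n".join(lines)

events = [
    OREvent("09:02", "surgeon", "skin incision"),
    OREvent("09:48", "surgeon", "specimen removal"),
    OREvent("10:15", "surgeon", "wound closure"),
]
print(draft_report(events))
```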

KIARA is intended to continuously learn through human feedback and to simplify clinical processes for the benefit of medical staff by automating the creation of operating theatre reports. The system can also be applied to other operating theatre areas in the future.

The project has a financial volume of € 2.16 million.
The kick-off meeting took place on 16.09.2022 at the Charité.
"Si-M-Day" | November 24th, 2022
Join us – at our online networking event.
We, the Si-M spokespersons and coordinators, are pleased to invite you to our first symposium, "Si-M-Day", on November 24th from 9 a.m. to 2 p.m., online.
It is dedicated to networking and the initiation of projects between investigators of the two partner institutions.
Click here to register by November 18th (abstract submission deadline: October 17th).
Active Matter in Robotic-Assisted Surgery
Tuesday, 12.09.2022 | Cluster Retreat | Matters of Activity

2:30 – 2:45 pm Welcome & Intro
2:45 – 4:15 pm Panel 1
Rasa Weber, Product Design (20 minutes)
Felix Rasehorn, Product Design (20 minutes)
Binru Yang, Engineering (20 minutes)
Panel Discussion (30 minutes)

4:15 – 4:45 pm Coffee Break
4:45 – 6:15 pm Panel 2
Jakub Rondomanski, Mathematics (20 minutes)
Babette Werner, Art and Visual History (20 minutes)
Anna Schäffner & Dominic Eger Domingos, Product Design (20 minutes)
Panel Discussion (30 minutes)

6:15 – 7:30 pm Opening Exhibition and Aperitivo
Si-M | Topping-out Ceremony
Today, representatives of Charité – Universitätsmedizin Berlin and Technische Universität Berlin celebrated the topping-out ceremony for the research building "Der Simulierte Mensch" (Si-M, "The Simulated Human") together with political representatives. Guests included Governing Mayor Franziska Giffey, Senator for Health and Science and Charité Supervisory Board Chair Ulrike Gote, and Finance Senator Daniel Wesener.

We are very excited: this will be a great building with even greater content.

VolumetricOR | Surgical Innovation
Our paper "VolumetricOR: A new Approach to Simulate Surgical Interventions in Virtual Reality for Training and Education" is available in the latest issue of Surgical Innovation.

Surgical training is primarily carried out through observation while assisting, in on-site classes, by watching videos, and through various formats of simulation. Simulating physical presence in the operating theatre in virtual reality might complement these necessary experiences. A prerequisite is a new education concept for virtual classes that communicates the unique workflows and decision-making paths of surgical health professions (i.e. surgeons, anesthesiologists, and surgical assistants) in an authentic and immersive way. For this project, media scientists, designers and surgeons worked together to develop the foundations for new ways of conveying knowledge in surgery using virtual reality.
A technical workflow to record and present volumetric videos of surgical interventions in a photorealistic virtual operating room was developed. Situated in the virtual reality demonstrator called VolumetricOR, users can experience and navigate through surgical workflows as if they were physically present. The concept is compared with traditional video-based formats of digital simulation in surgical training.
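As a simplified illustration of the playback side of such a workflow, the sketch below renders a recorded sequence of per-frame point clouds with Open3D. The frame files and playback rate are assumptions; VolumetricOR itself embeds the footage in a photorealistic virtual operating room rather than a bare viewer.

```python
# Simplified sketch of volumetric playback: render a recorded sequence
# of per-frame point clouds in order. Frame files and rate are assumed.
import time
import open3d as o3d

vis = o3d.visualization.Visualizer()
vis.create_window(window_name="volumetric playback")

cloud = o3d.io.read_point_cloud("frames/frame_0000.ply")
vis.add_geometry(cloud)

for i in range(1, 300):  # 300 recorded frames (assumed)
    nxt = o3d.io.read_point_cloud(f"frames/frame_{i:04d}.ply")
    cloud.points = nxt.points  # swap in the next frame's geometry
    cloud.colors = nxt.colors
    vis.update_geometry(cloud)
    vis.poll_events()
    vis.update_renderer()
    time.sleep(1 / 30)  # roughly 30 fps playback

vis.destroy_window()
```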

VolumetricOR lets trainees experience surgical action and workflows a) three-dimensionally, b) from any perspective, and c) at real scale. This strengthens the link between theoretical expertise and the practical application of knowledge and shifts the learning experience from observation to participation.
Discussion: Volumetric training environments allow trainees to acquire procedural knowledge before entering the operating room and could improve the efficiency and quality of learning and training for professional staff by communicating techniques and workflows when opportunities for on-site training are limited.

Authors are Moritz Queisner, Michael Pogorzhelskiy, Christopher Remde, Johann Pratschke, and Igor M. Sauer.
BMBF grant – GreifbAR
The Federal Ministry of Education and Research (BMBF) funds the project "Tangible reality - skilful interaction of user hands and fingers with real tools in mixed reality worlds (GreifbAR)" – a cooperation of the Augmented Vision group of the DFKI (Prof. Dr. Didier Stricker), the Department of Psychology and Human-Machine Interaction of the University of Passau (Prof. Dr. Susanne Mayr), the company NMY Mixed Reality Communication (Christoph Lenk), and the Experimental Surgery of Charité – Universitätsmedizin Berlin (Prof. Dr. Igor M. Sauer).

The goal of the GreifbAR project is to make extended reality (XR) worlds, including virtual (VR) and mixed reality (MR), tangible and graspable by allowing users to interact with real and virtual objects with their bare hands. Hand accuracy and dexterity are paramount for performing precise tasks in many fields, but the capture of hand-object interaction in current XR systems is woefully inadequate. Current systems rely on hand-held controllers or capture devices that are limited to hand gestures without contact with real objects. GreifbAR addresses this limitation with a sensing system that detects the full hand grip, including the hand surface and the object pose, when users interact with real objects or tools. This sensing system will be integrated into a mixed reality training simulator.
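As a rough illustration of the kind of markerless hand signal such a system builds on, the sketch below tracks fingertip landmarks from a webcam with MediaPipe Hands. MediaPipe is a stand-in for illustration only; GreifbAR's actual sensing system, which captures the full hand surface and object pose, is being developed by DFKI.

```python
# Rough illustration of markerless hand capture with MediaPipe Hands.
# This is a stand-in for illustration only, not GreifbAR's sensing system.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=2, min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)  # webcam as an assumed input source

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV delivers BGR frames.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            tip = hand.landmark[mp.solutions.hands.HandLandmark.INDEX_FINGER_TIP]
            print(f"index fingertip: ({tip.x:.3f}, {tip.y:.3f}, {tip.z:.3f})")
    if cv2.waitKey(1) & 0xFF == 27:  # press Esc to stop
        break

cap.release()
hands.close()
```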

Competent handling of instruments and suture material is the basis of every surgical activity. The most important instruments in surgery are the hands of the surgical staff, whose work is characterised by the targeted use of a large number of instruments that have to be operated and controlled in different ways. Until now, surgical knotting techniques have been learned through personal instruction by experienced surgeons, blackboard illustrations and video-based tutorials. A training and teaching concept based on capturing finger movement does not yet exist in surgical education and training. Learning surgical knotting techniques through participant observation and direct instruction by experienced surgeons is cost-intensive and hardly scalable. This type of training is increasingly reaching its limits in daily clinical practice, in particular owing to the changed economic, social and regulatory conditions in surgical practice. Students, trainees and specialist staff in further training therefore face the problem of applying and practising their acquired theoretical knowledge in a practice-oriented manner. Text- and image-based media allow scalable theoretical knowledge acquisition independent of time and place; however, gestures and work steps can only be passively observed and subsequently imitated, and learning success cannot be quantitatively measured and verified.

The aim of Charité's sub-project is therefore threefold: to develop a mixed/augmented reality (MR/AR) application scenario for the spatial guidance and verified recording of the complex fine motor finger movements involved in tying surgical knots, to implement and technically test the concept in a demonstrator, and to evaluate the system's usability in a clinical context.