PERISKOP brings hospital stays to life with immersive 360° videos
A hospital stay can be unsettling – especially in the run-up to major operations or medical procedures. "Patient Empowerment Through Immersive Hospital Experiences before Operations or Procedures" (PERISKOP) uses immersive 360° films to bring key wards and medical care processes to life. Stereoscopic 360° videos show the environment and processes in the intensive care unit, during preparation for surgery and in obstetrics.

This gives patients the opportunity to familiarise themselves with the environment before their stay, learn about spatial and organisational processes, and experience potentially anxiety-provoking situations in advance. The aim is to reduce uncertainty, create a feeling of familiarity and support individual preparation for the hospital stay.
The project also examines how VR-supported information services can improve patient education and be integrated into clinical routine in the long term to support hospital staff.

The studies are based on three stereoscopic 360° films on the clinical areas of anaesthesia, intensive care and childbirth, produced at original locations at Charité – Universitätsmedizin Berlin. The films are not only aimed at patients: hospital staff can also use the recordings to gain the patient's perspective and thus deepen their empathy and understanding of the clinical experience. Each sub-study follows its own research design, tailored to the respective clinical situation and target group. Among other things, the studies examine preoperative anxiety, the subjective feeling of preparation, and effects on empathy and communication in everyday clinical practice. Data collection is currently underway. The project is funded by the Charité Foundation through the Max Rubner Prize 2024.

Learn more about PERISKOP via periskop.experimental-surgery.de/

We thank Stiftung Charité for their support in bringing this project to life!
Virtual reality volumetric rendering versus cross-sectional imaging for pancreatic cancer resectability assessment
Our paper on "Virtual reality volumetric rendering versus cross-sectional imaging for pancreatic cancer resectability assessment: a pilot randomized controlled reader study" is available here. Authors are K. Eisenträger, K. Saribeyoglu, U. Fehrenbach, M. Felsenstein, L. Timmermann, P.L.M. Pereira, W. Schöning, B. Strücker, J. Pratschke, A. Pascher, T. Malinka, I.M. Sauer, H. Morgul, and M. Queisner.
This study evaluated whether virtual reality (VR) visualization of CT scans improves the assessment of pancreatic cancer resectability compared with conventional cross-sectional imaging (CSI) on 2D screens. Ten hepatopancreatobiliary surgeons assessed twelve CT cases using either VR volumetric rendering or standard CSI. Results showed that CSI outperformed VR. CSI achieved substantial inter-rater agreement (κ = 0.609), whereas VR showed only slight agreement (κ = 0.127). Diagnostic accuracy was also higher with CSI (84.7% vs. 79.7%), particularly for determining resectability (83.3% vs. 58.3%). Surgeons using VR reported lower confidence, while assessment time was similar between the two methods.
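For readers unfamiliar with the κ statistic reported above: kappa measures agreement between raters corrected for the agreement expected by chance (0 ≈ chance level, 1 = perfect). A minimal two-rater sketch in Python, with invented ratings for illustration (the study itself involved ten readers, for which multi-rater variants such as Fleiss' kappa are typically used):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] * freq_b.get(label, 0) for label in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Invented ratings: resectable (R) vs. borderline/unresectable (B).
a = ["R", "R", "B", "B", "R", "B", "R", "R"]
b = ["R", "R", "B", "R", "R", "B", "R", "B"]
print(round(cohens_kappa(a, b), 3))  # → 0.467, "moderate" agreement
```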
Overall, this preliminary study suggests that the tested VR visualization strategy performed worse than conventional imaging. However, previous research indicates that different or hybrid VR visualization approaches may still improve agreement, implying that the specific visualization method—rather than VR technology itself—determines clinical usefulness.
Overcoming the data barrier: transfer learning for 90-day mortality prediction in general surgery
The study "Overcoming the data barrier: transfer learning for 90-day mortality prediction in general surgery - a retrospective multicenter development and comparison study" was published in the January issue of the International Journal of Surgery by A. Winter, B. Pfitzner, R.P. van de Water, L. Faraj, C. Riepe, W.H. Hahn, F. Krenzien, C. Schineis, T. Malinka, W. Schöning, C. Denecke, B. Arnrich, K. Beyer, J. Pratschke, I.M. Sauer, and M.M. Maurer.

This multicenter study investigated whether transfer learning (TL) can improve AI-based prediction of 90-day postoperative mortality in general surgery, where limited datasets often hinder the development of robust models.

Data from 14,922 patients undergoing esophageal, liver, pancreatic, or colorectal surgery across three tertiary centers (2015–2023) were analyzed using 85 preoperative variables. Large source neural network models were first trained, then fine-tuned for specific surgical procedures using transfer learning. These models were compared with conventional machine learning (ML) approaches and standard clinical risk scores.
Results showed that ML models already outperformed traditional risk scores (e.g., ASA and Charlson Comorbidity Index). Transfer learning further improved performance, particularly in predicting mortality for esophageal (+38% AUPRC), liver (+14%), and pancreatic surgery (+8%). Across all models, patient age and the Charlson Comorbidity Index were the most influential predictors.
Overall, the study demonstrates that transfer learning can significantly enhance AI model performance in surgical settings with limited data, offering a promising strategy for improving preoperative risk stratification and decision-making in general surgery.
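The transfer learning workflow described above (pretrain on a large source cohort, then fine-tune on a small procedure-specific cohort) can be sketched with a toy logistic regression warm-started from source weights. This is illustrative only: synthetic data and a simple linear model stand in for the study's neural networks, and all names and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, w=None, epochs=200, lr=0.1):
    """Logistic regression via gradient descent; passing `w` warm-starts training."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w = w - lr * grad
    return w

def accuracy(w, X, y):
    return float(((sigmoid(X @ w) > 0.5) == y).mean())

# Large "source" cohort, e.g. all general-surgery cases (synthetic).
X_src = rng.normal(size=(2000, 5))
true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
y_src = (sigmoid(X_src @ true_w) > rng.random(2000)).astype(float)

# Small "target" cohort, e.g. one procedure, with a related but shifted boundary.
X_tgt = rng.normal(size=(60, 5))
y_tgt = (sigmoid(X_tgt @ (true_w + 0.3)) > rng.random(60)).astype(float)

w_scratch  = train(X_tgt, y_tgt)                     # target data only
w_source   = train(X_src, y_src)                     # pretraining on the source cohort
w_transfer = train(X_tgt, y_tgt, w=w_source.copy())  # fine-tuning: transfer learning

print(accuracy(w_scratch, X_tgt, y_tgt), accuracy(w_transfer, X_tgt, y_tgt))
```

With only 60 target cases, the warm-started model benefits from structure already learned on the large source cohort, which is the intuition behind the AUPRC gains reported above.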
Privacy preserving federated learning for 90-day mortality prediction in colorectal surgery
M.M. Maurer, B. Pfitzner, R.P. van de Water, L. Faraj, C. Riepe, D. Zuluaga, F. Krenzien, N. Raschzok, R. Siegel, C. Schineis, B. Arnrich, K. Beyer, J. Pratschke, I.M. Sauer, and A. Winter evaluated federated learning (FL) as a privacy-preserving approach for AI-based prediction of 90-day mortality after colorectal surgery. Limited data sharing between hospitals often restricts surgical AI development, and FL allows multicenter model training without transferring raw patient data. The study also assessed the effect of differential privacy (DP) on model performance.
Data from 2,959 patients undergoing elective colorectal surgery at three tertiary centers (2015–2021) were analyzed. Neural networks were trained in two settings: with centralized data aggregation and with distributed federated learning, in which models were trained locally at each center. Additional privacy protection was implemented using central and local differential privacy.
Results showed that federated learning performed similarly to centralized modeling, achieving comparable predictive accuracy (AUROC ~0.78 vs. 0.81). However, adding differential privacy reduced performance, with central DP causing moderate declines and local DP nearly eliminating predictive accuracy. Across models, the most influential predictors were patient age, blood parameters, and the Charlson Comorbidity Index.
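To illustrate why local differential privacy is so costly in this setting: each center's model update is clipped and perturbed with Gaussian noise before aggregation, and the noise required for strong guarantees can swamp the signal. A hedged sketch of this mechanism (the study's actual DP implementation and parameters are not detailed here; all values are invented):

```python
import numpy as np

rng = np.random.default_rng(42)

def dp_update(update, clip_norm=1.0, noise_multiplier=0.0):
    """Gaussian mechanism: clip the update's norm, then add calibrated noise."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Three hypothetical centers whose local training produced similar updates.
updates = [np.array([0.30, -0.20]) + rng.normal(0.0, 0.02, size=2) for _ in range(3)]

for sigma in (0.0, 0.1, 1.0):  # no DP, moderate noise, strong local DP
    noisy = [dp_update(u, noise_multiplier=sigma) for u in updates]
    aggregate = np.mean(noisy, axis=0)  # server-side federated averaging
    print(f"sigma={sigma}: aggregate={np.round(aggregate, 3)}")
```

As sigma grows, the aggregated update drifts away from the true consensus direction, mirroring the performance collapse under local DP described above.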
Overall, the study demonstrates that federated learning can enable effective multicenter surgical AI models while preserving data privacy, though strong privacy mechanisms like differential privacy may significantly compromise model performance.

"Privacy preserving federated learning for 90-day mortality prediction in colorectal surgery: a multicenter retrospective development and comparison study" is available in International Journal of Surgery 2025;111(12):9065-9074
Prototype of a Virtual Reality Simulator for Thyroidectomy: A Proof of Concept.
The anatomical complexity of the thyroid region presents significant challenges in surgical training, particularly regarding the identification and preservation of the recurrent laryngeal nerve and parathyroid glands. We present a prototype of a virtual reality simulator designed to support thyroidectomy training by enabling the immersive, interactive exploration of CT-derived, deformable anatomical models in a photorealistic operating room environment. Structures not detectable in CT, such as nerves and glands, were manually integrated. The simulator was evaluated qualitatively by three surgeons using a structured questionnaire. Feedback indicated high usability, visual realism, and potential for improving anatomical recognition skills. Limitations include the absence of instrument interaction, haptic feedback, and full procedural simulation. This prototype demonstrates feasibility and outlines a clear development roadmap toward a high-fidelity, scalable training platform for endocrine surgery.

The paper "Prototype of a Virtual Reality Simulator for Thyroidectomy: A Proof of Concept." by K. Eisentraeger, E.M. Dobrindt, M. Queisner, C. Remde, I.M. Sauer, J. Pratschke, M. Mogl, F. Butz, and C. Müller-Debus was published in Cureus Journal of Medical Science. 2025;17(9):e92724
New Book: Virtual participation, real involvement – Transformative technologies for a more inclusive society
Virtual reality (VR) and augmented reality (AR) are among the fastest-growing technologies of the 21st century. They also open up enormous opportunities for social integration: through cultural and educational offerings, through networked digital interaction spaces, or as a means of promoting citizen participation. The contributions in this volume present a wide range of application scenarios for VR and AR in the fields of education, health and public space. They demonstrate in a practical way how society, but also companies, can benefit from expanding their technological skills and taking diversity aspects into account. After all, genuine participation is only possible through a sustainable, transdisciplinary and citizen science approach.

Our book chapter ‘Human-Centred Design of Mixed Reality Applications in Medical Education – GreifbAR’ is now available in open access. Authors are Robert Luzsa, Moritz Queisner, Christopher Remde, Igor Sauer, Nadia Robertini and Susanne Mayr.

As part of the BMBF-funded project ‘Tangible Reality – Skilful Interaction of User Hands and Fingers with Real Tools in Mixed Reality Worlds’, we investigated how XR technology can be integrated into medical education. The chapter presents an interdisciplinary, XR-based training system for surgical knot tying. It describes key design principles and experiences from development and evaluation. In addition, it proposes a model for the human-centred design of comparable training applications that can also support other projects.

Opening Exhibition | »Vessels. Infrastructures of Life«

We warmly invite you to the opening of »Vessels. Infrastructures of Life« at the Berlin Museum of Medical History at the Charité (bmm), a group exhibition curated by Igor M. Sauer and Navena Widulin with contributions by Assal Daneshgar, Emile de Visscher, Frédéric Eyl, Karl Hillebrandt, Eriselda Keshi, Dietrich Polenz, Moritz Queisner, Iva Rešetar and Igor M. Sauer.

Vernissage
Wed, 4 June 2025, 7:00 - 10:00 pm

Exhibition
5 June – 12 October 2025
Tue, Thu, Fri, Sun: 10:00 am – 5:00 pm
Wed, Sat: 10:00 am – 7:00 pm
Closed on Mondays

Venue
Berliner Medizinhistorisches Museum der Charité (bmm)
Virchowweg 17
10117 Berlin

What do plants, animals, humans and cities have in common? They all have vascular systems and, therefore, an infrastructure without which they would not be able to survive.

In the human body, arteries and veins move the blood together with the heart. Plants have a finely branched vascular system for the transport of water and nutrients. And cities utilize an underground network of pipelines that supply clean water and remove wastewater. The temporary exhibition, co-curated by Igor Sauer and Navena Widulin, shows how these vessels function and how they can be visualized, used and reproduced.

What can medicine learn from these natural and technical supply systems? What role does the interdisciplinary view – between biology, design, materials research and medical technology – play for regenerative medicine? And what innovative approaches can be derived from this for the development of artificial and bioartificial donor organs?

»Vessels. Infrastructures of Life« provides insights into the work of designers, material scientists and surgical researchers who are working together on solutions for the future – inspired by nature, technology and the logic of living systems. From exhibits on transplantation and regenerative medicine to examples of architecture and design, the exhibition offers exciting insights into these often-hidden structures. The objects on display correspond with those in Rudolf Virchow’s historical collection of specimens. A particular focus lies on the connections between natural vessels and human-made networks, such as the regulation of temperature in buildings or the water and wastewater supply in cities.

The temporary exhibition »Vessels. Infrastructures of Life« is a collaboration between the Berlin Museum of Medical History, the Experimental Surgery at the Charité, and the Cluster of Excellence »Matters of Activity« of Humboldt-Universität zu Berlin as part of the _matter Festival 2025.
90-Day Mortality Prediction in Elective Visceral Surgery Using Machine Learning
Our paper, "90-Day Mortality Prediction in Elective Visceral Surgery Using Machine Learning: A Retrospective Multicenter Development, Validation, and Comparison Study" has been published online ahead of print in the International Journal of Surgery.
Authors are C. Riepe, R. van de Water, A. Winter, B. Pfitzner, L. Faraj, R. Ahlborn, M. Schulze, D. Zuluaga, C. Schineis, K. Beyer, J. Pratschke, B. Arnrich, I.M. Sauer, and M.M. Maurer

Machine Learning (ML) is increasingly being adopted in biomedical research; however, its potential for outcome prediction in visceral surgery remains uncertain. This study compares ML-based preoperative 90-day mortality (90DM) prediction using an aggregated multi-organ approach against conventional scoring systems and individual organ models.

This retrospective cohort study enrolled patients undergoing major elective visceral surgery between 2014 and 2022 across two tertiary centers. Multiple ML models for preoperative 90DM prediction were trained, externally validated and benchmarked against the American Society of Anesthesiologists (ASA) score and revised Charlson Comorbidity Index (rCCI). Areas under the receiver operating characteristic (AUROC) and precision recall curves (AUPRC) including standard deviations were calculated. Additionally, individual models for esophageal, gastric, intestinal, liver, and pancreatic surgery were developed and compared to an aggregated approach.

A total of 7,711 cases encompassing 78 features were included. Overall 90DM was 4% (n = 309). An XGBoost classifier demonstrated the best performance and high robustness following external validation (AUROC: 0.86 [0.01]; AUPRC: 0.2 [0.04]). All models outperformed the ASA score (AUROC: 0.72; AUPRC: 0.08) and rCCI (AUROC: 0.81; AUPRC: 0.11). rCCI, patient age and C-reactive protein emerged as the most decisive model weights. Models for gastric (AUROC: 0.88 [0.13]; AUPRC: 0.24 [0.26]) and intestinal surgery (AUROC: 0.87 [0.05]; AUPRC: 0.17 [0.09]) revealed the highest organ-specific performances, while pancreatic surgery yielded the lowest results (AUROC: 0.66 [0.08]; AUPRC: 0.22 [0.12]). A combined multi-organ approach (AUROC: 0.84 [0.04]; AUPRC: 0.21 [0.06]) demonstrated superiority over the weighted average across all organ-specific models (AUROC: 0.82 [0.07]; AUPRC: 0.2 [0.13]).
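For readers less familiar with the reported metrics: AUROC is the probability that a randomly chosen positive case (here, a 90-day death) is ranked above a randomly chosen negative one, which is why it is well suited to benchmarking risk scores; AUPRC is the analogous area under the precision-recall curve and is more informative for rare outcomes such as the 4% mortality here. A minimal pure-Python AUROC sketch on toy data:

```python
def auroc(y_true, scores):
    """AUROC: probability a random positive outranks a random negative (ties count 0.5)."""
    pos = [s for s, y in zip(scores, y_true) if y == 1]
    neg = [s for s, y in zip(scores, y_true) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two deaths (label 1) among five patients, scored by a model.
y = [0, 0, 1, 0, 1]
s = [0.1, 0.4, 0.35, 0.8, 0.7]
print(auroc(y, s))  # → 0.5, i.e. no better than chance on this toy data
```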

ML offers robust preoperative risk stratification for 90DM in elective visceral surgery. Leveraging training across multi-organ cohorts may improve accuracy and robustness compared to organ-specific models. Prospective studies are needed to confirm the potential of ML in surgical outcome prediction.
Sparse camera volumetric video applications
The paper "Sparse camera volumetric video applications. A comparison of visual fidelity, user experience, and adaptability" is available open access in Frontiers in Signal Processing.
Authors are Christopher Remde, Igor M. Sauer, and Moritz Queisner.

Volumetric video in commercial studios is predominantly produced using a multi-view stereo process that relies on dozens of cameras to capture a scene. Due to the hardware requirements and associated processing costs, this workflow is resource-intensive and expensive, making it unattainable for creators and researchers with smaller budgets. Low-cost volumetric video systems using RGBD cameras offer an affordable alternative. As these small, mobile systems are a relatively new technology, the available software applications vary in terms of workflow and image quality. In this paper we provide an overview of the technical capabilities of sparse camera volumetric video capture applications and assess their visual fidelity and workflow.

We selected volumetric video applications that are publicly available, support capture with multiple Microsoft Azure Kinect cameras and run on consumer-grade computer hardware. We compared the features, usability, and workflow of each application and benchmarked them in five different scenarios. Based on the benchmark footage, we analyzed spatial calibration accuracy, artifact occurrence and conducted a subjective perception study with 19 participants from a game design study program to assess the visual fidelity of the captures.

We evaluated three applications, Depthkit Studio, LiveScan3D and VolumetricCapture. We found Depthkit Studio to provide the best experience for novel users, while LiveScan3D and VolumetricCapture require advanced technical knowledge to be operated. The footage captured by Depthkit Studio showed the fewest artifacts by a large margin, followed by LiveScan3D and VolumetricCapture. These findings were confirmed by the participants, who preferred Depthkit Studio over LiveScan3D and VolumetricCapture. Based on the results, we recommend Depthkit Studio for the highest-fidelity captures. LiveScan3D produces footage of only acceptable fidelity but is the only candidate available as open-source software. We therefore recommend it as a platform for research and experimentation. Due to its lower fidelity and high setup complexity, we recommend VolumetricCapture only for specific use cases where its ability to handle a high number of sensors in a large capture volume is required.
Deutschlandfunk: AI in the operating theater
How artificial intelligence supports surgeons: whether planning surgical procedures, monitoring patients or predicting complications, research has already produced many useful applications for AI in the operating theater. In hospitals, the technology is still the exception. This is likely to change soon.
A Deutschlandfunk radio show/podcast by Carina Schroeder and Friederike Walch-Nasseri reports on the work in our Digital Surgery Lab (in German).
 
Artificial intelligence is revolutionizing our everyday lives. It translates texts, filters news, analyzes X-ray images and decides who gets a job. In the “KI verstehen (Understanding AI)” podcast, Deutschlandfunk provides answers to questions about dealing with AI every week.

Surgical planning in virtual reality: a systematic review
We just published a review on surgical planning in VR in the Journal of Medical Imaging. In this systematic review we look into how virtual reality (VR) is transforming surgical planning. With VR, physicians can assess patient-specific image data in 3D, enhancing surgical decision-making and spatial localization of pathologies. We found that the benefits of VR are becoming increasingly evident. However, its application in surgical planning remains experimental, with a need for refined study designs, improved technical reporting, and enhanced VR software usability for effective clinical implementation. Authors of "Surgical planning in virtual reality: a systematic review" are Prof. Dr. Moritz Queisner and Karl Eisenträger.

Virtual reality (VR) technology has emerged as a promising tool for physicians, offering the ability to assess anatomical data in 3D with visuospatial interaction qualities. This systematic review aims to provide an up-to-date overview of the latest research on VR in the field of surgical planning.
A comprehensive literature search was conducted based on the preferred reporting items for systematic reviews and meta-analyses covering the period from April 1, 2021 to May 10, 2023. The review summarizes the current state of research in this field, identifying key findings, technologies, study designs, methods, and potential directions for future research.

Results show that the application of VR for surgical planning is still in an experimental stage but is gradually advancing toward clinical use. The diverse study designs, methodologies, and varying reporting hinder a comprehensive analysis. Some findings lack statistical evidence and rely on subjective assumptions. To strengthen evaluation, future research should focus on refining study designs, improving technical reporting, defining visual and technical proficiency requirements, and enhancing VR software usability and design. Addressing these areas could pave the way for an effective implementation of VR in clinical settings.
Spatial computing in the OR

We tested the Apple Vision Pro in the operating theatre, and it cut an excellent figure: great image quality even in challenging lighting situations and stable interaction with the device – even though the limited peripheral vision and awareness inherent to video-based devices remain a considerable downside in surgery.

We are looking forward to our first software solutions for improved hand-eye coordination in visceral surgery for this device too!
AI-based intra- and postoperative measurement from stereoimages
The publication "Redefining the Laparoscopic Spatial Sense: AI-based Intra- and Postoperative Measurement from Stereoimages“ has been accepted for the 38th AAAI Conference on Artificial Intelligence and is available via https://doi.org/10.48550/arXiv.2311.09744. The publication is the result of a fruitful collaboration between Karlsruhe Institute of Technology (KIT), Fraunhofer FIT, University of Bayreuth, and Charité – Universitätsmedizin Berlin. Authors are Leopold Müller, Patrick Hemmer, Moritz Queisner, Igor Sauer, Simeon Allmendinger, Johannes Jakubik, Michael Vössing, and Niklas Kühl.

A significant challenge in image-guided surgery is the accurate measurement of relevant structures such as vessel segments, resection margins, or bowel lengths. While this task is an essential component of many surgeries, it involves substantial human effort and is prone to inaccuracies. In this paper, we develop a novel human-AI-based method for laparoscopic measurements utilizing stereo vision that has been guided by practicing surgeons. Based on a holistic qualitative requirements analysis, this work proposes a comprehensive measurement method, which comprises state-of-the-art machine learning architectures, such as RAFT-Stereo and YOLOv8. The developed method is assessed in various realistic experimental evaluation environments. Our results outline the potential of our method, achieving high accuracies in distance measurements with errors below 1 mm. Furthermore, on-surface measurements demonstrate robustness when applied in challenging environments with textureless regions. Overall, by addressing the inherent challenges of image-guided surgery, we lay the foundation for a more robust and accurate solution for intra- and postoperative measurements, enabling more precise, safe, and efficient surgical procedures.
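To give a sense of how stereo-based measurements of this kind work: once a stereo network such as RAFT-Stereo has estimated per-pixel disparity, depth follows from triangulation (Z = f·B/d), and distances can be taken between back-projected points. A simplified sketch with invented camera parameters (the paper's actual pipeline is considerably more involved):

```python
import math

def pixel_to_3d(u, v, disparity, f=1000.0, baseline=0.05, cx=640.0, cy=360.0):
    """Back-project a pixel with stereo disparity into camera coordinates (meters)."""
    z = f * baseline / disparity  # depth from triangulation: Z = f * B / d
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return (x, y, z)

def distance_mm(p1, p2):
    """Straight-line distance between two 3D points, in millimeters."""
    return 1000.0 * math.dist(p1, p2)

# Two matched pixels on a structure, both about 1 m from the camera.
a = pixel_to_3d(600, 350, disparity=50.0)
b = pixel_to_3d(700, 360, disparity=50.0)
print(round(distance_mm(a, b), 1))  # → 100.5
```

Real on-surface measurements additionally integrate distances along the reconstructed surface rather than taking straight lines, which is where robustness in textureless regions matters.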

Priv.-Doz. Dr. med. Simon Moosburner
Today Simon Moosburner gave his inaugural lecture on "Liver Transplantation in Germany - Opportunities and Solutions for the Future". He is now – at the age of 28 (!) – a private lecturer (Privatdozent) at the Charité – Universitätsmedizin Berlin and habilitated in the field of "Experimental Surgery".

He is being honored for his achievements in the field of extracorporeal organ perfusion and organ transplantation. His postdoctoral thesis is entitled "Challenges and solutions in adults and children after liver transplantation".

Congratulations!

Dr. Zeynep Akbal
We are delighted to welcome Dr. Zeynep Akbal as a new member of the team!
Before joining the Digital Surgery Lab as a post-doctoral researcher, she studied communication sciences, media sciences, and philosophy, and developed an interdisciplinary method around virtual reality (VR) technology. She completed her doctorate in philosophy at Universität Potsdam. Her dissertation was recently published as a monograph titled "Lived-Body Experiences in Virtual Reality. A Phenomenology of the Virtual Body."
Her research focuses on the intersection of philosophy of perception, cognitive sciences and VR. In her recent research project “Tactile Stimulation in VR” at Max Planck Institute for Human Cognitive and Brain Sciences, she focused on the behavioral consequences of haptic feedback in a VR task.

Welcome to the team!
science x media Tandem Program: "From Slices to Spaces"
Prof. Dr. Moritz Queisner and Frédéric Eyl (Designer and Managing Director of TheGreenEyl) successfully applied to the Stiftung Charité for funding as a "science x media tandem".
The science x media tandems are the first programme in the new funding priority "Open Life Science". With this funding priority, the Charité Foundation is working to make the life sciences in Berlin more comprehensible and accessible to a broader public and to strengthen the trustworthiness of medical professionals.

Under the title "From Slices to Spaces", the tandem of Moritz Queisner and Frédéric Eyl is implementing a science parcours in which spatially complex research data from surgery and biomedicine will be made multisensorially accessible to a broad audience through new visualization techniques. Building on research work on new imaging techniques by Moritz Queisner, they employ Extended Reality techniques. Due to their unique ability to link digital objects with the real environment of the viewers, the 4D images they generate are particularly suited for representing and conveying spatial information.

This is where the tandem's project comes in: 4D images are not only interesting for researchers seeking to understand complex research data but can also provide laypeople with a more accessible insight into research data and processes. Frédéric Eyl's media expertise will be used to make the specific visual knowledge from research comprehensible and experiential for non-experts. The science parcours is intended to be integrated as a digital extension into the architecture of the new research building, "Der Simulierte Mensch", located on the premises of Charité. The parcours will include the facade, the inter-floor airspace, and the central glass surfaces within the building as its stations. By enabling users to explore 4D research data within the architecture and investigate it using their own smartphones in an AR application, concrete practices and deployment locations of new image-based technologies become experiential and comprehensible. This project not only enhances the perception of Charité and the scientific location of Berlin but also opens up places of knowledge creation to the public, making practices and techniques of the life sciences more visible.


New DFG project "4D Imaging"
The DFG Schwerpunktprogramm „Das Digitale Bild“ (SPP 2172) funds the new project “4D Imaging: From Image Theory to Imaging Practice” (2023-2026). Principal investigators are Prof. Dr. Kathrin Friedrich (Universität Bonn) and Prof. Dr. Moritz Queisner.

The term 4D imaging refers to a new form of digital visuality in which image, action and space are inextricably interwoven. 4D technologies capture, process and transmit information about physical space and make it computable in real time. Changes due to movements and actions become calculable in real time, making 4D images particularly important in aesthetic and operational contexts where they reconceptualize various forms of human-computer interaction. The 4D Imaging project responds to the growing need in medicine to understand, use, and design these complex imaging techniques. It transfers critical reflexive knowledge from research into clinical practices to enable surgeons to use and apply 4D Imaging techniques. Especially in surgical planning, 4D Imaging techniques may improve the understanding and accessibility of spatially complex anatomical structures. To this end, the project is developing approaches to how 4D imaging can complement and transform established topographic ("2D") imaging practices.

Work with us | PhD position

We are hiring: 3-year #PhD position @Charité – Universitätsmedizin Berlin.
  • Join our interdisciplinary team for a PhD on new #imaging technologies at the intersection of digital health, surgery and biomedicine
  • Explore new ways to understand and/or visualize anatomical structures in #4D using extended reality #XR #digitaltransformation
  • Connect theory and practice in an interdisciplinary research group
  • Open call: open to all disciplines! Yes, that’s right – design, computer science, computer visualistics, digital health, psychology, media studies, workplace studies, game design…
  • What counts is a convincing idea for your doctoral project in the field of "4D imaging“

Sounds interesting? Apply now or reach out to Moritz Queisner (moritz.queisner@charite.de) if you have any questions.

More information:
German: https://karriere.charite.de/stellenangebote/detail/wissenschaftliche-mitarbeiterin-wissenschaftlicher-mitarbeiter-dwm-technologietransfer-chirurgie-dm27222a
English: https://karriere.charite.de/stellenangebote/detail/scientific-researcher-phd-position-dfm-dm27222b

© 2025 Prof. Dr. Igor M. Sauer | Charité - Universitätsmedizin Berlin | Disclaimer
