Towards Suturing World Models:
Learning Predictive Models for Robotic Surgical Tasks


Workflow of the proposed approach for suturing world models.

We prompt latent video diffusion models with text and, optionally, visual input. The models are trained on expert-annotated ideal and non-ideal demonstrations, allowing them to generate either quality class of execution.

Abstract

This work introduces specialized diffusion-based generative models that capture the spatiotemporal dynamics of fine-grained robotic surgical sub-stitch actions through supervised learning on annotated laparoscopic surgery footage. The proposed models form a foundation for data-driven world models capable of simulating the biomechanical interactions and procedural dynamics of surgical suturing with high temporal fidelity. Annotating a dataset of ∼2K clips extracted from simulation videos, we categorize surgical actions into fine-grained sub-stitch classes, including ideal and non-ideal executions of needle positioning, targeting, driving, and withdrawal. We fine-tune two state-of-the-art video diffusion models, LTX-Video and HunyuanVideo, to generate high-fidelity surgical action sequences at ≥768×512 resolution and ≥49 frames. For training our models, we explore both Low-Rank Adaptation (LoRA) and full-model fine-tuning approaches. Our experimental results demonstrate that these world models can effectively capture the dynamics of suturing, potentially enabling improved training simulators, surgical skill assessment tools, and autonomous surgical systems. The models also display the capability to differentiate between ideal and non-ideal technique execution, providing a foundation for building surgical training and evaluation systems. We release our models for testing and as a foundation for future research.


Sample outputs of different models compared against a real-world needle driving clip from a backhand suturing task. (a) Real-world sample, (b) HunyuanVideo, (c) LTX-Video LoRA, (d) LTX-Video full fine-tuning.

Model Architecture & Training

We fine-tuned two state-of-the-art open-source video diffusion models: LTX-Video (2B parameters) and HunyuanVideo (13B parameters). We explored both full-parameter fine-tuning and low-rank adaptation (LoRA) with rank = 256. All models operate at ≥768×512 spatial resolution with at least 49 frames per video, capturing the temporal dynamics of surgical actions with sufficient spatial detail to model complete sub-stitch actions.

Model | Training Resolution (W×H×T) | Training Time (Hours)
LTX-Video (t2v) | 768×512×49 | 15
HunyuanVideo (t2v) | 768×512×49 | 71
LTX-Video (i2v) | 1024×576×49, 960×444×65, 512×288×121 | 35
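
As a rough illustration of the LoRA variant, the sketch below wraps the LTX-Video transformer with rank-256 adapters using the Hugging Face diffusers and peft libraries. The targeted attention projection names and the hyperparameters other than the rank are assumptions for illustration, not our exact training configuration.

```python
import torch
from diffusers import LTXPipeline
from peft import LoraConfig

# Load the base LTX-Video text-to-video pipeline in bfloat16.
pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
)

# Rank-256 LoRA configuration; the targeted attention projections are an
# illustrative assumption, not our exact module list.
lora_config = LoraConfig(
    r=256,
    lora_alpha=256,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)

# Freeze the base transformer and inject trainable low-rank adapters.
pipe.transformer.requires_grad_(False)
pipe.transformer.add_adapter(lora_config)

trainable = sum(p.numel() for p in pipe.transformer.parameters() if p.requires_grad)
print(f"Trainable LoRA parameters: {trainable:,}")
```

With this setup only the adapter weights are optimized, which is what makes the 2B-parameter LTX-Video model practical to fine-tune at the resolutions listed above; full-parameter fine-tuning instead leaves all transformer weights trainable.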

Dataset

We manually annotated the start and end times of sub-stitch actions in 102 training-session videos. Each sub-stitch was expert-annotated with a binary technical score (ideal or non-ideal), reflecting the operator's skill while performing the sub-stitch action. The final dataset comprises 1,836 video clips from railroad and backhand suturing exercises, annotated with detailed sub-stitch classifications (a sketch of how these annotations map to text prompts follows the list below):

  • Needle Positioning (ideal/non-ideal): Grasping and orienting the needle appropriately
  • Needle Targeting (ideal/non-ideal): Approaching the tissue at the correct angle and position
  • Needle Driving (ideal/non-ideal): Passing the needle through tissue with proper wrist rotation
  • Needle Withdrawal (ideal/non-ideal): Extracting the needle along its curved trajectory
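
The sketch below illustrates one way these annotations can be turned into text prompts for a fine-tuned text-to-video model, assuming the diffusers LTX-Video pipeline at the 768×512×49 training resolution from the table above. The caption template, LoRA weight path, and output file name are hypothetical, not our released artifacts.

```python
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

# Hypothetical caption template combining sub-stitch class, technical score,
# and exercise type from the annotations.
def build_prompt(sub_stitch: str, ideal: bool, exercise: str) -> str:
    quality = "ideal" if ideal else "non-ideal"
    return f"{quality} {sub_stitch} during a {exercise} suturing exercise"

pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("path/to/suturing-lora")  # hypothetical fine-tuned adapter path

prompt = build_prompt("needle driving", ideal=True, exercise="backhand")
video = pipe(
    prompt=prompt,
    width=768, height=512, num_frames=49,  # matches the t2v training resolution
).frames[0]
export_to_video(video, "needle_driving_ideal.mp4", fps=24)
```

Swapping ideal=True for ideal=False in the prompt is how the model would be asked to render a non-ideal execution of the same sub-stitch class.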

Research Team

Mehmet Kerem Turkcan, Mattia Ballo, Filippo Filicori, Zoran Kostic
Columbia University & Northwell Health, Lenox Hill Hospital