This work introduces specialized diffusion-based generative models that capture the spatiotemporal dynamics of fine-grained robotic surgical sub-stitch actions through supervised learning on annotated laparoscopic surgery footage. The proposed models form a foundation for data-driven world models capable of simulating the biomechanical interactions and procedural dynamics of surgical suturing with high temporal fidelity. We annotate a dataset of ∼2K clips extracted from simulation videos, categorizing surgical actions into fine-grained sub-stitch classes that include ideal and non-ideal executions of needle positioning, targeting, driving, and withdrawal. We fine-tune two state-of-the-art video diffusion models, LTX-Video and HunyuanVideo, to generate high-fidelity surgical action sequences at ≥768×512 resolution and ≥49 frames. For training, we explore both Low-Rank Adaptation (LoRA) and full-model fine-tuning. Our experimental results demonstrate that these world models can effectively capture the dynamics of suturing, potentially enabling improved training simulators, surgical skill assessment tools, and autonomous surgical systems. The models also differentiate between ideal and non-ideal technique execution, providing a foundation for building surgical training and evaluation systems. We release our models for testing and as a foundation for future research.
| Model | Training Resolution (W×H×T) | Training Time (Hours) |
|---|---|---|
| LTX-Video (t2v) | 768×512×49 | 15 |
| HunyuanVideo (t2v) | 768×512×49 | 71 |
| LTX-Video (i2v) | 1024×576×49, 960×444×65, 512×288×121 | 35 |
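As a rough illustration of the LoRA route, the sketch below wraps a stand-in attention block with low-rank adapters via the PEFT library. The stand-in module, the target module names, and the rank/alpha values are assumptions chosen for illustration; they are not the paper's actual backbone or hyperparameters, where the adapters would instead be attached to the attention projections of the LTX-Video or HunyuanVideo transformer.

```python
# Minimal sketch: applying LoRA adapters to a (stand-in) attention block with PEFT.
# In practice the wrapped module would be the video diffusion transformer backbone
# loaded from its checkpoint, and target_modules must match its projection names.
import torch
import torch.nn as nn
from peft import LoraConfig, get_peft_model

class ToyAttentionBlock(nn.Module):
    """Stand-in for one attention block of a video diffusion transformer."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x):
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        attn = torch.softmax(q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5, dim=-1)
        return self.to_out(attn @ v)

model = ToyAttentionBlock()

# Low-rank adapters on the attention projections only; r and lora_alpha are
# illustrative values, not the configuration used for the released models.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["to_q", "to_k", "to_v", "to_out"],
    lora_dropout=0.0,
)
lora_model = get_peft_model(model, lora_config)
lora_model.print_trainable_parameters()  # only the LoRA matrices remain trainable
```

Full-model fine-tuning, by contrast, updates all backbone parameters directly; LoRA trains only the injected low-rank matrices, which substantially reduces the number of trainable parameters and the memory footprint during training.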
We manually annotated start and end times of sub-stitch actions in 102 training-session videos. Each sub-stitch was expert-annotated with a binary technical score (ideal or non-ideal) reflecting the operator's skill in performing that action. The final dataset comprises 1,836 video clips from railroad and backhand suturing exercises, annotated with the following detailed sub-stitch classifications: