Pull Request №451 RosettaCommons/RFdiffusion/main ← leonardomarino/RFdiffusion/fix/empty-cache-per-design
Merge: 9535f1938203a24937d7dadf0cb831d02cb5fc0e ← e8bd8d14357548acc46a3062f616423d77479c4e
feat: add inference.empty_cache_per_design flag to reduce CUDA allocator fragmentation
----------------
Merge commit message:
feat: call torch.cuda.empty_cache() per design when empty_cache_per_design=True
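A minimal sketch of how such a flag might gate a per-design `torch.cuda.empty_cache()` call in an inference loop. The helper names (`maybe_empty_cache`, `run_designs`) and the loop body are hypothetical illustrations, not the actual RFdiffusion code; only `torch.cuda.empty_cache()` and the flag name come from this PR. Releasing cached allocator blocks between designs trades a small reallocation cost for less fragmentation on long multi-design runs.

```python
def maybe_empty_cache(empty_cache_per_design: bool) -> bool:
    """Release the CUDA caching allocator's unused blocks between designs.

    Returns True only if the cache was actually released. Gated on the
    (hypothetical) inference.empty_cache_per_design config flag.
    """
    if not empty_cache_per_design:
        return False
    try:
        import torch  # imported lazily so the sketch runs without torch
    except ImportError:
        return False
    if torch.cuda.is_available():
        # Return cached blocks to the driver; helps when successive designs
        # have different tensor shapes and fragment the allocator.
        torch.cuda.empty_cache()
        return True
    return False


def run_designs(num_designs: int, empty_cache_per_design: bool = False):
    """Toy stand-in for a multi-design sampling loop (hypothetical)."""
    results = []
    for i in range(num_designs):
        # ... real code would run the diffusion sampler here ...
        results.append(f"design_{i}")
        maybe_empty_cache(empty_cache_per_design)
    return results
```

With the flag off (the presumed default), the loop behaves exactly as before; with it on, the cache is emptied once per completed design rather than once per run.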