Introduction to 52 AIGC Video Generation Algorithm Models (Part 1): https://developer.aliyun.com/article/1480690
- Pix2Video: Video Editing using Image Diffusion
Institution: Adobe
Date: 2023.3.22
https://duyguceylan.github.io/pix2video.github.io/
- InstructVid2Vid: Controllable Video Editing with Natural Language Instructions
Institution: Zhejiang University
Date: 2023.5.21
- ControlVideo: Training-free Controllable Text-to-Video Generation
Institution: Huawei
Date: 2023.5.22
https://github.com/YBYBZhang/ControlVideo
- ControlVideo: Conditional Control for One-shot Text-driven Video Editing and Beyond
Institution: Tsinghua University
Date: 2023.11.28
https://github.com/thu-ml/controlvideo
- Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models
Date: 2023.12.6
https://controlavideo.github.io/
- StableVideo: Text-driven Consistency-aware Diffusion Video Editing
Institution: MSRA
Date: 2023.8.18
https://github.com/rese1f/StableVideo
- MagicEdit: High-Fidelity and Temporally Coherent Video Editing
Institution: ByteDance
Date: 2023.8.28
https://magic-edit.github.io/ (not open-sourced)
- Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-Image Diffusion Models
机构:KAIST
时间:2023.10.2
https://ground-a-video.github.io/
- FateZero: Fusing Attentions for Zero-shot Text-based Video Editing
Institution: Tencent AI Lab
Date: 2023.10.11
https://fate-zero-edit.github.io
- Motion-Conditioned Image Animation for Video Editing
Institution: Meta
Date: 2023.11.30
facebookresearch.github.io/MoCA (not open-sourced)
- VidEdit: Zero-shot and Spatially Aware Text-driven Video Editing
Institution: Sorbonne Université, Paris, France
Date: 2023.12.15
- Zero-Shot Video Editing Using Off-The-Shelf Image Diffusion Models
Date: 2024.1.4
https://github.com/baaivision/vid2vid-zero
▐ Human Animation
These methods mainly take human pose as a conditional input (typically combined with ControlNet or similar modules), using either a single image as a reference for the subject or a text prompt to generate one directly. Alibaba and ByteDance each have several representative papers in this area; two of ByteDance's works have released code, while Alibaba's code releases are still pending.
- Follow Your Pose
Institution: Tencent AI Lab
Date: 2023.4.3
https://follow-your-pose.github.io/
- DreamPose: Fashion Image-to-Video Synthesis via Stable Diffusion
Institutions: Google, NVIDIA
Date: 2023.5.4
https://grail.cs.washington.edu/projects/dreampose/
- DISCO: Disentangled Control for Realistic Human Dance Generation
Institution: Microsoft
Date: 2023.10.11
https://disco-dance.github.io
- MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model
Institution: ByteDance
Date: 2023.11.27
https://showlab.github.io/magicanimate/
- MagicDance
Institution: ByteDance
Date: 2023.11.18
https://boese0601.github.io/magicdance/
- Animate Anyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation
Institution: Alibaba
Date: 2023.12.7
https://humanaigc.github.io/animate-anyone/ (not open-sourced)
- DreaMoving: A Human Video Generation Framework based on Diffusion Model
Institution: Alibaba
Date: 2023.12.11
https://dreamoving.github.io/dreamoving (not open-sourced)
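The shared recipe behind the human-animation methods above (a ControlNet-style branch injecting per-frame pose features into the denoiser, plus a fixed reference-image embedding that keeps the subject's identity consistent across frames) can be sketched with a toy NumPy stand-in. Every function, shape, and coefficient here is hypothetical and only illustrates the data flow; real systems run large trained latent diffusion models, not this arithmetic.

```python
import numpy as np

# Toy illustration of pose-conditioned human animation (hypothetical
# stand-in; not any real model's implementation).
rng = np.random.default_rng(0)

def encode_pose(pose_map):
    """Toy 'ControlNet' branch: project a per-frame pose map to a residual.
    Real models run a trained convolutional encoder here."""
    return 0.1 * pose_map

def denoise_step(latent, pose_residual, ref_embedding, t):
    """One toy denoising step. The pose residual is *added* to the latent
    features, mirroring how ControlNet injects its outputs; the reference
    embedding steers the update (cross-attention in real models)."""
    update = -0.5 * latent + pose_residual + 0.05 * ref_embedding
    return latent + (1.0 / (t + 1)) * update

def animate(ref_embedding, pose_maps, steps=10):
    """Produce one latent per pose frame. Each frame is conditioned on the
    same reference embedding, so identity stays consistent across frames
    while the pose drives the motion."""
    frames = []
    for pose in pose_maps:
        latent = rng.standard_normal(pose.shape)   # start from pure noise
        residual = encode_pose(pose)               # fixed per-frame condition
        for t in range(steps):
            latent = denoise_step(latent, residual, ref_embedding, t)
        frames.append(latent)
    return frames

ref = rng.standard_normal((4, 4))                        # reference-image embedding
poses = [rng.standard_normal((4, 4)) for _ in range(3)]  # 3 pose frames
video = animate(ref, poses)
print(len(video), video[0].shape)
```

The key structural point the papers differ on is *where* the pose residual and reference features enter the network and how temporal layers tie frames together; the additive injection shown here is only the simplest possible analogue.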
Introduction to 52 AIGC Video Generation Algorithm Models (Part 2): https://developer.aliyun.com/article/1480687