On Tuesday, Microsoft Research Asia unveiled VASA-1, an AI model that can create a synchronized animated video of a person talking or singing from a single photo and an existing audio track. In the future, it could power virtual avatars that render locally and don’t require video feeds—or allow anyone with similar tools to take a photo of a person found online and make them appear to say whatever they want.
The hair is the giveaway for me, though I might not have noticed it if I hadn't been looking for something.
Also the teeth that keep expanding and shrinking. But if you're just casually watching, it's really hard to notice…