Researchers at UC Berkeley have found a way to copy the movements of a subject’s body in one video and then generate a new video of a completely different person more or less performing those actions. (For more details, see their paper, humorously titled “Everybody Dance Now.”)


When copying dance moves, a person’s arms, legs, head, and torso can move in completely different ways than they did in the sample footage that’s used to train the artificial intelligence. To make the motion transfer possible, the AI generates simple stick figure representations of the movements of subjects in both the source and target clips.
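The stick figure is essentially a set of detected joint positions connected by a fixed skeleton. Here is a minimal sketch of that intermediate representation; the joint names, coordinates, and skeleton used below are illustrative stand-ins, not the researchers' actual pose format:

```python
# 2D joint positions (x, y) as a pose detector might output them.
# These values are made up for illustration.
pose = {
    "head":   (0.50, 0.10),
    "neck":   (0.50, 0.20),
    "l_hand": (0.30, 0.45),
    "r_hand": (0.70, 0.45),
    "hip":    (0.50, 0.55),
    "l_foot": (0.40, 0.95),
    "r_foot": (0.60, 0.95),
}

# The "stick figure" is just these joints connected by a fixed skeleton.
skeleton = [
    ("head", "neck"), ("neck", "l_hand"), ("neck", "r_hand"),
    ("neck", "hip"), ("hip", "l_foot"), ("hip", "r_foot"),
]

def stick_figure(pose, skeleton):
    """Return the line segments that make up the stick-figure drawing."""
    return [(pose[a], pose[b]) for a, b in skeleton]

segments = stick_figure(pose, skeleton)
print(len(segments))  # one line segment per bone
```

Because the same skeleton is extracted from both videos, the stick figure acts as a body-agnostic bridge: it keeps the motion while discarding the appearance of the person who performed it.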


The changes in motion needed to make one person move or dance like another are calculated, and then that data is used to generate new frames of video featuring someone appearing to tear up the dance floor, even if they have two left feet.
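One concrete adjustment this requires is rescaling the pose to the target's body: the source dancer may be taller, shorter, or standing at a different distance from the camera. The sketch below shows one simple way to do that, linearly remapping vertical keypoint coordinates from the source figure's on-screen extent to the target's; the function and the pixel values are assumptions for illustration, not the paper's exact normalization:

```python
def normalize(points, src_top, src_bottom, tgt_top, tgt_bottom):
    """Rescale vertical coordinates so a source stick figure spans the
    same on-screen extent as the target figure.

    points: list of (x, y) keypoints in pixels.
    src_top/src_bottom: vertical extent of the source figure.
    tgt_top/tgt_bottom: vertical extent of the target figure.
    """
    scale = (tgt_bottom - tgt_top) / (src_bottom - src_top)
    return [(x, tgt_top + (y - src_top) * scale) for x, y in points]

# A source figure spanning rows 100-500 mapped onto a target
# that spans rows 200-400 (i.e. a smaller person in frame).
adjusted = normalize([(300, 100), (300, 500)], 100, 500, 200, 400)
print(adjusted)  # head and feet now land at the target's extent
```

The adjusted stick figures are then fed to the trained generator, which renders the target person's appearance on top of each pose, frame by frame.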