Facebook’s DensePose technology lets anyone turn 2D images of people into 3D models


In early 2018, Facebook’s AI researchers unveiled a deep-learning system that can transform 2D images of people, in both photos and videos, into 3D mesh models of those bodies in motion. Last month, Facebook publicly shared the code for its “DensePose” technology, which could be used by Hollywood filmmakers and augmented-reality game developers—but maybe also by those seeking to build a surveillance state.

DensePose goes beyond basic object recognition. Besides detecting the humans in a picture, it estimates the positions of their torsos and limbs and builds 3D models of their bodies. Those models let the system re-create human movement from 2D video in real time. For example, it could produce videos showing models of several people kicking soccer balls, or of a single person riding a motorcycle.
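To make that concrete, here is a minimal Python sketch of how one might read such an output. It assumes a hypothetical H × W × 3 “IUV” array, carrying a per-pixel body-part index plus U, V coordinates on the body surface; that layout is an assumption for illustration, not necessarily the exact format of Facebook’s released code.

```python
import numpy as np

# Hypothetical DensePose-style output: an H x W x 3 "IUV" array in which
# channel 0 holds a body-part index (0 = background) and channels 1-2 hold
# U, V coordinates on the body surface, each in [0, 1]. This layout is
# assumed for illustration and may differ from Facebook's released format.

def pixels_on_part(iuv, part_index):
    """Collect (row, col, u, v) for every pixel mapped to one body part."""
    rows, cols = np.where(iuv[..., 0] == part_index)
    return np.stack([rows, cols, iuv[rows, cols, 1], iuv[rows, cols, 2]], axis=1)

# Tiny example: a 4 x 4 "image" whose central 2 x 2 patch is part 1.
iuv = np.zeros((4, 4, 3))
iuv[1:3, 1:3, 0] = 1                          # body-part index
iuv[1:3, 1:3, 1:] = np.random.rand(2, 2, 2)   # U, V surface coordinates

print(pixels_on_part(iuv, part_index=1))      # four pixels, each with its UV
```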

This work could prove useful for “graphics, augmented reality, or human-computer interaction, and could also be a stepping-stone towards general 3D-based object understanding,” according to the Facebook AI Research (FAIR) paper published in January 2018.

But there is a “troubling implication of this research” that could enable “real-time surveillance,” said Jack Clark, strategy and communications director at OpenAI, a nonprofit AI research company, in his popular newsletter, Import AI. Clark first discussed the implications of Facebook’s DensePose paper in the February issue of his newsletter, and followed up in June after Facebook released the DensePose code on the software development platform GitHub.

“The same system has wide utility within surveillance architectures, potentially letting operators analyze large groups of people to work out if their movements are problematic or not—for instance, such a system could be used to signal to another system if a certain combination of movements are automatically labelled as portending a protest or a riot,” Clark wrote in his newsletter.

As always, the deep-learning algorithms behind DensePose needed some help from humans in the beginning. Facebook researchers first enlisted human annotators to create a training data set by manually labeling points on 50,000 images of human bodies. To make the job easier for the annotators and to improve their accuracy, the researchers broke the labeling task down into body segments such as head, torso, limbs, hands, and feet. They also “unfolded” each body part to present multiple viewpoints, so the annotators did not have to manually rotate the image to get a better view.

Still, the annotators were asked to label only 100 to 150 points per image. To complete the training data, Facebook researchers used an algorithm to estimate the remaining correspondences between pixels in the 2D images and points on the 3D mesh models, as sketched below.
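The sketch below shows the shape of that sparse-to-dense step. It is a toy stand-in only: it fills in unlabeled pixels with nearest-neighbor interpolation, whereas the paper uses a learned model for this, and every name and array size here is invented for illustration.

```python
import numpy as np
from scipy.interpolate import griddata

# Toy sparse-to-dense illustration: annotators supply UV surface coordinates
# at roughly 100-150 scattered pixels, and the remaining pixels need estimated
# values. FAIR's actual system learns this fill-in step; plain nearest-neighbor
# interpolation here only shows the shape of the problem.

rng = np.random.default_rng(0)
h, w = 60, 40

pts = rng.integers(0, [h, w], size=(120, 2))   # ~120 labeled pixel locations
uv = rng.random((120, 2))                      # their annotated UV coordinates

grid_r, grid_c = np.mgrid[0:h, 0:w]            # every pixel we want filled in

dense_uv = np.stack(
    [griddata(pts, uv[:, k], (grid_r, grid_c), method="nearest")
     for k in range(2)],
    axis=-1,
)
print(dense_uv.shape)  # (60, 40, 2): a UV estimate at every pixel
```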

The result is a system that can perform the 2D-to-3D conversion at a rate of “20-26 frames per second for a 240 × 320 image or 4-5 frames per second for a 800 × 1100 image,” Facebook researchers wrote in their paper. In other words, at lower resolutions the system can create 3D models of the humans in a 2D video in real time.
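Those throughput figures imply per-frame latencies, which the quick calculation below compares with common video frame rates. Only the frames-per-second ranges come from the paper; the 24 and 30 fps reference rates are standard video conventions, not numbers from the paper.

```python
# Per-frame latency implied by the throughput quoted in the paper, compared
# with common video frame rates (24/30 fps are standard video conventions).

reported_fps = {"240x320": (20, 26), "800x1100": (4, 5)}
video_rates = [24, 30]

for size, (lo, hi) in reported_fps.items():
    print(f"{size}: {1000 / hi:.0f}-{1000 / lo:.0f} ms per frame; "
          f"keeps up with: {[r for r in video_rates if hi >= r] or 'none'}")
```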

Facebook’s researchers do not specifically mention surveillance among the many possible applications of DensePose they list in their paper. But because Facebook has released the technology publicly, anyone who desired could adapt DensePose for surveillance or law enforcement.

In fact, other research groups have been working on similar pose-estimation systems for security applications: a group of U.K. and Indian researchers has been developing a drone-mounted system aimed at detecting violence within crowds. And there are clearly law enforcement agencies and governments around the world interested in harnessing such technology, for good or for ill.

Clark described his hope of seeing the FAIR group—and AI researchers in general—publicly discuss the implications of their work. He wondered whether Facebook’s researchers had considered the surveillance possibility, and whether the company has an internal process for weighing the risks of publicly releasing such technology. In the case of DensePose, it’s a question that only Facebook can answer. The company did not respond to a request for comment.

“As a community we—including organizations like OpenAI—need to be better about dealing publicly with the information hazards of releasing increasingly capable systems, lest we enable things in the world that we’d rather not be responsible for,” Clark said.