【The Krusty Krab is open for business today】Paper Study Notes 10-01

First day of the National Day holiday! The weather is lovely and warm~ I decided to head to the Watsons outside the Southeast Gate at noon to see if they have any promotions~ and soak up the cheerful holiday atmosphere on the edge of Wudaokou~

【face】Nonlinear 3D Face Morphable Model[paper]

  • Similar paper: Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network[paper]

Abstract

As a classic statistical model of 3D facial shape and texture, 3D Morphable Model (3DMM) is widely used in facial analysis, e.g., model fitting, image synthesis. Conventional 3DMM is learned from a set of well-controlled 2D face images with associated 3D face scans, and represented by two sets of PCA basis functions. Due to the type and amount of training data, as well as the linear bases, the representation power of 3DMM can be limited. To address these problems, this paper proposes an innovative framework to learn a nonlinear 3DMM model from a large set of unconstrained face images, without collecting 3D face scans. Specifically, given a face image as input, a network encoder estimates the projection, shape and texture parameters. Two decoders serve as the nonlinear 3DMM to map from the shape and texture parameters to the 3D shape and texture, respectively. With the projection parameter, 3D shape, and texture, a novel analytically-differentiable rendering layer is designed to reconstruct the original input face. The entire network is end-to-end trainable with only weak supervision. We demonstrate the superior representation power of our nonlinear 3DMM over its linear counterpart, and its contribution to face alignment and 3D reconstruction.

 Introduction (part)

3D Morphable Model (3DMM) is a statistical model of 3D facial shape and texture in a space where there are explicit correspondences. The morphable model framework provides two key benefits: first, a point-to-point correspondence between the reconstruction and all other models, enabling morphing, and second, modeling underlying transformations between types of faces (male to female, neutral to smile, etc.). 3DMM is learnt through supervision by performing dimension reduction, normally Principal Component Analysis (PCA), on a training set of face images/scans. To model highly variable 3D face shapes, a large amount of high-quality 3D face scans is required. Hence, it is fragile to large variances in the face identity. Therefore, such a model is only learnt to represent the facial texture in similar conditions, rather than in-the-wild environments.
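For reference, the conventional linear 3DMM described above writes shape and texture as a mean plus a PCA basis multiplied by a coefficient vector, which is exactly one fully connected layer with no non-linearity. A minimal NumPy sketch (all names, sizes, and the random bases are placeholders of mine, not a real trained model):

```python
# A minimal NumPy sketch of the conventional linear 3DMM described above.
# All array names and sizes are illustrative placeholders, not the paper's data.
import numpy as np

n_vertices = 5000          # number of 3D mesh vertices (placeholder)
n_shape, n_tex = 199, 199  # number of PCA coefficients (placeholder)

# PCA model learned offline from 3D scans: mean + orthogonal bases.
S_mean = np.zeros(3 * n_vertices)               # mean shape, flattened (x, y, z per vertex)
T_mean = np.zeros(3 * n_vertices)               # mean texture (r, g, b per vertex)
A = np.random.randn(3 * n_vertices, n_shape)    # shape PCA basis (stand-in)
B = np.random.randn(3 * n_vertices, n_tex)      # texture PCA basis (stand-in)

def linear_3dmm(alpha, beta):
    """Linear 3DMM: shape/texture are affine in the coefficients,
    i.e. a single fully-connected layer with no non-linearity."""
    S = S_mean + A @ alpha   # 3D shape for shape coefficients alpha
    T = T_mean + B @ beta    # texture for texture coefficients beta
    return S, T

S, T = linear_3dmm(np.zeros(n_shape), np.zeros(n_tex))  # the mean face
```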

Can we learn a nonlinear 3D Morphable Model of face shape and texture from a set of unconstrained 2D face images, without collecting 3D face scans, and if so, how? As shown in Fig. 1, starting with the observation that the linear 3DMM formulation is equivalent to a single-layer network, using a deep network architecture naturally increases the model capacity. Hence, we utilize two network decoders, instead of two PCA spaces, as the shape and texture model components, respectively. With careful consideration of each component, we design different networks for shape and texture: a multi-layer perceptron (MLP) for shape and a convolutional neural network (CNN) for texture. Each decoder takes a shape or texture representation as input and outputs the dense 3D face or a face texture. These two decoders are essentially the nonlinear 3DMM.

Further, we learn the fitting algorithm for our nonlinear 3DMM, which is formulated as a CNN encoder. The encoder takes a 2D face image as input and generates the shape and texture parameters, from which the two decoders estimate the 3D face and texture. The 3D face and texture would perfectly reconstruct the input face if the fitting algorithm and the 3DMM are well learnt. Therefore, we design a differentiable rendering layer to generate a reconstructed face by fusing the 3D face, the texture, and the camera projection parameters estimated by the encoder. Finally, an end-to-end learning scheme is constructed in which the encoder and the two decoders are learnt jointly to minimize the difference between the reconstructed face and the input face. Jointly learning the 3DMM and the model-fitting encoder allows us to leverage a large collection of unconstrained 2D images without relying on 3D scans. We show significantly improved shape and texture representation power over the linear 3DMM. Consequently, this also benefits other tasks such as 2D face alignment and 3D reconstruction.
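To make the pipeline concrete, here is a toy PyTorch sketch of the layout described above: a CNN encoder predicting projection, shape, and texture parameters, an MLP shape decoder, and a CNN texture decoder. Every layer choice, size, and name is a placeholder of mine, and the analytically-differentiable rendering layer is only indicated by a comment:

```python
# Toy sketch of the encoder / dual-decoder layout; not the paper's exact networks.
import torch
import torch.nn as nn

N_VERT = 5000               # number of mesh vertices (placeholder)
D_SHAPE, D_TEX = 160, 160   # latent sizes (placeholders)

class Encoder(nn.Module):
    """CNN encoder E: image -> projection m, shape code f_S, texture code f_T."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.fc_m = nn.Linear(64, 8)        # weak-perspective projection params
        self.fc_s = nn.Linear(64, D_SHAPE)  # shape representation f_S
        self.fc_t = nn.Linear(64, D_TEX)    # texture representation f_T

    def forward(self, img):
        h = self.backbone(img)
        return self.fc_m(h), self.fc_s(h), self.fc_t(h)

class ShapeDecoder(nn.Module):
    """MLP decoder D_S: f_S -> dense 3D shape (x, y, z per vertex)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(D_SHAPE, 512), nn.ReLU(),
            nn.Linear(512, 3 * N_VERT))

    def forward(self, f_s):
        return self.net(f_s)

class TextureDecoder(nn.Module):
    """CNN decoder D_T: f_T -> texture image."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(D_TEX, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, f_t):
        h = self.fc(f_t).view(-1, 128, 8, 8)
        return self.net(h)

E, D_S, D_T = Encoder(), ShapeDecoder(), TextureDecoder()
img = torch.rand(2, 3, 96, 96)            # dummy input faces
m, f_s, f_t = E(img)
shape, texture = D_S(f_s), D_T(f_t)
# A differentiable rendering layer would now combine (m, shape, texture) into a
# reconstructed image, and the whole pipeline would be trained end-to-end to
# minimise the difference between the rendered face and the input (weak supervision).
print(shape.shape, texture.shape)
```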

We make the following contributions:

1) We learn a nonlinear 3DMM model that has greater representation power than its traditional linear counterpart.

2) We jointly learn the model and the model fitting algorithm via weak supervision, by leveraging a large collection of 2D images without 3D scans. The novel rendering layer enables the end-to-end training.

3) The new 3DMM further improves performance in related tasks: face alignment and face reconstruction.

From Lifestyle Vlogs to Everyday Interactions[paper]

  • a new approach to data collection

Abstract

A major stumbling block to progress in understanding basic human interactions, such as getting out of bed or opening a refrigerator, is lack of good training data. Most past efforts have gathered this data explicitly: starting with a laundry list of action labels, and then querying search engines for videos tagged with each label. In this work, we do the reverse and search implicitly: we start with a large collection of interaction-rich video data and then annotate and analyze it. We use Internet Lifestyle Vlogs as the source of surprisingly large and diverse interaction data. We show that by collecting the data first, we are able to achieve greater scale and far greater diversity in terms of actions and actors. Additionally, our data exposes biases built into common explicitly gathered data. We make sense of our data by analyzing the central component of interaction – hands. We benchmark two tasks: identifying semantic object contact at the video level and non-semantic contact state at the frame level. We additionally demonstrate future prediction of hands.

【style transfer】Stereoscopic Neural Style Transfer[paper]

Abstract

This paper presents the first attempt at stereoscopic neural style transfer, which responds to the emerging demand for 3D movies or AR/VR. We start with a careful examination of applying existing monocular style transfer methods to left and right views of stereoscopic images separately. This reveals that the original disparity consistency cannot be well preserved in the final stylization results, which causes 3D fatigue to the viewers. To address this issue, we incorporate a new disparity loss into the widely adopted style loss function by enforcing the bidirectional disparity constraint in non-occluded regions. For a practical real-time solution, we propose the first feed-forward network by jointly training a stylization sub-network and a disparity sub-network, and integrate them in a feature-level middle domain. Our disparity sub-network is also the first end-to-end network for simultaneous bidirectional disparity and occlusion mask estimation. Finally, our network is effectively extended to stereoscopic videos, by considering both temporal coherence and disparity consistency. We will show that the proposed method clearly outperforms the baseline algorithms both quantitatively and qualitatively.
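As a rough illustration of the disparity constraint above, the sketch below warps the stylized right view into the left view with a given disparity map and penalizes differences only in non-occluded pixels, in both directions. The warping convention, masks, and weighting are my assumptions, not the paper's exact loss:

```python
# Rough sketch of a bidirectional disparity-consistency loss (assumed form, not the paper's).
import torch
import torch.nn.functional as F

def warp_with_disparity(img, disp):
    """Warp `img` (B,C,H,W) horizontally by per-pixel disparity `disp` (B,1,H,W)."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    xs = xs.unsqueeze(0).expand(b, -1, -1).float() - disp.squeeze(1)  # shift x by disparity
    ys = ys.unsqueeze(0).expand(b, -1, -1).float()
    # normalise coordinates to [-1, 1] for grid_sample
    grid = torch.stack((2 * xs / (w - 1) - 1, 2 * ys / (h - 1) - 1), dim=-1)
    return F.grid_sample(img, grid, align_corners=True)

def disparity_loss(stylized_left, stylized_right, disp_left, disp_right,
                   mask_left, mask_right):
    """Bidirectional consistency: each view should match the other view warped into it,
    evaluated only where `mask` marks non-occluded pixels."""
    right_in_left = warp_with_disparity(stylized_right, disp_left)
    left_in_right = warp_with_disparity(stylized_left, disp_right)
    loss_l = (mask_left * (stylized_left - right_in_left) ** 2).mean()
    loss_r = (mask_right * (stylized_right - left_in_right) ** 2).mean()
    return loss_l + loss_r
```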

【style transfer】Neural Stereoscopic Image Style Transfer[paper]

  • Very similar to the previous paper, in both title and content

Abstract

Neural style transfer is an emerging technique which is able to endow daily-life images with attractive artistic styles. Previous work has succeeded in applying convolutional neural networks (CNNs) to style transfer for monocular images or videos. However, style transfer for stereoscopic images is still a missing piece. Different from processing a monocular image, the two views of a stylized stereoscopic pair are required to be consistent to provide observers a comfortable visual experience. In this paper, we propose a novel dual path network for view-consistent style transfer on stereoscopic images. While each view of the stereoscopic pair is processed in an individual path, a novel feature aggregation strategy is proposed to effectively share information between the two paths. Besides a traditional perceptual loss being used for controlling the style transfer quality in each view, a multi-layer view loss is leveraged to enforce the network to coordinate the learning of both the paths to generate view-consistent stylized results. Extensive experiments show that, compared against previous methods, our proposed model can produce stylized stereoscopic images which achieve decent view consistency.
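A minimal sketch of what a multi-layer view loss of this kind could look like: feature maps from several layers of one path are compared with the corresponding features of the other path, already warped into the same view, inside non-occluded regions. Layer choice, warping, and weights are assumptions of mine rather than the paper's exact design:

```python
# Minimal sketch of a multi-layer view loss (assumed form, not the paper's exact design).
import torch

def multilayer_view_loss(feats_left, feats_right_warped, masks, weights=None):
    """feats_left / feats_right_warped: lists of (B,C,H,W) feature maps, one per chosen layer,
    where the right-path features have already been warped into the left view.
    masks: per-layer non-occlusion masks (B,1,H,W). Returns a weighted sum of masked L2 terms."""
    weights = weights or [1.0] * len(feats_left)
    loss = feats_left[0].new_zeros(())
    for f_l, f_r, m, w in zip(feats_left, feats_right_warped, masks, weights):
        loss = loss + w * (m * (f_l - f_r) ** 2).mean()
    return loss

# Example with dummy features from two decoder layers (shapes are placeholders):
fl = [torch.rand(1, 64, 32, 32), torch.rand(1, 32, 64, 64)]
fr = [torch.rand(1, 64, 32, 32), torch.rand(1, 32, 64, 64)]
ms = [torch.ones(1, 1, 32, 32), torch.ones(1, 1, 64, 64)]
print(multilayer_view_loss(fl, fr, ms))
```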

Separating Style and Content for Generalized Style Transfer[paper]

Abstract

Neural style transfer has drawn broad attention in recent years. However, most existing methods aim to explicitly model the transformation between different styles, and the learned model is thus not generalizable to new styles. We here attempt to separate the representations for styles and contents, and propose a generalized style transfer network consisting of style encoder, content encoder, mixer and decoder. The style encoder and content encoder are used to extract the style and content factors from the style reference images and content reference images, respectively. The mixer employs a bilinear model to integrate the above two factors and finally feeds it into a decoder to generate images with target style and content. To separate the style features and content features, we leverage the conditional dependence of styles and contents given an image. During training, the encoder network learns to extract styles and contents from two sets of reference images in limited size, one with shared style and the other with shared content. This learning framework allows simultaneous style transfer among multiple styles and can be deemed as a special ‘multi-task’ learning scenario. The encoders are expected to capture the underlying features for different styles and contents which is generalizable to new styles and contents. For validation, we applied the proposed algorithm to the Chinese Typeface transfer problem. Extensive experiment results on character generation have demonstrated the effectiveness and robustness of our method.
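To illustrate the style-encoder / content-encoder / bilinear-mixer / decoder layout, here is a small PyTorch sketch for glyph-sized images; every architecture, size, and name is an illustrative placeholder of mine (PyTorch's nn.Bilinear stands in for the bilinear model):

```python
# Small sketch of a style encoder, content encoder, bilinear mixer, and decoder (placeholders).
import torch
import torch.nn as nn

D_STYLE, D_CONTENT, D_MIX = 128, 128, 256   # latent sizes (placeholders)

def make_encoder(out_dim):
    """Tiny CNN encoder mapping a 1x64x64 reference image to a latent vector."""
    return nn.Sequential(
        nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, out_dim))

style_encoder = make_encoder(D_STYLE)            # extracts the style factor
content_encoder = make_encoder(D_CONTENT)        # extracts the content factor
mixer = nn.Bilinear(D_STYLE, D_CONTENT, D_MIX)   # bilinear combination of the two factors
decoder = nn.Sequential(                         # decodes the mixed code back to an image
    nn.Linear(D_MIX, 64 * 8 * 8), nn.ReLU(), nn.Unflatten(1, (64, 8, 8)),
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid())

style_ref = torch.rand(1, 1, 64, 64)    # image carrying the target style
content_ref = torch.rand(1, 1, 64, 64)  # image carrying the target content (character)
out = decoder(mixer(style_encoder(style_ref), content_encoder(content_ref)))
print(out.shape)  # toy-resolution generated glyph
```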


Today's knowledge consolidation

PCA: Machine Learning: PCA (Principal Component Analysis); The Mathematical Principles of PCA
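A quick NumPy refresher to go with the linked PCA notes: center the data, take the SVD, and project onto the top principal components (toy data and sizes):

```python
# PCA refresher: mean-centre, SVD, project onto top-k principal directions.
import numpy as np

X = np.random.randn(200, 50)          # 200 samples, 50 dimensions (toy data)
X_centered = X - X.mean(axis=0)       # 1) subtract the per-dimension mean

# 2) SVD of the centred data: rows of Vt are the principal directions
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

k = 5                                         # keep the top-k components
components = Vt[:k]                           # (k, 50) principal axes
Z = X_centered @ components.T                 # (200, k) low-dimensional codes
X_approx = Z @ components + X.mean(axis=0)    # reconstruction from k components

explained = (S[:k] ** 2).sum() / (S ** 2).sum()
print(f"variance explained by {k} components: {explained:.2%}")
```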

MLP:


Reposted from blog.csdn.net/weixin_39284803/article/details/82915056