Yu-Ying Yeh

Hi. I'm Yu-Ying Yeh (葉鈺濙).


About Me

I am a research assistant at National Taiwan University and will begin a master's program at UC San Diego this fall. My research interests focus on machine learning and computer vision. I am currently working on projects related to generative models for video. I am also interested in representation learning, domain adaptation, and video prediction.


University of California San Diego

M.S. student in Computer Science and Engineering

National Tsing Hua University

Non-degree, Computer Science

National Chiao Tung University

Non-degree, Computer Science

National Taiwan University

B.S. in Physics & B.A. in Economics

Work Experience

Research Assistant, National Taiwan University

Vision and Learning Lab

Supervised by Prof. Yu-Chiang Frank Wang

Research Assistant, Academia Sinica

Multimedia and Machine Learning Lab

Supervised by Dr. Yu-Chiang Frank Wang

Assistant Structured Product Manager, Cathay United Bank


Here are my recent research projects.

Detach and Adapt: Learning Cross-Domain Disentangled Deep Representation

Yen-Cheng Liu, Yu-Ying Yeh, Tzu-Chien Fu, Wei-Chen Chiu, Sheng-De Wang, Yu-Chiang Frank Wang (CVPR 2018 Spotlight)

Full paper: [PDF] / Code: To be updated soon.

While representation learning aims to derive interpretable features for describing visual data, representation disentanglement further refines such features so that particular image attributes can be identified and manipulated. However, one cannot easily address this task without ground truth annotations for the training data. To address this problem, we propose a novel deep learning model, the Cross-Domain Representation Disentangler (CDRD). By observing fully annotated source-domain data and unlabeled target-domain data of interest, our model bridges the information across data domains and transfers the attribute information accordingly. Thus, cross-domain feature disentanglement and adaptation can be jointly performed. In the experiments, we provide qualitative results to verify our disentanglement capability. Moreover, we further confirm that our model can be applied to classification tasks in unsupervised domain adaptation, and performs favorably against state-of-the-art image disentanglement and translation methods.

Adaptation and Re-Identification Network: An Unsupervised Deep Transfer Learning Approach to Person Re-Identification

Yu-Jhe Li, Fu-En Yang, Yen-Cheng Liu, Yu-Ying Yeh, Xiaofei Du, Yu-Chiang Frank Wang (CVPR 2018 workshop)

Full paper: [arXiv] / Code: To be updated soon.

Person re-identification (Re-ID) aims at recognizing the same person from images taken across different cameras. To address this task, one typically requires a large amount of labeled data for training an effective Re-ID model, which might not be practical for real-world applications. To alleviate this limitation, we choose to exploit a sufficient amount of pre-existing labeled data from a different (auxiliary) dataset. By jointly considering such an auxiliary dataset and the dataset of interest (but without label information), our proposed adaptation and re-identification network (ARN) performs unsupervised domain adaptation, which leverages information across datasets and derives domain-invariant features for Re-ID purposes. In our experiments, we verify that our network performs favorably against state-of-the-art unsupervised Re-ID approaches, and even outperforms a number of baseline Re-ID methods that require fully supervised data for training.


Generative Models

TensorFlow implementations of the Variational Autoencoder (VAE) and Generative Adversarial Networks (GANs).
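The core of a VAE, independent of any framework, is the reparameterization trick and the closed-form KL divergence between the approximate posterior and a standard normal prior. The NumPy sketch below illustrates just these two pieces (it is an illustrative sketch, not taken from the repository above; the function names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I), so sampling stays
    # differentiable with respect to mu and log_var
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    # KL( N(mu, sigma^2) || N(0, I) ) in closed form,
    # summed over the latent dimensions of each sample
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=1)

# When the posterior already matches the prior (mu = 0, log_var = 0),
# the KL term is exactly zero for every sample.
mu = np.zeros((4, 2))
log_var = np.zeros((4, 2))
z = reparameterize(mu, log_var)
print(z.shape)                      # (4, 2)
print(kl_divergence(mu, log_var))   # [0. 0. 0. 0.]
```

In a full VAE, this KL term is added to a reconstruction loss (e.g. a pixel-wise cross-entropy from the decoder) to form the negative ELBO that training minimizes.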