Yu-Ching Lin
林佑璟 (Vagante)
NTUGICE MPAC Lab

Multimedia Processing and Communication Lab
Graduate Institute of Communication Engineering
National Taiwan University

Room 505, BL Building,
National Taiwan University,
No.1, Sec. 4, Roosevelt Road, Taipei 10617,
Taiwan R.O.C.

Advisor: Prof. Homer H. Chen


Personal Info

1984.09.08
Born in Taipei, Taiwan, R.O.C.

I am pursuing a Master's degree in GICE at NTU (starting from 2007). My research interests include multimedia information retrieval, music feature representation, and signal processing. The current project I am working on is how to represent a song semantically. I received my B.S. from the EE department of NTU in 2007.

Curriculum Vitae: cv-en.pdf
blog: wanderland
bbs(telnet): telnet://ptt2.cc (board: vagante)
MSN/skype: ck891046@msn.com
E-mail: vagante@gmail.com


Research

Audio Feature Representation (2007/3~)
Features developed for speech recognition are often adopted for MIR; in particular, MFCC is reported as the most powerful. However, all of these features are extracted from a short period of signal and are typically used to represent the timbre of a piece, so their ability to capture music semantics is in doubt. Besides, there are still many open issues in feature extraction within the MIR field. We want to address these issues and perhaps present a new audio feature representation in the future; some techniques from text-IR or image-IR may be adopted at that point.
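To illustrate the kind of frame-level processing described above, here is a minimal pure-Python sketch that splits a signal into overlapping short-time frames and computes a per-frame spectral centroid with a naive DFT. The function names, frame length, and hop size are illustrative choices, not the features actually used in this project.

```python
import math

def frame_signal(signal, frame_len=256, hop=128):
    """Split a signal into overlapping short-time frames."""
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, hop)]

def spectral_centroid(frame, sample_rate=8000.0):
    """Magnitude-weighted mean frequency of one frame (naive DFT)."""
    n = len(frame)
    mags = []
    for k in range(n // 2):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    total = sum(mags)
    if total == 0:
        return 0.0
    freqs = [k * sample_rate / n for k in range(n // 2)]
    return sum(f * m for f, m in zip(freqs, mags)) / total

# A 1 kHz sine sampled at 8 kHz: the centroid of each frame sits near 1000 Hz.
sig = [math.sin(2 * math.pi * 1000 * t / 8000) for t in range(1024)]
centroids = [spectral_centroid(f) for f in frame_signal(sig)]
```

This is exactly the kind of short-horizon, timbre-oriented descriptor criticized above: each value summarizes only ~32 ms of audio and says nothing about melody, harmony, or long-term structure.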


Music Emotion Classification (2006/9~)
Due to the subjective nature of human perception, classifying the emotion of music is a challenging problem. Simply assigning an emotion class to a song segment in a deterministic way does not work well, because not all people share the same feeling for a song. In this work, we take a different approach to music emotion classification: for each music segment, the approach determines how likely it is that the segment belongs to an emotion class. Two fuzzy classifiers are adopted to provide a measurement of emotion strength. This measurement is also found useful for tracking the variation of music emotions within a song. Results illustrate the effectiveness of the approach.
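One standard way to obtain such soft memberships instead of hard labels is the fuzzy k-NN rule (Keller et al.), sketched below in pure Python on toy 2-D feature vectors. The data, parameters, and function names are illustrative only; this is not the paper's actual classifier or feature set.

```python
import math

def fuzzy_knn_membership(x, train, k=3, m=2.0):
    """Fuzzy k-NN: return a soft membership degree per class instead of a
    single hard label.  `train` is a list of (feature_vector, label) pairs."""
    # Keep the k training samples nearest to x.
    nearest = sorted((math.dist(x, v), label) for v, label in train)[:k]
    classes = {label for _, label in train}
    exponent = 2.0 / (m - 1.0)  # fuzzifier m controls how soft the weights are
    memberships = {}
    for c in classes:
        num = den = 0.0
        for d, label in nearest:
            w = 1.0 / (d ** exponent + 1e-12)  # inverse-distance weight
            num += w * (1.0 if label == c else 0.0)
            den += w
        memberships[c] = num / den
    return memberships

# Toy 2-D "emotion" features: two classes, one ambiguous query point.
train = [((0.0, 0.0), "calm"), ((0.1, 0.0), "calm"),
         ((1.0, 1.0), "angry"), ((0.9, 1.0), "angry")]
u = fuzzy_knn_membership((0.5, 0.5), train, k=4)
```

For the ambiguous query above, both memberships come out nonzero and sum to one, which is precisely the "emotion strength" measurement the deterministic assignment lacks.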


Publication

/*Journal*/

[1] Y.-H. Yang, Y.-C. Lin, Y.-F. Su, and H.-H. Chen, "A regression approach to music emotion recognition," IEEE Trans. Audio, Speech and Language Processing (TASLP), vol. 16, no. 2, pp. 448-457, Feb. 2008. (abstract, paper, slides) [project page, include dataset]


/*Conference*/

[9] Y.-H. Yang, Y.-C. Lin, and H.-H. Chen, "Personalized music emotion recognition," in Proc. ACM SIGIR Conf. Research and Development in Information Retrieval 2009 (SIGIR'09), Boston, USA, short paper, accepted.

[8] Y.-C. Lin, Y.-H. Yang, and H.-H. Chen, "Exploiting genre for music emotion classification," in Proc. IEEE Int. Conf. Multimedia and Expo. 2009 (ICME'09), Cancun, Mexico, accepted.

[7] Y.-H. Yang, Y.-C. Lin, and H.-H. Chen, "Clustering for music search results," in Proc. IEEE Int. Conf. Multimedia and Expo. 2009 (ICME'09), Cancun, Mexico, accepted.

[6] H.-T. Cheng, Y.-H. Yang, Y.-C. Lin, and H.-H. Chen, "Multimodal structure segmentation and analysis of music using audio and textual information," in Proc. IEEE Int. Symp. Circuits and Systems 2009 (ISCAS'09), Taipei, Taiwan, accepted.
(paper) [project page, include dataset]

[5] Y.-H. Yang, Y.-C. Lin, H.-T. Cheng, and H.-H. Chen, "Mr.Emo: Music retrieval in the emotion plane," in Proc. ACM Multimedia 2008 (MM'08) (demonstration), pp. 1003-1004.
(paper, demo)

[4] Y.-H. Yang, Y.-C. Lin, H.-T. Cheng, I.-B. Liao, Y.-C. Ho, and H.-H. Chen, "Toward multi-modal music emotion classification," in Proc. Pacific-Rim Conf. Multimedia 2008 (PCM'08), pp. 70-79.
(paper, slides)

[3] H.-T. Cheng, Y.-H. Yang, Y.-C. Lin, and H.-H. Chen, "Automatic chord recognition for music classification and retrieval," in Proc. IEEE Int. Conf. Multimedia and Expo. 2008 (ICME'08), Hannover, Germany, pp. 1505-1508.
(paper)

[2] Y.-H. Yang, Y.-F. Su, Y.-C. Lin, and H.-H. Chen, "Music emotion recognition: The role of individuality," in Proc. ACM SIGMM Int. Workshop on Human-centered Multimedia 2007, in conjunction with ACM Multimedia (ACM MM/HCM'07), Augsburg, Germany, pp. 13-21.
(abstract, paper, slides) [project page(include dataset and the software 'AnnoEmo')]

[1] Y.-H. Yang, Y.-C. Lin, Y.-F. Su, and H.-H. Chen, "Music emotion classification: A regression approach," in Proc. IEEE Int. Conf. Multimedia and Expo. 2007 (ICME'07), Beijing, China, pp. 208-211.
(abstract, paper, poster) [project page(include dataset)]


Work Experiences

07 Jul.-Sep. Research Assistant at the Institute of Information Science, Academia Sinica (中研院). (link)


Awards

07 Sep. Third place in the 3rd NISSAN Design Award (裕隆汽車創新風雲賞). (link)


Activities

(course taken)
   multimedia analysis and indexing, computer vision, digital video effects, digital signal processing, algorithm, data structure, pattern recognition, computer network, speech signal processing, etc.
(club)
   台大國樂團 (NTU Chinese Orchestra)
      club leader, 2005.09~2006.07

   小巨人絲竹樂團 (Little Giant Chinese Chamber Orchestra) (link) (youtube)


Links

(academia) (further links)
MIREX http://www.music-ir.org/mirex2007/index.php/Main_Page

(others)
NISSAN http://www.nissan.com.tw/2007designaward/work-info.asp#2

(course) (further link)
Network and Computer Security
Object Oriented Software Design


Friends

(ntu) Yi-Hsuan Yang . Chia-Kai Liang . Tien-Lin Wu . Winston H. Hsu .

(last update: 2009/4/2)