With the rapid growth of the World Wide Web, information content and services of every kind have continually expanded, and human life and interaction have steadily shifted onto online platforms. Together with rapid advances in wireless networking and multimedia technologies, traditional information retrieval techniques have been combined with these new media and platforms, giving rise to a great deal of innovative research. These innovations and their important applications continue to attract attention and lively discussion in both academia and industry. This workshop therefore invites scholars and experts from Taiwan and abroad to exchange ideas and techniques.

This workshop continues an annual series that includes the 2002 Workshop on Automatic Text Categorization, the 2003 Workshop on Information Retrieval and Computer-Assisted Language Teaching, the 2004 Workshop on Text Mining, the 2005 Workshop on Web Information Retrieval Technologies and Trends, the 2006 Workshop on Web Mining Technologies and Trends, the 2007 Workshop on Web 2.0 Technologies and Applications, the 2008 Workshop on Social Computing and Mining for Web Community Services, the 2009 Workshop on Mobile Information Retrieval and Location-Based Services, the 2010 Workshop on Innovative Information Retrieval Technologies, the 2011 Workshop on Music Information Retrieval and Social Network Services, the 2013 and 2014 Workshops on Top-Tier Papers in Information Retrieval, the 2015 workshop "New Trends in Cross-Domain Natural Language Processing and Information Retrieval," and the 2016 workshop "The Future of Information Retrieval." Each year's theme has drawn a wide and enthusiastic response.

In recent years, artificial intelligence has attracted growing attention and expectation, and the integration of information retrieval with multimedia and natural language processing technologies has generated substantial demand for new techniques. Riding this wave, this year's workshop takes "Information Retrieval and Artificial Intelligence" as its theme. We have invited two leading scholars from abroad, along with outstanding young scholars from Taiwan, to share state-of-the-art work in artificial intelligence, natural language processing, and information retrieval. This is an excellent opportunity to engage with these emerging technologies, and we warmly welcome everyone to register.
Prof. Pu-Jen Cheng (鄭卜壬), Department of Computer Science and Information Engineering, National Taiwan University
Dr. Lun-Wei Ku (古倫維), Institute of Information Science, Academia Sinica
|9:00-10:00||Keynote Speech||Prof. Yulan He||Department of Computer Science, University of Warwick|
Emotion Ranking and Emotion Transition Modelling in Text
Abstract: Text might contain or evoke multiple emotions with varying intensities. Traditional approaches typically cast the problem of detecting multiple emotions from text as multi-label classification. This talk will first present a novel ranking-based approach that generates a ranked list of relevant emotions, in which top-ranked emotions are more intensely associated with the text than lower-ranked ones. Furthermore, since emotions might be evoked by different hidden topics, it is important to unveil and incorporate such topical information to understand how the emotions are evoked. A novel neural network approach that incorporates topical information will be discussed for relevant emotion ranking. Finally, by modelling the topics and emotions in successive sentences as a Markov chain, a novel probabilistic graphical model will be presented that automatically detects sentence-level topic and emotion labels despite being trained from document-level emotion labels only.
Biography: Yulan He is a Professor in the Department of Computer Science at the University of Warwick, UK. She obtained her PhD degree in spoken language understanding from the University of Cambridge. Yulan is experienced in statistical modelling and text mining, particularly the integration of machine learning and natural language processing for text understanding. She has published over 150 papers on topics including sentiment analysis, information extraction, clinical text mining, recommender systems, learning analytics and spoken dialogue systems. She has served as an Area Chair in Sentiment Analysis at top natural language processing conferences including ACL, EMNLP and NAACL.
|10:20-11:20||Invited Talk 1||Prof. Ting-Hao (Kenneth) Huang (黃挺豪)||College of Information Sciences and Technology, Pennsylvania State University|
Crowd Research: Data, Workflows, and Crowd-AI Systems
Abstract: Amazon Mechanical Turk, one of the largest crowdsourcing marketplaces, was launched to the public in 2005. Since then, researchers have developed a tremendous amount of work around this platform. In this talk, we will walk through three phases of crowd research through the lens of building future computer systems, including (i) labeling datasets and providing human feedback for computer systems, (ii) developing human computation workflows for complex tasks, and (iii) creating real-time interactive systems using crowd-AI architectures.
Biography: Ting-Hao (Kenneth) Huang is a tenure-track assistant professor in the College of Information Sciences and Technology (IST) at the Pennsylvania State University (University Park). His research focuses on real-time crowdsourcing and conversational agents, under the broader umbrella of human-in-the-loop architectures. Prof. Huang has published in top HCI and AI conferences, including CHI, UIST, HCOMP, and CI, as well as in top NLP conferences, including NAACL, COLING, EMNLP, IJCNLP, and LREC. He received a Best Paper Honorable Mention Award at CHI 2018 and at CHI LBW 2016.
|11:20-12:20||Invited Talk 2||Prof. Yun-Nung (Vivian) Chen (陳縕儂)||Department of Computer Science and Information Engineering, National Taiwan University|
Towards Open-Domain Conversational AI
Abstract: Interacting with machines via natural language has been an emerging trend. The goal of developing open-domain dialogue systems that not only emulate human conversation but also fulfill complex tasks, such as travel planning, has long seemed elusive. Recent advances in deep learning have opened new research frontiers for end-to-end conversational systems. This talk will review deep learning and reinforcement learning technologies developed for two types of conversational agents. The first is a task-oriented dialogue system that helps users accomplish tasks ranging from meeting scheduling to vacation planning. The second is a social bot that can converse seamlessly and appropriately with humans. The talk will conclude with advanced work that attempts to develop open-domain neural dialogue systems by combining the strengths of both types of agents.
Biography: Yun-Nung (Vivian) Chen is currently an assistant professor in the Department of Computer Science & Information Engineering at National Taiwan University. She earned her Ph.D. degree from Carnegie Mellon University, where her research interests focused on spoken dialogue systems, language understanding, natural language processing, and multimodality. She has received Google Faculty Research Awards, the MOST Young Scholar Fellowship, Student Best Paper Awards from IEEE SLT 2010 and IEEE ASRU 2013, a Student Best Paper Nomination from Interspeech 2012, and the Distinguished Master Thesis Award from ACLCLP. Prior to joining National Taiwan University, she worked in the Deep Learning Technology Center at Microsoft Research Redmond.
|14:00-15:00||Researcher Meet ‘n Greet|
|15:20-16:20||Invited Talk 3||Dr. Yi-Hsuan Yang (楊奕軒)||Research Center for Information Technology Innovation, Academia Sinica|
Machine Learning for Creative AI Applications in Music
Abstract: In this talk, I will briefly introduce three recent projects in our lab at Academia Sinica on creative applications in music: the singing voice separation project, the GenMusic (music generation) project, and the DJnet project. The first project separates the singing voice from the musical accompaniment, which can serve as a pre-processing step for many music-related applications. The second project learns from a massive collection of MIDI files to generate multi-track music with a generative adversarial network (GAN); the generative model can produce music either from scratch or by accompanying a given instrument track. The third project aims to create an AI DJ that knows how to manipulate, sample, and sequence musical pieces to create a personalized playlist. The goal of these projects is to enrich the way people create and interact with music in their daily lives, using the latest machine learning (deep learning) techniques.
Biography: Yi-Hsuan Yang is an Associate Research Fellow with Academia Sinica. He received his Ph.D. degree in Communication Engineering from National Taiwan University in 2010. He is also a Joint-Appointment Associate Professor with National Cheng Kung University, Taiwan. His research interests include music information retrieval, affective computing, multimedia, and machine learning. Dr. Yang was a recipient of the 2011 IEEE Signal Processing Society Young Author Best Paper Award, the 2012 ACM Multimedia Grand Challenge First Prize, the 2014 Ta-You Wu Memorial Research Award of the Ministry of Science and Technology, Taiwan, and the 2015 Best Conference Paper Award of the IEEE Multimedia Communications Technical Committee. He is the author of the book Music Emotion Recognition (CRC Press, 2011). In 2014, he served as a Technical Program Co-Chair of the International Society for Music Information Retrieval Conference (ISMIR). In 2016, he began his term as an Associate Editor for the IEEE Transactions on Affective Computing and the IEEE Transactions on Multimedia. Dr. Yang is a senior member of the IEEE.
|16:20-17:20||Invited Talk 4||Prof. Wei-Chen Chiu (邱維辰)||Department of Computer Science, National Chiao Tung University|
Self-Contained Style Transfer and Saliency-Guided Image Manipulation
Abstract: While deep learning approaches have demonstrated impressive results in a wide variety of computer vision and image processing tasks, in this talk I will introduce two recent works from my research group that "hack" two important and practical deep models: image style transfer and saliency estimation. In the first work, self-contained style transfer, we integrate the power of steganography into style transfer to resolve the issues arising from the content inconsistency between the original image and its stylized output, thereby achieving reverse and serial style transfer. In the second work, we tackle the problem of saliency-guided image manipulation for adjusting the saliency distribution over image regions, which has potential applications in human-computer interaction, autonomous driving, and advertisement, wherever specific regions or objects need to be highlighted. We aim to view these models and tasks from different perspectives in order to discover more interesting, unique, yet practical problems.
Biography: Wei-Chen Chiu (邱維辰) received the B.S. degree in Electrical Engineering and Computer Science and the M.S. degree in Computer Science from National Chiao Tung University (Hsinchu, Taiwan) in 2008 and 2009, respectively. He received the degree of Doctor of Engineering Science (Dr.-Ing.) from the Max Planck Institute for Informatics (Saarbrücken, Germany) in 2016. He joined the Department of Computer Science, National Chiao Tung University as an assistant professor in August 2017 and established the Enriched Vision Applications Laboratory, which has since grown to 12 graduate students and 3 research assistants. He was a postdoctoral researcher at the Research Center for Information Technology Innovation, Academia Sinica, from February to July 2017, and a research scientist at Viscovery, a Taiwanese startup, from August 2016 to January 2017. His current research interests include computer vision, machine learning, and deep learning, with a special focus on generative models.
General admission: members NT$700, non-members NT$900;
Students: members NT$500, non-members NT$700;
- Postal remittance: account name "The Association for Computational Linguistics and Chinese Language Processing" (社團法人中華民國計算語言學學會); account number 19166251. (Please note "IR Workshop" and your Registration ID in the remittance memo field; multiple registrants from the same organization may combine their remittance.)
Contact: 陳彥勻 / 高嘉駿 Tel: (02) 2788-3799 ext. 1559 / 1564 Email: email@example.com / firstname.lastname@example.org Address: Institute of Information Science, Academia Sinica, 128 Academia Road, Section 2, Nankang, Taipei