The 3rd Jittor Workshop on Deep Learning

April 18, 2021


General Information

Deep learning technology is widely used across fields of artificial intelligence such as computer vision, intelligent robotics, smart cities, machine translation, and natural language processing, and has achieved unprecedented breakthroughs.

Tsinghua University released Jittor, a new machine learning framework, on March 20, 2020; it is open source on GitHub.

The Jittor Workshop on Deep Learning is intended to provide a forum for disseminating novel research ideas and significant practical results in deep learning, and for exchanging experience in the development of the Jittor framework. The 3rd workshop will be held on April 18, 2021.

Invited Speakers

Jun Zhu, Tsinghua University
Wangmeng Zuo, Harbin Institute of Technology

Venue & Registration

The 3rd Jittor Workshop on Deep Learning will be held in Jingyuan Hall, No.5 Building, Xijiao Hotel.

The workshop is free to attend, but all participants are requested to register below. Registered participants of the workshop will receive a conference bag, which includes a printed program and recent issues of the CVM journal.

The safety of our participants is our first priority. Due to COVID-19, we can only accept the first 100 registrants.


Please register now!

 
Program

15:00 - 15:10 Opening Session
15:10 - 15:50 ZhuSuan: Differentiable Probabilistic Programming and Its Applications
Jun Zhu, Tsinghua University
15:50 - 16:30 A Preliminary Study on Deep Network Learning Method for Low Labeling Cost and Non-ideal Supervision
Wangmeng Zuo, Harbin Institute of Technology


16:30 - 17:30 Awarding Ceremony of the First "Jittor" Artificial Intelligence Contest


Invited Speakers and Abstracts

Speaker 1: Jun Zhu, Tsinghua University

Bio: Dr. Jun Zhu is a Professor in the Department of Computer Science at Tsinghua University and a Chief Scientist of Machine Learning at the Beijing Academy of Artificial Intelligence. He was an Adjunct Faculty member at the Machine Learning Department of Carnegie Mellon University from 2015 to 2018. His research interests lie in machine learning. He has published over 100 papers in prestigious conferences and journals. He is an Associate Editor-in-Chief of IEEE Trans. on PAMI and has regularly served as a (senior) area chair for ICML, NeurIPS, ICLR, and AAAI. He is a recipient of the Xplorer Prize, the IEEE Intelligent Systems "AI's 10 to Watch" Award, MIT TR35 China, the CCF Young Scientist Award, and the CCF first-class Natural Science Award.

Title: ZhuSuan: A Differentiable Probabilistic Programming Library

Abstract: Probabilistic models provide a set of powerful tools for dealing with the uncertainty that is pervasive in machine learning applications. Probabilistic programming uses computer programs to represent probabilistic models, and supports sampling as well as probabilistic inference conditioned on arbitrary observations. Traditionally, the dependency relationships in probabilistic programs are mainly linear or generalized linear, which serves as the basis of many successful models and inference algorithms. However, such linearity also limits the expressiveness and flexibility of probabilistic programs. Differentiable probabilistic programming allows probabilistic programs to have nonlinear dependencies under a proper parameterization (e.g., neural networks), and can learn the unknown parameters from data via gradient-based methods. This programming paradigm is easy to extend, largely avoids the tedious model-selection process, and makes it possible to deploy probabilistic models in an end-to-end manner. This talk presents ZhuSuan, an open-source library for differentiable probabilistic programming. We will discuss the design and implementation of differentiable probabilistic programming systems.
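The gradient-based learning of parameters mentioned in the abstract can be illustrated with a minimal, self-contained sketch (plain NumPy, not ZhuSuan's actual API): fitting the unknown mean of a unit-variance Gaussian by gradient descent on the average negative log-likelihood.

```python
import numpy as np

# Toy illustration (not ZhuSuan code): fit the mean of a unit-variance
# Gaussian by gradient descent on the average negative log-likelihood,
#   NLL(mu) = mean( 0.5 * (x - mu)^2 ) + const.
rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.0, size=1000)

mu = 0.0          # unknown parameter to be learned from data
lr = 0.1          # learning rate
for _ in range(200):
    grad = -np.mean(data - mu)   # d NLL / d mu, computed analytically here
    mu -= lr * grad

# mu converges to the maximum-likelihood estimate, i.e. the sample mean
```

In a differentiable probabilistic programming system, the same idea scales up: the dependencies between variables may be neural networks, and the gradients are obtained by automatic differentiation rather than derived by hand.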

Speaker 2: Wangmeng Zuo, Harbin Institute of Technology

Bio: Wangmeng Zuo is currently a Professor in the School of Computer Science and Technology, Harbin Institute of Technology. He received the Ph.D. degree in computer application technology from the Harbin Institute of Technology, Harbin, China, in 2007. His current research interests include image enhancement and restoration, image and face editing, object detection, visual tracking, and image classification. He has published over 100 papers in top-tier academic journals and conferences, and his publications have received more than 20,000 citations according to Google Scholar. He has served as an Area Chair for ICCV 2019 and CVPR 2020/2021, and is an Associate Editor of IEEE Trans. on Pattern Analysis and Machine Intelligence and IEEE Trans. on Image Processing.

Title: A Preliminary Study on Learning Deep Networks under Low Annotation Cost and Non-ideal Supervision

Abstract: Deep learning has achieved unprecedented success in many computer vision tasks, which largely depends on massive, well-annotated data. However, annotation is expensive and laborious, especially for localization tasks such as detection and segmentation, and for several low-level vision tasks the ideal ground-truth images are usually unavailable. In this talk, we first investigate weakly supervised learning from a knowledge perspective and present two examples of acquiring knowledge from auxiliary data to assist weakly supervised learning. Then, we show that ideal ground-truth images are not available for deep ISP and raw super-resolution, and present our alternative solutions for learning deep networks with non-ideal supervision (e.g., color inconsistency and spatial deformation).



Sponsored By
Beijing National Research Center for Information Science and Technology

Tsinghua-Tencent Joint Lab for Internet Innovation Technology