WSR 2016
Workshop on Smart Robotics 2016
Jun 3rd, Tsinghua University, Beijing, China

Workshop on Smart Robotics 2016 (WSR 2016)

This decade is witnessing rapidly growing interest in and intense research on robotics, which will play a key role in the development of science and technology in this century. The aims of robotics include enabling robots to perceive and learn from their environment, to work independently or provide support in labor-intensive, difficult and possibly complex situations, and to exhibit higher-level intelligence such as social behavior and cognition. To make robots smarter, knowledge and techniques from multiple research areas must be combined, including intelligent sensors, computer vision, geometric processing, operating systems and automation. To encourage the integration of these various research streams from robotics and other relevant disciplines, this workshop aims to bring together a diverse, multidisciplinary group of researchers interested in intelligent robotics. The workshop will include several invited talks and a poster session introducing the latest progress in intelligent robotics. The topics of this workshop cover:
  • Geometric computing for robotics;
  • Scene understanding of robots;
  • Smart control architectures of robots;
  • Intelligent Human-Robot interaction;
  • Robots and smart manipulation;
  • System software for robots;
  • Security and safety of robots.

Program

Program Schedule

Location: Lecture Hall (多功能厅, Multi-Function Hall), 2nd floor, FIT Building



09:00-09:05 Chairs' Welcome
09:05-09:45 Invited Talk I: Prof. Charlie C. L. Wang

Geometric Computing for Robotic Applications

09:45-10:25 Invited Talk II: Prof. Helmut Pottmann

Geometric Optimization for Manufacturing of Freeform Shapes

10:25-10:50 Coffee Break
10:50-11:20 Invited Talk III: Prof. Peter Hall

Combining Vision and Touch for Robot Sensing

11:20-11:50 Invited Talk IV: Prof. Huaping Liu

Visual-Tactile Fusion for Robotic Perception

11:50-13:30 Lunch
13:30-14:00 Invited Talk V: Prof. Shaojie Shen

Robust Autonomous Flight in Cluttered Environments

14:00-14:30 Invited Talk VI: Prof. Jia Pan

Model-Driven and Data-Driven Multi-agent Navigation

14:30-15:00 Invited Talk VII: Prof. Ralph R. Martin

Shape retrieval of non-rigid objects

15:00-15:20 Coffee Break

Poster Session: System and Applications of Robotics

15:20-15:30 Di Guo (PhD student, Tsinghua University):

Object Discovery and Grasp Detection with a Shared Convolutional Neural Network

15:30-15:40 Chen-Ming Wu (PhD student, Tsinghua University) and Yong-Jin Liu (Tsinghua University):

Delta DLP 3D Printing with Large Size

15:40-15:50 Yu He (Zhejiang University of Technology) and Sheng-Yong Chen (Tianjin University of Technology):

Depth image optimization method based on a multi-TOF camera system

15:50-16:00 Hao Li (PhD student, Tsinghua University):

Nerva: Appending Nervous Reflex Circuit for Humanoid Robots with Natural Language

16:00-16:10 Tinglong Tang (Zhejiang University of Technology) and Sheng-Yong Chen (Tianjin University of Technology):

Incremental support vector machine combined with active learning and semi-supervised learning

16:10-16:20 Lele Cao (PhD student, Tsinghua University):

Robust Tactile Recognition

16:20-16:30 Yan-Hong Yang (Zhejiang University of Technology) and Sheng-Yong Chen (Tianjin University of Technology):

3D Morphable Face Model and Fitting





Invited Speakers

Prof. Charlie C.L. Wang

Delft University of Technology, The Netherlands

 

Talk Title:


Geometric Computing for Robotic Applications


Talk Abstract:


With the development of computer vision and imaging techniques, robotic systems are now increasingly able to operate in unstructured environments. The control systems of robots face the challenges of complex geometry and high degrees of freedom (DoFs) in motion. In this talk, a few approaches to tackle these difficulties will be presented. Specifically, geometric computing in the applications of positioning, grasping, imitation and swarm steering will be introduced: (1) computing stable contact interfaces for customized surgical jigs, (2) rope caging and grasping, (3) motion imitation based on sparsely sampled correspondence, and (4) steering micro-robotic swarms by dynamic actuating fields. Experimental results will be shown during the talk, and possibilities for future work will be discussed.


Short Bio:


Charlie C.L. Wang is currently a Professor and Chair of Advanced Manufacturing in the Department of Design Engineering at Delft University of Technology, The Netherlands. Prior to this position, he was a Professor of Mechanical and Automation Engineering at the Chinese University of Hong Kong, where he started his academic career in 2003. He received a BEng degree (1998) in mechatronics engineering from Huazhong University of Science and Technology, Wuhan, China. He received his MPhil (2000) and PhD (2002) degrees in mechanical engineering from Hong Kong University of Science and Technology.
Prof. Wang is a Fellow of the American Society of Mechanical Engineers (ASME), and his research interests include geometric computing, computer-aided design, advanced manufacturing and computational physics. He has received several awards from professional societies, including the ASME CIE Young Engineer Award (2009), the Best Paper Awards of the ASME CIE Conferences (in 2008 and 2001), the Prakash Krishnaswami CAPPD Best Paper Award of the ASME CIE Conference in 2011, the NAMRI/SME Outstanding Paper Award in 2013, and the Best Paper Award of the Computational Visual Media journal in 2015. He serves on the editorial boards of several journals, including Computer-Aided Design, IEEE Transactions on Automation Science and Engineering, ASME Journal of Computing and Information Science in Engineering, and International Journal of Precision Engineering and Manufacturing.

 

Prof. Helmut Pottmann

TU Wien, Austria

 

Talk Title:


Geometric Optimization for Manufacturing of Freeform Shapes


Talk Abstract:


Freeform shapes represent one of today's important manufacturing challenges. This applies to numerically controlled (NC) machining of parts to be produced in large amounts as well as to outer surfaces and sub-constructions for unique designs in contemporary architecture. However, currently no systematic method exists which could reconcile the competing aims of faithfully reproducing smooth surfaces with their efficient segmentation into easily manufacturable parts. We consider surfaces generated by the motion of either a milling tool or a profile curve, and investigate their properties and approximation power. Our ultimate goal is to algorithmically determine a segmentation of freeform surfaces into parts exactly manufacturable by a single sweep. This amounts to highly nonlinear optimization with side conditions originating in both geometry and manufacturing and requires a detailed shape analysis. This is joint work with M. Barton, P. Bo, M. Kilian, D. Plakhotnik, L. Shi and J. Wallner.


Short Bio:


Professor Helmut Pottmann is Professor of Geometry at Vienna University of Technology and head of the 'Geometric Modeling and Industrial Geometry' research group. He has also served as director of the Geometric Modeling and Scientific Visualization Center at King Abdullah University of Science and Technology in Saudi Arabia. His research interests are in Applied Geometry and Visual Computing, in particular Geometric Modeling, Geometry Processing and, most recently, Geometric Computing for Architecture and Manufacturing.


Prof. Peter Hall

University of Bath, UK

 

Talk Title:


Combining Vision and Touch for Robot Sensing


Talk Abstract:


There has been a long standing interest in combining sensor data in Robotics. We consider the specific case of touch and vision. In pursuit of this we make four contributions: 1) we build an inexpensive touch sensor that emulates a single finger tip, 2) we use the "finger" to recognise touched objects, achieving state-of-the-art results, 3) we describe the conditions under which combining vision and touch yields improved recognition performance over either sense alone, and 4) we propose the notion of "learning efficiency" (how accurate a system is for a given number of training samples) and propose that this measure is maximised by considering modalities jointly when neither individual modality dominates over the other.


Short Bio:


Dr. Peter Hall is a Reader (Associate Professor) in the Department of Computer Science, University of Bath, where he is also director of the Media Technology Research Centre. He is interested in automatically processing real photographs and video into art or into 3D animated models. He founded and chaired the Vision, Video, and Graphics (VVG) Network, which ran conferences and meetings and provided small grants to support research in the "convergence" area of VVG. He has served as an elected member of the British Machine Vision Association executive committee since 2003. He is also a member of the steering committee of the Computational Aesthetics series of conferences, and was co-chair of its technical programme in 2009. He is an EPSRC college member and reviews for international funding bodies.

 

Prof. Huaping Liu

Tsinghua University, China

 

Talk Title:


Visual-Tactile Fusion for Robotic Perception


Talk Abstract:


Visual and tactile measurements offer complementary properties that make them particularly suitable for fusion in order to address the robust and accurate recognition of objects. For example, a camera provides rich visual information about objects, but it is often not applicable when the objects are not visually distinguishable. On the other hand, tactile sensors can capture multiple object properties, such as texture, roughness, spatial features, compliance and friction, and therefore provide another important modality for perception. Nevertheless, effective combination of the visual and tactile modalities remains a challenging problem. In this talk, we investigate a widely applicable scenario in grasp manipulation. When identifying an object, the manipulator may see it with a camera and touch it with its hand, yielding a pair of test samples: one image sample and one tactile sample. The manipulator then utilizes this sample pair to identify the object with a classifier constructed from previously collected training samples. However, the training image samples and tactile samples may have been collected separately; in other words, the training samples may be unpaired, while the test samples are paired. This work addresses this practical problem by developing a joint group kernel sparse coding method. Although our focus is on combining visual and tactile information, the described problem framework is common in the robotics community, and the developed method can therefore work with weak pairings between a variety of sensors.


Short Bio:


Huaping Liu received his Ph.D. degree from Tsinghua University in 2004. Currently he is an Associate Professor in the Department of Computer Science and Technology, Tsinghua University. He has published more than 40 papers in international journals, including IEEE TAC, TASE, TIM and TNNLS. His research interests include robot perception, learning and control. He cooperates with several companies on robotic manipulation and perception. He serves as Associate Editor for several journals, including Cognitive Computation, IEEE Robotics & Automation Letters, Neurocomputing, International Journal of Control, Automation and Systems, and International Journal of Advanced Robotic Systems, and for conferences including ICRA and IROS. He also serves as a Program Committee member of the Robotics: Science and Systems (RSS) conference 2016.



Prof. Shaojie Shen

Hong Kong University of Science and Technology

 

Talk Title:


Robust Autonomous Flight in Cluttered Environments


Talk Abstract:


Micro aerial vehicles (MAVs) offer exceptional mobility and perceptual capabilities over ground platforms, making them particularly suitable for search-and-rescue, disaster response, and inspection missions in which the vehicle must be able to navigate through complex three dimensional environments. In such missions, MAVs must be fully autonomous in order to stabilize their fast dynamics and react to changes in environmental conditions. This talk summarizes our recent advances in autonomous MAVs with focus on state estimation and trajectory control: (1) a self-calibrating multi-camera visual-inertial fusion approach with on-the-fly initialization for rapid deployment of MAVs; (2) a dense visual-inertial fusion approach for robust tracking of very aggressive motions; (3) a real-time trajectory generation approach for high-speed autonomous flight through cluttered environments; and (4) experimental testbed and system integration efforts at the HKUST UAV group. Extensive experimental results will be presented throughout the talk.


Short Bio:


Prof. Shaojie Shen received his B.Eng. degree in Electronic Engineering (Honors Research Option) from the Hong Kong University of Science and Technology in 2009. He received his M.S. in Robotics and Ph.D. in Electrical and Systems Engineering in 2011 and 2014, respectively, both from the University of Pennsylvania. He joined the Department of Electronic and Computer Engineering at the Hong Kong University of Science and Technology in September 2014 as an Assistant Professor. His research interests are in the areas of robotics and unmanned aerial vehicles, with a focus on state estimation, sensor fusion, computer vision, localization and mapping, and autonomous navigation in complex environments. His work was a Best Paper finalist at ICRA 2011, and he won the Best Theoretical Paper Award at SSRR 2015.

 

Prof. Jia Pan

City University of Hong Kong

 

Talk Title:


Model-Driven and Data-Driven Multi-agent Navigation


Talk Abstract:


The problem of generating trajectories and behaviors for a large number of agents frequently arises in robotics, graphics, virtual reality, and even in understanding biological systems. This problem includes the generation of pedestrian movements in a shared space and the collaboration between agents governed by social norms, physical principles, and interactions. In this talk, we will discuss two approaches to achieving efficient multi-agent navigation: a model-driven approach that leverages first principles, and a data-driven approach that uses a learning-based framework.


Short Bio:


Jia Pan is an Assistant Professor in the Department of Mechanical and Biomedical Engineering at the City University of Hong Kong. His research area is algorithmic robotics, including motion planning, multi-robot systems, robotic perception, grasping, manipulation, reinforcement learning, and learning from demonstration. He is also interested in combining intelligent software (e.g., deep learning, vision, and interaction approaches) with intelligent hardware (e.g., soft grippers, dexterous manipulators, tactile sensor arrays, and cameras).


Prof. Ralph R. Martin

Cardiff University, UK

Talk Title:


Shape retrieval of non-rigid objects


Talk Abstract:


This talk will consider a new approach to shape retrieval of non-rigid objects. After discussing the relevance of this topic to robotics, it will briefly review existing approaches to rigid and non-rigid object retrieval, and then present a new approach that promises to significantly speed up shape retrieval for non-rigid objects.


Short Bio:


Ralph R. Martin is currently a Professor at Cardiff University. He obtained his PhD degree in 1983 from Cambridge University. He has published more than 200 papers and 12 books, covering such topics as solid and surface modeling, intelligent sketch input, geometric reasoning, reverse engineering, and various aspects of computer graphics. He is a Fellow of: the Learned Society of Wales, the Institute of Mathematics and its Applications, and the British Computer Society. He is on the editorial boards of Computer Aided Design, Computer Aided Geometric Design, Geometric Models, the International Journal of Shape Modeling, CAD and Applications, and the International Journal of CADCAM.




Organizers

Workshop Organizers

Shi-Min Hu, Tsinghua University
Charlie C.L. Wang, Delft University of Technology

Executive Committee and Contact

Fang-Lue Zhang, z.fanglue@gmail.com

Venue


Accommodation

For accommodation, special rates have been negotiated for workshop attendees at the Unisplendour International Communication Center (a 2-minute walk to the conference venue). To make a reservation, please send your name and dates to z.fanglue@gmail.com.

We also suggest several alternative hotels:
1. Wenjin Hotel (http://www.booking.com/hotel/cn/wen-jin-beijing.html): 15-minute walk to the conference venue, approx. 1000 CNY/night;
2. Holiday Inn Beijing Haidian (http://www.booking.com/hotel/cn/holiday-inn-beijing-haidian.html): 15-minute walk to the conference venue, approx. 500 CNY/night;
3. Hejia Inn (http://www.booking.com/hotel/cn/hejia-hotel-beisihuan.html): 10-minute walk to the conference venue, approx. 350 CNY/night.



Venue

Workshop on Smart Robotics 2016 will take place in the Lecture Hall of the Future Internet Technology (FIT) Building. The FIT Building is located at the South-East Gate of Tsinghua University.


Future Internet Technology Building

The detailed position of the FIT Building can be found on Google Maps or Tencent Maps.



Ground Transportation

 

The best way to travel from either the airport or the railway station to the conference venue is by taxi.

If you arrive at Beijing Capital International Airport:
A taxi ride to the South-East Gate of Tsinghua University takes ~45 minutes and costs ~100 CNY (~13 USD).

If you take the railway to Beijing and arrive at:
1. Beijing Railway Station: the taxi ride to the South-East Gate of Tsinghua University takes ~45 minutes and costs ~60 CNY (~8 USD).
2. Beijing West Railway Station: the taxi ride to the South-East Gate of Tsinghua University takes ~30 minutes and costs ~40 CNY (~5 USD).

From the South-East Gate of Tsinghua University to the FIT Lecture Hall:
Step 1. Walk through the South-East Gate of Tsinghua University. The FIT Building is the first building on your left.
Step 2. Enter the FIT Building and go to the second floor. The Lecture Hall is located on the west side of the second floor.

You can print either of the following cards and show it to the taxi driver; each card contains the Chinese address of:
1. The South-East Gate of Tsinghua University
2. Unisplendour International Communication Center

Copyright © WSR2016 - All Rights Reserved    Contact Us: z.fanglue (at) gmail.com