Frontier of Embodiment Informatics: ICT and Robotics, Top Global University Project (Waseda University ICT & Robotics Unit)

News

International Symposium on Machine Intelligence for Future Society 2019

Date: September 9–10, 2019
Venue: Waseda University, Nishi-Waseda Campus, Building 63, 2F, Room 04
Fee: Free (advance registration requested)
Organizer: Waseda University SGU (Top Global University Project) ICT & Robotics Unit
Co-sponsors: The Japanese Society for Artificial Intelligence (JSAI)
The Robotics Society of Japan (RSJ)
The Society of Instrument and Control Engineers (SICE)
Waseda University Future Robotics Organization

Photo Gallery

Abstract

Movies and novels have countless times depicted robotics technology intuitively integrated into human society, enhancing and assisting humans in all possible scenarios. Such technology could help us counter problems such as labor shortages and the low quality of life of mobility-impaired and brain-injured individuals. We believe that machine intelligence is a key factor in achieving this technological advancement.

Therefore, in this Symposium on Machine Intelligence for Future Society, we would like to present the audience with state-of-the-art research from around the world that will help realize such technology, and to foster discussion on the topic between the speakers and the audience. We believe this will help accelerate the research needed to achieve such technological advancement in the near future.

Program

September 9, 2019

10:00 ~ Registration open
10:30 – 10:40 Welcome Address:
Prof. Hironori Kasahara, Vice President, Waseda University
10:40 – 10:50 Special Address:
Dr. Hideyuki Tokuda, President, National Institute of Information and Communications Technology
10:50 – 11:00 Opening Address
11:00 – 11:45 Invited Speaker 1:
Prof. Tetsuya Ogata, Waseda University
Spotlight chairperson:
Dr. Kuniyuki Takahashi, Preferred Networks, Inc.
11:45 – 12:55 Lunch break
13:00 – 13:45 Invited Speaker 2:
Prof. Gordon Cheng, Technical University of Munich, Germany
Spotlight chairperson:
Prof. Takamitsu Matsubara, Nara Institute of Science and Technology
13:45 – 14:15 Coffee Break & Demo Session 1
14:15 – 15:00 Invited Speaker 3:
Prof. Laurel Riek, University of California San Diego, USA
Spotlight chairperson:
Prof. Gentiane Venture, Tokyo University of Agriculture and Technology
15:00 – 15:45 Invited Speaker 4:
Prof. Björn W. Schuller, Imperial College London, UK
Spotlight chairperson:
Prof. Gentiane Venture, Tokyo University of Agriculture and Technology
15:45 – 16:15 Coffee Break & Demo Session 2
16:15 – 17:00 Invited Speaker 5:
Prof. Michael Beetz, University of Bremen, Germany
Spotlight chairperson:
Prof. Toshiaki Tsuji, Saitama University
17:20 – 19:00 Banquet
Welcome Address by Prof. Tetsuya Ogata.
1F, Building 63
(All participants are welcome to join)

September 10, 2019

10:30 ~ Registration open
11:00 – 11:45 Invited Speaker 6:
Prof. Yiannis Aloimonos, University of Maryland, USA
Spotlight chairperson:
Prof. Tetsunari Inamura, National Institute of Informatics
11:45 – 12:55 Lunch Break
13:00 – 13:45 Invited Speaker 7:
Prof. Dana Kulic, Monash University, Australia
Spotlight chairperson:
Prof. Hiroki Mori, Waseda University
13:45 – 14:30 Invited Speaker 8:
Prof. Yasuo Kuniyoshi, University of Tokyo
Spotlight chairperson:
Prof. Hiroki Mori, Waseda University
14:30 – 15:00 Coffee Break & Demo Session 3
15:00 – 15:45 Invited Speaker 9:
Prof. Koh Hosoda, Osaka University
Spotlight chairperson:
Ms. Namiko Saito, Waseda University
15:45 – 16:00 Coffee Break
16:00 – 16:30 Round table discussion
16:30 – 16:40 Closing Address:
Prof. Shigeki Sugano, Dean of the School of Creative Science and Engineering and Leader of the TGU project "Frontier of Embodiment Informatics: ICT & Robotics"

Invited Speakers

Professor Björn W. Schuller

Imperial College London, United Kingdom
University of Augsburg, Germany

Emotionally Intelligent Robots: Soon Coming Near You?

The fast-paced advances in deep learning are raising expectations of what Artificial Intelligence will soon be able to accomplish. Movies like Ex Machina contribute to the anticipation of human-level or beyond emotional and social intelligence in oncoming humanoid robots. But where does Affective Computing – the science behind it – stand these days? At a time when the first professional services make use of emotional and personality profiling behind the scenes, criticism arises as to how well such technology is prepared for arbitrary use-cases in the wild. In this light, results of the Audio/Visual Emotion Challenge and Interspeech ComParE, both organized annually by the presenter for a decade, will set the floor. We will then look into automatic deep end-to-end learning as the state-of-the-art technology to empower tomorrow's robots with emotional and social skills. Cooperative and reinforcement learning will subsequently be introduced as avenues towards overcoming the field's major bottleneck of label sparseness. Examples from European projects will include robots and agents in the domain of health and care. These will fire the discussion on the next goals, including explainability and "green" processing. Expect your robot to know you well sooner rather than later.

Biography

Björn W. Schuller received his diploma, doctoral degree, habilitation, and Adjunct Teaching Professorship in Machine Intelligence and Signal Processing, all in EE/IT, from TUM in Munich, Germany. He is Full Professor of Artificial Intelligence and Head of GLAM – the Group on Language Audio & Music – at Imperial College London, UK; Full Professor and ZD.B Chair of Embedded Intelligence for Health Care and Wellbeing at the University of Augsburg, Germany; co-founding CEO and current CSO of audEERING, an Audio Intelligence company based near Munich and in Berlin, Germany; and permanent Visiting Professor at HIT, China, amongst other professorships and affiliations. Previous positions include Full Professor at the University of Passau, Germany, Researcher at Joanneum Research in Graz, Austria, and Researcher at CNRS-LIMSI in Orsay, France. He is a Fellow of the IEEE and Golden Core Awardee of the IEEE Computer Society, President Emeritus of the AAAC, and a Senior Member of the ACM. He has (co-)authored 800+ publications (25k citations, h-index = 73), was Editor-in-Chief of the IEEE Transactions on Affective Computing, is General Chair of ACII 2019, ACII Asia 2018, and ACM ICMI 2014, and a Program Chair of Interspeech 2019, ACM ICMI 2019/2013, ACII 2015/2011, and IEEE SocialCom 2012, amongst manifold further commitments and service to the community. His 30+ awards include having been honoured as one of 40 extraordinary scientists under the age of 40 by the WEF in 2015. He has served as Coordinator/PI in 15+ European projects, is an ERC Starting Grantee, and a consultant for companies such as Barclays, GN, Huawei, and Samsung.

Professor Dana Kulic

Monash University, Australia

Learning from Human-Robot Interaction

As robots enter human environments, they will need to interact with humans in a variety of roles: as students, teachers, collaborators, and assistants. In these roles, robots will need to adapt to users' individual preferences and capabilities, which may not be known before the interaction. In this talk, I will describe approaches for robot learning during interaction, considering robots in different roles and in a variety of applications, including rehabilitation, collaboration in industrial settings, education, and entertainment.

Biography

Dana Kulić received the combined B.A.Sc. and M.Eng. degree in electro-mechanical engineering and the Ph.D. degree in mechanical engineering from the University of British Columbia, Canada, in 1998 and 2005, respectively. From 2006 to 2009, Dr. Kulić was a JSPS Post-doctoral Fellow and a Project Assistant Professor at the Nakamura-Yamane Laboratory at the University of Tokyo, Japan. In 2009, Dr. Kulić established the Adaptive Systems Laboratory at the University of Waterloo, Canada, conducting research in human-robot interaction, human motion analysis for rehabilitation, and humanoid robotics. Since 2019, Dr. Kulić has been a professor at Monash University, Australia. Her research interests include robot learning, humanoid robots, human-robot interaction, and mechatronics.

Professor Gordon Cheng

Technical University of Munich
Chair of Cognitive Systems, Germany

Towards multi-intelligences: are we there yet?

In this talk, I will present different aspects of intelligence that I have worked on over the years, illustrated by several cases: from morphology-based intelligence to sensory-motor learning to cognitive reasoning. Although many of these cases have been shown to be successful examples of highly specialised forms of intelligence, is this really the way forward toward intelligence? I would now like to raise these questions: can we continue to engineer smart machines with the current methods, compartmentalising intelligence into closed singular forms? Or do we need a new methodology, a real paradigm shift, in thinking about intelligence as a whole?

Biography

Gordon Cheng holds the Chair of Cognitive Systems and is the Founder and Director of the Institute for Cognitive Systems in the Faculty of Electrical and Computer Engineering at the Technical University of Munich, Munich, Germany. He is also the Head of the Center of Competence Neuro-Engineering in the Department of Electrical and Computer Engineering. Prof. Cheng is the Program Director of the Elite Master of Science program in Neuroengineering (MSNE) of the Elite Network of Bavaria.

Formerly, he was the Head of the Department of Humanoid Robotics and Computational Neuroscience, ATR Computational Neuroscience Laboratories, Kyoto, Japan, and the Group Leader for the JST International Cooperative Research Project (ICORP) "Computational Brain". He has also been designated as a Project Leader/Research Expert for the National Institute of Information and Communications Technology (NICT) of Japan. He is involved in a large number of major European Union projects (e.g. RobotCub, Factory-in-a-Day, CONTEST-ITN, RoboCom-Flagship).

Gordon Cheng is the co-inventor of approximately 20 patents and author of approximately 300 technical publications, proceedings, editorials and book chapters. He has been named IEEE Fellow 2017 for “contributions in humanoid robotic systems and neurorobotics”.

Source: https://www.ics.ei.tum.de/en/people/cheng/

Professor Koh Hosoda

Japan
Osaka University
Graduate School of Engineering Science
Department of Systems Innovation, Division of Systems Science

Soft Humanoid Robotics

We have been working on hard humanoid robots consisting of rigid links and electric motors or hydraulic actuators, and have succeeded in many tasks by utilizing precise control. Looking at humans, however, our bodies consist of soft material and our control is very imprecise; nevertheless, humans behave adaptively and achieve amazing task performance. One of the underlying mysteries is the soft morphology of the body. Our bodies have been well designed through long evolution, and their morphology plays a great role in realizing adaptive behavior; the soft musculoskeletal structure, in particular, is thought to be very important. I will talk about our endeavors to understand the role of the soft body by building musculoskeletal humanoid robots, introduce our humanoid robots, and describe a series of trials to generate hypotheses about the soft body. Such hypotheses will help us build far more adaptive humanoid robots in the next generation.

Biography

Koh Hosoda received his Ph.D. degree in Mechanical Engineering from Kyoto University, Japan in 1993. He was an assistant professor of Mechanical Engineering Department from 1993 to 1997, and an associate professor of Graduate School of Engineering from 1997 to 2010, at Osaka University. He was a guest professor in Artificial Intelligence Laboratory, University of Zurich from Apr. 1998 to Mar. 1999. He was a group leader of JST Asada ERATO Project from 2005 to 2010. From 2010 to 2014, he was a professor of Graduate School of Information Science and Technology, Osaka University. Since 2014, he has been a professor of Graduate School of Engineering Science, Osaka University.

Professor Laurel Riek

The University of California, San Diego, USA

Healthcare Robotics: A Vision for Personalized Healthcare at Home

As our society ages and population healthcare needs increase, many are looking to robots as a means to help fill care gaps. To work alongside people, particularly at their most vulnerable, robots need the ability to dynamically and quickly interpret human activities, understand context, and take appropriate, safe, and useful actions. They also need to learn from and adapt to people long term. My research focuses on building robots that autonomously solve problems in human environments, particularly those that are safety critical (e.g., hospitals, homes, and work sites). We have been designing new methods to support how robots can personalize their behavior with co-located humans by exploring new research directions in perception, coordination dynamics, and long term learning. Our primary application focus is healthcare, with recent work in neurorehabilitation, dementia caregiving, and emergency medicine. This talk will describe several of our recent projects in this space.

Biography

Dr. Laurel Riek is a professor in Computer Science and Engineering at the University of California, San Diego, with joint appointments in the Department of Emergency Medicine and Contextual Robotics Institute. Dr. Riek directs the Healthcare Robotics Lab and leads research in human-robot teaming, computer vision, and healthcare engineering, with a focus on autonomous robots that work proximately with people. Riek’s current research interests include long term learning, robot perception, and personalization; with applications in critical care, neurorehabilitation, and manufacturing. Dr. Riek received a Ph.D. in Computer Science from the University of Cambridge, and B.S. in Logic and Computation from Carnegie Mellon. Riek served as a Senior Artificial Intelligence Engineer and Roboticist at The MITRE Corporation from 2000-2008, working on learning and vision systems for robots, and held the Clare Boothe Luce chair in Computer Science and Engineering at the University of Notre Dame from 2011-2016. Dr. Riek has received the NSF CAREER Award, AFOSR Young Investigator Award, Qualcomm Research Award, and was named one of ASEE’s 20 Faculty Under 40.

Professor Michael Beetz

Head of the Institute for Artificial Intelligence (IAI)
Faculty for Mathematics & Informatics
University of Bremen, Germany

Knowledge Representation and Reasoning for Cognition-enabled Robot Manipulation

Robotic agents that can accomplish manipulation tasks with the competence of humans have been the holy grail for AI and robotics research for more than 50 years. However, while the fields made huge progress over the years, this ultimate goal is still out of reach. I believe that this is the case because the knowledge representation and reasoning methods that have been proposed in AI so far are necessary but still too abstract. In this talk I propose to endow robots with the capability to mentally “reason with their eyes and hands”, that is to internally emulate and simulate their perception-action loops based on photo-realistic images and faithful physics simulations, which are made machine-understandable by casting them as virtual symbolic knowledge bases. These capabilities allow robots to generate huge collections of machine-understandable manipulation experiences, which they can then generalize into commonsense and intuitive physics knowledge applicable to open manipulation task domains. The combination of learning, representation, and reasoning will equip robots with an understanding of the relation between their motions and the physical effects they cause at an unprecedented level of realism, depth, and breadth, and enable them to master human-scale manipulation tasks. This breakthrough will be achievable by combining simulation and visual rendering technologies with mechanisms to semantically interpret internal simulation data structures and processes.

Biography

Michael Beetz is a professor of Computer Science at the Faculty for Mathematics & Informatics of the University of Bremen and head of the Institute for Artificial Intelligence (IAI). He received his diploma degree in Computer Science with distinction from the University of Kaiserslautern. His MSc, MPhil, and PhD degrees were awarded by Yale University in 1993, 1994, and 1996, and his Venia Legendi by the University of Bonn in 2000. In February 2019 he received an Honorary Doctorate from Örebro University. He was vice-coordinator of the German cluster of excellence CoTeSys (Cognition for Technical Systems, 2006–2011), coordinator of the European FP7 integrating project RoboHow (web-enabled and experience-based cognitive robots that learn complex everyday manipulation tasks, 2012–2016), and is the coordinator of the German collaborative research centre EASE (Everyday Activity Science and Engineering, since 2017). His research interests include plan-based control of robotic agents, knowledge processing and representation for robots, integrated robot learning, and cognition-enabled perception.

Professor Tetsuya Ogata

Japan
Waseda University
School of Fundamental Science and Engineering
Department of Intermedia Art and Science

Deep Learning in Robots from the perspective of Cognitive Developmental Robotics

Systems using deep learning show great performance in both recognition and generation of images, speech, language, and more. In this talk, I introduce our applied research based on the concept of "predictive learning" for multi-modal integration, robot behavior learning, and related tasks. I also introduce the concept of "cognitive developmental robotics", which studies the developmental process of real-world cognition mechanisms, as a way to enhance deep learning studies with robotics.

Biography

Tetsuya Ogata received the B.S., M.S., and D.E. degrees in mechanical engineering from Waseda University in 1993, 1995, and 2000, respectively. He was a Research Associate with Waseda University from 1999 to 2001. From 2001 to 2003, he was a Research Scientist with the RIKEN Brain Science Institute. From 2003 to 2012, he was an Associate Professor with the Graduate School of Informatics, Kyoto University. Since 2012, he has been a Professor with the Faculty of Science and Engineering, Waseda University. From 2009 to 2015, he was a JST (Japan Science and Technology Agency) PRESTO Researcher. He is currently a Joint-Appointed Research Fellow with the Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology. His current research interests include human-robot interaction, dynamics of human-robot mutual adaptation, and inter-sensory translation in robot systems with neuro-dynamical models. He received the IBM Faculty Award in 2017.

Professor Yasuo Kuniyoshi

Japan
The University of Tokyo
Graduate School of Information Science and Technology
Department of Mechano-Informatics, Division of Human and Machine Informatics

Towards Open-Ended Fusion of Autonomy and Sociality for Human Centered AI/Robots via Emergence and Development From Embodied Interaction

For AI systems and robots to work for the benefit of humans without harm or uneasiness, open-ended autonomy, sociality, and their fusion are essential. Realizing them requires emergence and development from embodied interaction.

In this talk, I will first explain why this is so, and then present the current state of the art of this approach. Concrete examples of instantaneous adaptive behavior, emergent behavior, and embodied brain development in simulated and real robots will be presented, followed by a discussion of how we can proceed further.

Biography

Yasuo Kuniyoshi received his Ph.D. from The University of Tokyo in 1991 and joined the Electrotechnical Laboratory, AIST, MITI, Japan. From 1996 to 1997 he was a Visiting Scholar at the MIT AI Lab. In 2001 he was appointed Associate Professor, and in 2005 full Professor, at The University of Tokyo. He has also been the Director of the RIKEN CBS-Toyota Collaboration Center since 2012 and the Director of the Next Generation Artificial Intelligence Research Center of The University of Tokyo since 2016.

He has published over 300 refereed academic papers and received the IJCAI 93 Outstanding Paper Award, the Gold Medal "Tokyo Techno-Forum21" Award, Best Paper Awards from the Robotics Society of Japan, the IEEE ROBIO T.-J. Tarn Best Paper Award in Robotics, the Okawa Publications Prize, and other awards.

He is a Fellow of the Robotics Society of Japan, President of the Japan Society of Developmental Neuroscience, and a member of the IEEE, the Science Council of Japan (affiliate), the Japanese Society for Artificial Intelligence, the Information Processing Society of Japan, and the Japanese Society of Baby Science.

For further information about his research, visit http://www.isi.imi.i.u-tokyo.ac.jp/ and https://www.ai.u-tokyo.ac.jp/

Professor Yiannis Aloimonos

The University of Maryland, USA

MIND DESIGN: The theory of action grammars

Actions (what an intelligent autonomous system does) are the fundamental building blocks of the mind of the system. Actions, however, reside in different spaces: the visual space (actions seen), the auditory space (actions heard), the sensori-motor space (actions performed), and the language space (actions talked about). To achieve intelligence, we must be able to map these spaces to each other. The grammatical structure of these spaces points to a new approach. Taking advantage of the syntax of action, we model action as a formal system, namely as a program in a special language named AL (Action Language). We focus on manipulation actions, and we conceive such mappings as compilers, interpreters, or translators that take action descriptions in one space and turn them into the equivalent description in another space.

In this talk, I will outline a system called VALC (Visual AL Compiler) that automatically translates visual observations of complex manipulation actions into AL programs. I will also outline MALC (Motor AL Compiler), which automatically translates AL programs into a motor execution plan for any robot whose specification is provided. MALC will also allow new ways of approaching reinforcement learning and provide a universal language for cognitive robot programming. Furthermore, I will introduce VALD (Visual AL Debugger), an Augmented Reality system that guides a human through the course of an AL program execution, providing visual instructions and feedback upon program violations. Finally, I will briefly describe VALT (Video Action Language Translator), a multimedia system that produces semantic descriptions (including descriptions in English) of videos containing manipulation actions.

Biography

Yiannis Aloimonos is Professor of Computational Vision and Intelligence at the Department of Computer Science, University of Maryland, College Park, and the Director of the Computer Vision Laboratory at the Institute for Advanced Computer Studies (UMIACS). He is also affiliated with the Institute for Systems Research and the Neural and Cognitive Science Program. He was born in Sparta, Greece and studied Mathematics in Athens and Computer Science at the University of Rochester, NY (PhD 1990). He is interested in Active Perception and the modeling of vision as an active, dynamic process for real time robotic systems. For the past five years he has been working on bridging signals and symbols, specifically on the relationship of vision to reasoning, action and language.


Venue

Building 63, 2F, Room 04, Nishi-Waseda Campus

Waseda University
3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555
Click here for a map of the Nishi-Waseda Campus

Dates
Mon, September 9 – Tue, September 10, 2019

Place

Waseda University, Nishi-Waseda Campus, Building 63, 2F, Room 04

Posted

Wed, 10 Jul 2019
