January 31, 2020
A conversation was held on the impact artificial intelligence (AI) will have on society and on the social contributions that the collaboration between Mila (Montreal Institute for Learning Algorithms) and Hitachi aims to achieve. The participants were Yoshua Bengio, professor at the University of Montreal and founder of Mila, and Yuichi Yagawa, Deputy General Manager of the Center for Technology Innovation, Research & Development Group, Hitachi, Ltd. The discussion was moderated by Tatsuhiko Kagehiro, Department Manager of the Media Intelligent Processing Research Department, Center for Technology Innovation - Digital Technology.
Yoshua
Mila started as a research group with my students. The University of Montreal believed in our research direction and recruited other professors of deep learning at a time when it was not a popular topic. We were able to have an impact in the scientific world with our first papers on deep learning, and we began to attract good students and more professors. The government realized that there was something important going on and we got major funding. In fact, the Canadian government was the first in the world with an artificial intelligence strategy.
Yuichi
Acquiring the right talent to support your research must be very important. How do you go about that?
Yoshua
It's all about critical mass and having very, very good research. Essentially, it's a snowball effect – if your researchers publish interesting papers, you attract other researchers and good students. Our focus on research excellence has been the critical factor in our ability to attract talent.
We really moved the needle here. Ten years ago, Canada was experiencing a brain drain where our best PhD graduates would leave and go to the U.S. Today, because of the critical mass we have built and the investments we've made, students are much more likely to stay here to work. That's really good for the local economy. We are creating a community where people who have studied here want to stay, and they feel good about it. We are aware of the social impact of our work. Montreal is different from other communities, like Silicon Valley, in its human-centered Canadian values and ethics.
Tatsuhiko
Great. You have also founded an AI startup, Element AI, that researches cutting-edge AI technologies and advises other startups, correct?
Yoshua
Right. Again, culture is important. The reason people are coming to Mila or to Element AI is that they want to be on a team with technical strength, but also one with a community spirit, a sense that we're doing things for a good reason. It's not just to create the next gadget, but to collaborate to have a positive impact on the world. People who come to the community know that they will learn, work for a while, and then use their experience here to easily find a job elsewhere. It's a high standard that is well recognized.
Yuichi
At Hitachi, we believe in collaborative creation, or co-creation. Like Mila, we focus on societal issues, which can be complicated and difficult to solve. Are you also finding that collaboration improves innovation?
Yoshua
Yes. I totally agree with your co-creation approach and have seen it work in many contexts. Deploying AI for social-good applications takes a multidisciplinary effort. We contribute our expertise in machine learning, but we work with experts in other fields. Recently we've been working on using machine learning to tackle climate change. Depending on the area of application – for example, for climate models – we need other experts. Otherwise we're just going to reinvent the wheel and not do it very well. Collaboration is especially important when you want to build something with AI that's going to have a large impact.
Deploying technology is different from inventing technology. It involves social skills, policy, logistics and expertise from many sources. Collaboration, even in basic research, is important because sharing diverse ways of thinking really helps.
Yuichi
Interesting, thank you. Hitachi's mission for innovation is to contribute to society by developing superior, original products that improve quality of life and add value for customers. At the same time, we focus on environmental and economic values. AI is a key differentiator, of course. How does our approach sound to you, in terms of what you're doing?
Yoshua
I see a lot of parallels with what Hitachi is doing and our activities at Mila and Element AI. AI is now not just in universities, but is being employed in society. We agree with you that researchers, engineers and developers share a social responsibility about how AI is going to be deployed so that it can be positive for humanity. We must try to avoid the misuse of AI and work with governments to help them define new social norms, laws and regulations to handle the concerns that people can have, whether it's about privacy or the misuse of AI to control people, or for a military application that can be dangerous for world stability. Researchers cannot just do math and engineering. They must collaborate with social scientists, philosophers and even regular citizens.
Yuichi
Hitachi R&D has people within the organization with many kinds of talents. We structured our R&D group into three pillars: the Global Center for Social Innovation (CSI), the Center for Technology Innovation (CTI), and the Center for Exploratory Research (CER). Each center drives advances in design, technology and science, including social science. And our work centers around collaborative creation, working together and with customers and other members of society.
We launched Kyōsō-no-Mori, a new research initiative for open co-creation. Our facility is located in a large forest where the fullness of nature helps to open our minds and those of our customers to new ideas. This is where we focus on co-creation, technology innovation and exploratory research that creates value for our customers, and for society as a whole. Our research used to take place behind closed doors, but now we have opened Kyōsō-no-Mori to our customers and partners.
Yoshua
You have opened the ivory tower. Some of the same ideals apply to Mila because it is an academic research center that is also focused on benefitting society. We, too, have opened our doors, and we invite close interaction with companies that are building products and services based on AI.
Yuichi
Very good. Our customers and partners come together for open discussions. We use our understanding of societal issues, combined with our experience with AI and other technologies, to help craft better solutions. Explainability and ethics are also required in AI projects. Humans will use AI-embedded systems, which must be ethical and trustworthy. What is required for ethical and trustworthy AI? Our answer is transparency, meaning that the AI can explain the rationale behind the system's decisions.
Yoshua
I think a good way to set the goal for AI explainability is to think about how humans can be good at explaining some kinds of decisions. But explaining other kinds of decisions, or other aspects of a decision, can be difficult because the way that humans come to a conclusion is intuitive and unconscious. We should eventually be able to build AI systems that can, just like a human, tell a story about what they are doing and why. We can provide an outline of the explanation, not the full explanation, but humans can take advantage of that. That's how, for example, we teach each other. I don't need to show you all the details of what's going on in my neurons, I just tell a story that you can understand. It's possible to increase explainability, and there's a lot of research behind that.
Tatsuhiko
You participated in crafting the Montreal Declaration for the Responsible Development of Artificial Intelligence. What is the main purpose of the declaration? And what kind of response did it get from the university, the government and local companies?
Yoshua
The main purpose of the declaration is to help draw the line between what is right and what is wrong, to apply ethics to AI and how it's going to be deployed throughout the world. The declaration was drafted by a group of scholars. We organized events for regular citizens to review the first version of the declaration and give us their opinions. Initially, we had seven principles in the declaration, and after the feedback it grew to 10 principles because we realized people had valid concerns that we hadn't really thought about. There are many societal and ethical values in play, so we tried to come to a consensus about where we should draw the line. The objective was to provide guidance for actual legal work that now has to happen in drafting legislation and regulations.
The declaration received a lot of media attention as well as support from government, companies, and the university, which organized workshops about it. All participants made a public declaration of commitment to the principles in the Montreal Declaration. You can go to the website and sign it, either for yourself or for an organization.
Tatsuhiko
I would like to hear your thoughts about what AI can do and cannot do. What’s the next challenge? And, what’s next for deep learning?
Yoshua
I think the process of science is to accumulate new ideas and concepts about understanding intelligence. Deep learning has given us a number of beautiful principles, but it's far from human-level intelligence. Some of the limitations that have to do with explainability concern the ability of deep nets to extract high-level concepts that they can manipulate, reason with, plan with, and so on. We should put more constraints on representations so that they capture some of the properties humans use in their thinking to define high-level concepts, and identify the dependencies between those concepts.
We need the agent perspective, meaning that we do not understand the world from the position of a static learner. Rather, we learn from the perspective of an agent that explores the environment, acts, and sees the effect of these actions to understand cause and effect. This kind of perspective, which is centered on an agent, allows us to build machines that will be more robust. Right now, one of the limitations of AI systems is that they are designed and then trained in the lab, and then when we deploy them in the field we don't get the same performance because the data in the field does not come from the same distribution as the data in the lab.
This is something we can build in when we train the system. The way to build it in is to first help the system build a model of the world, and to train it so that it will adapt to changes in distribution. If you want a system to work well, you have to train it to achieve that. But this is not just the usual training. It's training so that when the conditions vary, the learning continues. This is the kind of thinking that we're pushing here in my group.
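To make that idea a little more concrete, here is a minimal, purely illustrative sketch (the toy environment, model, and all names are invented for this example and are not drawn from Mila's or Hitachi's work). It shows a PyTorch-style training loop in which the data distribution drifts over time, so the model keeps learning in the field instead of stopping after a fixed lab phase:

```python
# Illustrative sketch only: training that continues as conditions vary.
import torch
import torch.nn as nn

def shifted_batch(step, n=64):
    """Toy environment whose input-output relation drifts over time (hypothetical)."""
    w = 1.0 + 0.01 * step                     # the "world" slowly changes
    x = torch.randn(n, 1)
    y = w * x + 0.1 * torch.randn(n, 1)
    return x, y

model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(2000):
    x, y = shifted_batch(step)                # data from the current, shifted distribution
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()                           # learning continues as the distribution drifts,
    opt.step()                                # rather than stopping after a fixed "lab" phase
```

The same principle appears in more sophisticated forms such as meta-learning and continual learning, where the system is explicitly trained to adapt quickly when the distribution changes.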
Tatsuhiko
What is the most difficult challenge in freeing AI systems from these limitations?
Yoshua
The most difficult challenge is to build machines that understand a world that is extremely complicated. We would like computers to build an abstract model, and then be able to reason with this model in a way that's computationally cheap, like how humans reason. Humans don't explore all the possible solutions. We have a heuristic or approximate way of focusing on the few elements that matter. That is a capability that we don't have today in computers, but that we will need in order to match human intelligence.
Yuichi
Excellent point. We are pleased that Hitachi will be collaborating with Mila to do joint research on AI and deep learning. We are impressed with the advanced technology at Mila, and your vision. We look forward to sharing new concepts and AI technology with you to solve societal issues and challenges for our customers.
Yoshua
Let me add one point. One thing I like about Japanese companies is that you're willing to take a long-term view. For AI, this is important. The field is moving very fast, and you need a long-term vision for a pipeline of innovation, starting from basic research and simulated environments, for example, to develop ideas that can scale to products. But it's not just research and development, it's the work to go from the math to a product. Hopefully, we can help some of your engineers and scientists get closer to state-of-the-art research in machine learning. Together, we can do work that has a positive impact on society.
Yuichi
That is our hope too. We’d like to make positive changes that benefit society as a whole. And I appreciate that you took time for today’s enlightening conversation. Thank you.
Hitachi is working with organizations around the world to co-create innovative digital solutions for complex issues facing society. Focusing on specific challenges, we are developing solutions that provide social, economic and environmental value, channeling our knowledge and experience with that of our customers, partners, and academic research centers into advanced technologies and solutions for a better future for all.
* Affiliations and titles are as of the time of publication.
Yoshua Bengio
Professor of Computer Science
University of Montreal
Yoshua Bengio is a Canadian researcher specializing in artificial intelligence, and a pioneer in deep learning.
He was born in France in 1964, studied in Montreal, obtained his PhD in computer science from McGill University in 1991 and completed post-doctoral studies at MIT.
Since 1993, he has been a professor in the Department of Computer Science and Operational Research at the Université de Montréal. He is also Scientific Director of Mila, Scientific Director of IVADO, and Canada Research Chair in Statistical Learning Algorithms. He is a recipient of the 2018 A.M. Turing Award.
His main research ambition is to understand principles of learning that yield intelligence. He supervises a large group of graduate students and post-docs. His research is widely cited (over 220,000 citations found by Google Scholar in October 2019, with an h-index over 149, and rising fast).
Yuichi Yagawa, D.Eng.
General Manager, Central Research Laboratory,
Deputy GM, Center for Technology Innovation, Research & Development Group, Hitachi, Ltd.
Yuichi Yagawa is currently the General Manager of the Central Research Laboratory (CRL), and Deputy General Manager of the Center for Technology Innovation (CTI), responsible for research in digital technology, electronics and healthcare. His background is in computer science, specializing in system architecture, data management, storage systems, AI, and human-machine interfaces. Yagawa began his career as a corporate researcher at Hitachi in 1991 at CRL. He transferred to the RAID Systems Business Division, where he proposed new concepts such as Edge-Core and Cloud on Ramp, leading the development and commercialization of key storage systems such as Hitachi Virtual File Platform and Hitachi Data Ingestor. He was appointed to his current position in April 2017.
Yagawa received his M.Eng. (Electrical) in 1991 and his D.Eng. from the Graduate School of Information, Production and Systems in 2019 (both from Waseda University), and is a member of the Institute of Electrical Engineers of Japan (IEEJ).
Tatsuhiko Kagehiro, Ph.D.
Department Manager, Media Intelligent Processing Research Department,
Center for Technology Innovation - Digital Technology,
Research & Development Group, Hitachi, Ltd.
Tatsuhiko Kagehiro heads the Media Intelligent Processing Research Department at Hitachi, Ltd., where he leads research teams working on image and sound analysis, natural language processing, and wearable data analysis. He also heads the AI Laboratory, a cross-sectional research organization within the Research & Development Group that develops AI platforms for wide deployment across various Hitachi solutions, especially in public safety, industrial IoT, and office systems.
Kagehiro has over 50 patents and has written many papers on image recognition and machine learning. He is also the author of the chapter entitled "Multiple Hypotheses Document Analysis" in "Machine Learning in Document Analysis and Recognition," S. Marinai and H. Fujisawa (eds.), Springer, 2006. His research has been recognized through several awards, including the 2009 Okochi Memorial Technology Prize presented by the Okochi Memorial Foundation. He is a visiting associate professor at the University of Tsukuba, from which he received a Ph.D. in Intelligent Interaction Technologies.