Speakers

Prof. Fei-Fei Li / Stanford HAI

One of the most ancient sensory functions, vision emerged in prehistoric animals more than 540 million years ago. Since then, animals, empowered first by the ability to perceive the world and then to move around and change it, have developed increasingly sophisticated intelligence, culminating in human intelligence. Throughout this process, visual intelligence has been a cornerstone of animal intelligence, and enabling machines to see is hence a critical step toward building intelligent machines. In this talk, I will explore a series of projects with my students and collaborators, all aiming to develop intelligent visual machines using machine learning and deep learning methods. I begin by explaining how neuroscience and cognitive science inspired the development of algorithms that enable computers to see what humans see. Then I discuss intriguing limitations of human visual attention and how we can develop computer algorithms and applications to help, in effect allowing computers to see what humans don’t see. Yet this leads to important social and ethical considerations about what we do not want to see, or do not want to be seen, inspiring work on privacy computing in computer vision as well as on addressing data bias in vision algorithms. Finally, I address the tremendous potential and opportunity of smart cameras and robots that help people see or do what we want machines’ help in seeing or doing, shifting the narrative from AI’s potential to replace people to AI’s opportunity to help people. I present our work on ambient intelligence in healthcare and on household robots as examples of AI’s potential to augment human capabilities. Last but not least, these cumulative observations from developing AI from a human-centered perspective have led to the establishment of Stanford’s Institute for Human-Centered AI (HAI). I will showcase a small sample of interdisciplinary projects supported by HAI.
Speaker Bio: Dr. Fei-Fei Li is the inaugural Sequoia Professor in the Computer Science Department at Stanford University and Co-Director of Stanford’s Human-Centered AI Institute. She served as Director of Stanford’s AI Lab from 2013 to 2018, and during her sabbatical from Stanford from January 2017 to September 2018 she was Vice President at Google and Chief Scientist of AI/ML at Google Cloud. Since then she has served as a board member or advisor for various public and private companies. Dr. Li obtained her B.A. in physics from Princeton in 1999 with High Honors and her PhD in electrical engineering from the California Institute of Technology (Caltech) in 2005. She also holds an honorary doctorate from Harvey Mudd College. Her current research interests include cognitively inspired AI, machine learning, deep learning, computer vision, robotic learning, and AI+healthcare, especially ambient intelligent systems for healthcare delivery. In the past she has also worked on cognitive and computational neuroscience. Dr. Li has published more than 300 scientific articles in top-tier journals and conferences in science, engineering, and computer science. She is the inventor of ImageNet and the ImageNet Challenge, a critical large-scale dataset and benchmarking effort that has contributed to the latest developments in deep learning and AI. In addition to her technical contributions, she is a leading national voice for advocating diversity in STEM and AI. She is co-founder and chairperson of the national non-profit AI4ALL, aimed at increasing inclusion and diversity in AI education.
Prof. Percy Liang / Department of Computer Science at Stanford University

Benchmarks orient AI. They have played a vital role in shaping both the direction and the velocity of the technology's development. Traditionally, benchmarks have focused on particular tasks (e.g., object recognition or question answering). But with the rise of foundation models such as GPT-4, the scope of benchmarking has vastly expanded given their general-purpose nature. In this talk, we describe some of our efforts to benchmark foundation models, including the Holistic Evaluation of Language Models (HELM), interactive and multimodal extensions, and the evaluation of generative search engines such as Bing Chat. Benchmarking shines a spotlight on the capabilities and limitations of foundation models, serving as a faithful guide for researchers, application developers, and policymakers.
Speaker Bio: Percy Liang is an Associate Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011), director of the Center for Research on Foundation Models, and a co-founder of Together AI. His research spans many topics in machine learning and natural language processing, including robustness, interpretability, semantics, and reasoning. He is also a strong proponent of reproducibility through the creation of CodaLab Worksheets. His awards include the Presidential Early Career Award for Scientists and Engineers (2019), the IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), a Microsoft Research Faculty Fellowship (2014), and multiple paper awards at ACL, EMNLP, ICML, and COLT.
Prof. Christopher Ré / Department of Computer Science at Stanford University

This talk will first describe how I fell in love with foundation models. The short version is that foundation models radically improved data systems that I had been trying to build for almost a decade. Then I'll talk about our work on making foundation models more efficient, e.g., FlashAttention. I'll then describe work on some new, asymptotically more efficient architectures, so-called subquadratic models, including S4, Hyena, and Monarch Mixer. I'll also try to describe our best understanding of the current limits of the exciting approaches in this area. Two themes of the talk will be understanding the role of inductive bias in AI models and understanding how robust or narrow our recipe for amazing AI is.
Speaker Bio: Christopher (Chris) Ré is an associate professor in the Department of Computer Science at Stanford University. He is in the Stanford AI Lab and is affiliated with the Machine Learning Group and the Center for Research on Foundation Models. His recent work is to understand how software and hardware systems will change because of machine learning, along with a continuing, petulant drive to work on math problems. Research from his group has been incorporated into scientific and humanitarian efforts, such as the fight against human trafficking, along with products from technology companies including Apple, Google, YouTube, and more. He has also cofounded companies, including Snorkel, SambaNova, and Together, and a venture firm called Factory. His family still brags that he received the MacArthur Foundation Fellowship, but his closest friends are confident that it was a mistake. His research contributions have spanned database theory, database systems, and machine learning, and his work has won best-paper awards at a premier venue in each area, respectively, at PODS 2012, SIGMOD 2014, and ICML 2016. Due to great collaborators, he received the NeurIPS 2020 Test of Time Award and the PODS 2022 Test of Time Award. Due to great students, he received best paper at MIDL 2022, best paper runner-up at ICLR 2022 and ICML 2022, and best student paper runner-up at UAI 2022.
Prof. Niloufar Salehi / School of Information at UC Berkeley

How can users trust an AI system that fails in unpredictable ways? Machine learning models, while powerful, can produce unpredictable results. This uncertainty becomes even more pronounced in areas where verification is challenging, such as machine translation or probabilistic genotyping. Providing users with guidance on when to rely on a system is challenging because models can create a wide range of outputs (e.g., text), error boundaries are highly stochastic, and automated explanations themselves may be incorrect. In this talk, I will discuss approaches to improving the reliability of ML-based systems by designing actionable strategies for a user to gauge reliability and recover from potential errors. At a higher level, I will share perspectives from the field of HCI on designing reliable AI systems by centering user needs and context of use.
Speaker Bio: Niloufar Salehi is an assistant professor in the School of Information at UC Berkeley and a faculty member of Berkeley AI Research (BAIR). She studies human-computer interaction, with research spanning education, healthcare, and restorative justice. Her research interests are in social computing, human-centered AI, and, more broadly, human-computer interaction (HCI). Her work has been published and received awards in premier venues, including ACM CHI and CSCW, and has been covered in VentureBeat, Wired, and The Guardian. She is a W. T. Grant Foundation scholar. She received her PhD in computer science from Stanford University in 2018.