Event Summary
This event is in-person only, and no walk-ins are permitted.
Registration closed.
Supercomputers—systems designed to achieve extremely high computational performance—have been built and used to solve the most challenging problems for six decades, from the earliest Cray supercomputer to today's "El Capitan" (the world's current top-ranked supercomputer) at Lawrence Livermore National Laboratory. Cloud computing has increased the scale of computing and storage accessible around the globe, supporting massive throughput of applications and workloads for consumer and enterprise needs, and cloud datacenters are increasingly adopting high performance computing technologies to address research and industry needs for large-scale simulation, massive data analytics, and AI model training. AI factories are bringing these approaches together even more tightly, enabling the development of foundational and frontier AI models used by hundreds of millions of people globally.
Join us to learn about the differences, commonalities, and convergence of the technologies and approaches, the current and future applications and challenges, and other considerations of ultrafast computing at massive scale. Our expert panel will discuss the recent history, current state and trends, and expected evolution of massive computing scale and performance; how it will influence science and society; and what we need to do to achieve the awesome benefits while addressing the challenges of huge infrastructure requirements and the potential negative uses of the output of these systems. This is sure to be an informative, inspiring, and thought-provoking presentation and conversation with computing technology leaders. It will be of value to anyone whose career or company depends—directly or indirectly—on the availability, scale, and performance of massive computing, and of interest to anyone who cares about how the applications we use every day—including AI—are enabled.
This event is sponsored by Google Cloud and held at Google Austin.
Attendance Instructions
In-person
Check-in begins at 5:30 pm. Please allow ample time to navigate traffic and find parking. You must bring an ID. All guests must be registered. Guests will be checked in via their Eventbrite tickets in the Google lobby (500 W. 2nd Street). This event will be at capacity. Unregistered guests and walk-ins will not be admitted. Registration does NOT guarantee attendance.
Parking Options
Google 500 W Upper Garage: paid garage parking; Entrance on San Antonio St.
Paid Hourly Street Parking
Austin City Hall: paid garage parking; Entrance is on the Guadalupe St. side.
Austin Public Library: paid garage parking; Entrance on West Avenue.
Moderators
Bill Magro, Director and Chief Technologist, High Performance Computing, Google
William (Bill) Magro is Chief Technologist for High Performance Computing at Google, where he drives HPC strategy and customer success for Google Cloud. Magro joined Google in 2020, after 20 years at Intel, where he was Intel Fellow and Chief Technologist for HPC. There, he served as a key strategist and driver for Intel’s HPC business, with a focus on software, solutions, and emerging technologies and trends, including HPC/Cloud and Exascale Computing.
A recognized leader in the InfiniBand industry, Magro helped found the OpenFabrics Alliance and served as InfiniBand Trade Association Technical Working Group co-chair from 2007-2020. Magro has been a prominent voice in the HPC community for over two decades and regularly participates and presents in HPC conferences, advisory boards, and panels.
He joined Intel in 2000 with the acquisition of Kuck & Associates Inc. (KAI). Prior to KAI, Magro spent 3 years as a post-doctoral fellow and staff member at the Cornell Theory Center at Cornell University. He holds a bachelor's degree in applied and engineering physics from Cornell University and a master's degree and Ph.D. in physics from the University of Illinois at Urbana-Champaign.
Joseph George, Director of AI and Supercomputing, AMD
Joseph George is the Director of AI and Supercomputing at AMD, focused on helping customers succeed with AI at scale to solve some of the world's biggest scientific challenges, with an emphasis on implementing cloud, HPC, and AI solutions that address real-world needs. He has a strong background in product management, strategy, and alliances, with a drive to bring new technologies to market. His teams have delivered solutions that power top-ranked supercomputers, accelerate enterprise AI adoption, and create enduring competitive advantage for his customers.
Onur Celebioglu, Fellow, Dell Technologies
Onur Celebioglu is a Fellow, responsible for AI and HPC architecture and pathfinding in Dell's ISG CTO group. Prior to that, he was responsible for the design, development, and integration of Dell's HPC, AI, and Data Analytics solutions and led the Dell AI and HPC Innovation Lab. Onur was part of the team that officially started the AI solutions engineering program at Dell and has over 25 years of industry experience in AI and HPC. His main areas of expertise are application performance analysis, high-speed interconnects, parallel file systems, and cluster provisioning tools. Onur has worked extensively in AI and HPC product development and has published technical articles in various publications, conferences, and journals. Onur has a BS in Electrical Engineering from METU, Turkey, and an MS in Electrical and Computer Engineering from Carnegie Mellon University.