Cloud TPU and Cloud TPU Pod: AI supercomputing for large scale machine learning
Cloud Tensor Processing Unit (TPU) is an LSI designed by Google for neural network processing. TPU features a domain-specific architecture built specifically to accelerate TensorFlow training and prediction workloads, providing significant performance benefits for machine learning in production. Cloud TPU Pod is a large-scale computing cluster that consolidates up to 1024 Cloud TPU cores, interconnected with Google’s high-speed interconnect. In this session, we will learn the technical details of Cloud TPU and Cloud TPU Pod, as well as new TensorFlow features that enable large-scale model parallelism for deep learning training.
Kaz Sato is a Staff Developer Advocate at Google Cloud, focusing on machine learning and AI products such as TensorFlow, Cloud AI, and BigQuery. He has been invited to speak at major events including Google Cloud Next, Google I/O, and NVIDIA GTC. He has also authored many GCP blog posts and has supported developer communities for Google Cloud for over nine years. He is interested in hardware and IoT as well, and has been hosting FPGA meetups since 2013.
If you wish to modify any of this information or update your photo, please contact the Publicity Chair at the following address: