New GPU Acceleration for PyTorch on M1 Macs + Using with BERT
GPU acceleration on Mac is finally here!

Today's deep learning models owe a great deal of their exponential performance gains to ever-increasing model sizes. Those larger models require more computation to train and run.

These models are simply too big to run on CPU hardware, which performs computations sequentially, step by step. Instead, they need massively parallel computation. That leaves us with either GPU or TPU hardware. Our home PCs aren't coming with TPUs anytime soon, so we're left with the GPU option.

GPUs use a highly parallel structure, originally designed to process images for graphics-heavy workloads. They became essential components in gaming for rendering real-time 3D graphics, and that same parallelism is what makes them essential for the scale of today's models.

Using CPUs makes many of these models too slow to be useful, which can make deep learning on M1 machines rather disappointing. Fortunately, this is changing now that PyTorch supports GPU acceleration on M1 machines. In this video, we will explain the new integration and how to use it with BERT.
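As a quick taste of what the new backend looks like in practice, here is a minimal sketch that selects PyTorch's MPS (Metal Performance Shaders) device when it is available and falls back to CPU otherwise; the tensor shapes are placeholders chosen for the example.

```python
import torch

# Use the Metal Performance Shaders (MPS) backend on Apple silicon
# when available (requires PyTorch 1.12+ on macOS 12.3+); otherwise
# fall back to the CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
print(f"Using device: {device}")

# Moving tensors onto the MPS device works the same way as with CUDA.
x = torch.randn(64, 768, device=device)
w = torch.randn(768, 768, device=device)
y = x @ w  # this matrix multiply runs on the M1 GPU when MPS is active
print(y.shape)
```

The same pattern applies to larger models such as BERT: load the model (for example via Hugging Face transformers) and call `model.to(device)` before running inference or training, just as you would with a CUDA device.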