Facebook’s open-sourcing of AI hardware is the start of the deep-learning revolution
A few days ago, Facebook open-sourced its artificial intelligence (AI) hardware design. Most people don’t realise that large companies such as Facebook, Google, and Amazon don’t buy hardware from the usual large computer suppliers like Dell, HP, and IBM, but instead design their own hardware based on commodity components. The Facebook website and all its myriad apps and subsystems run on a cloud infrastructure built from tens of thousands of computers designed from scratch by Facebook’s own hardware engineers.
Open-sourcing Facebook’s AI hardware signals that deep learning has graduated from the Facebook Artificial Intelligence Research (FAIR) lab into Facebook’s mainstream production systems, where it will run apps created by its product development teams. For Facebook’s software developers to build deep-learning systems for users, the company needed to design, competitively procure, and deploy a standard hardware module, optimised for fast deep-learning execution, that fits into and scales with Facebook’s data centres. The module, called Big Sur, looks like any rack-mounted commodity hardware unit found in any large cloud data centre.