Cerebras CS-2 accelerates artificial intelligence work while radically reducing power consumption
SUNNYVALE, Calif.–(BUSINESS WIRE)–Cerebras Systems, the pioneer in high performance artificial intelligence (AI) computing, and AbbVie, a global biopharmaceutical company, today announced a landmark achievement in AbbVie's AI work. Using a Cerebras CS-2 on biomedical natural language processing (NLP) models, AbbVie achieved performance in excess of 128 times that of a graphics processing unit (GPU), while using one-third the energy. Not only did AbbVie train the models more quickly and with less energy; thanks to the CS-2's simple, standards-based programming workflow, the time usually allocated to model set-up and tuning was also dramatically reduced.
"A common challenge we experience with programming and training BERT LARGE models is providing sufficient GPU cluster resources for sufficient periods of time," said Brian Martin, Head of AI at AbbVie. "The CS-2 system will provide wall-clock improvements that alleviate much of this challenge, while providing a simpler programming model that accelerates our delivery by enabling our teams to iterate more quickly and test more ideas."
With a focus on cutting-edge R&D across immunology, neuroscience, oncology, and virology, it's essential for AbbVie's scientists to keep abreast of research findings from around the world. To that end, AbbVie employs large, sophisticated AI language models to build its machine translation service, Abbelfish. This service accurately translates and makes searchable vast libraries of biomedical literature across 180 languages using large, state-of-the-art Transformer models such as BERT, BERT LARGE, and BioBERT.
Ensuring Abbelfish is both accurate and always up to date requires training and re-training the NLP models from scratch with domain-specific biomedical data. However, the Abbelfish model is very large, at 6 billion parameters, and is impractical to train on even the largest GPU clusters. Cerebras Systems makes this type of large-scale AI training fast and easy.
Large language models like BERT LARGE have demonstrated state-of-the-art accuracy on many language processing and understanding tasks. Training these large language models using GPUs is challenging and time-consuming. Training from scratch on new datasets often takes weeks, even on large clusters of legacy equipment. As the size of the cluster grows, power, cost, and complexity grow exponentially. Programming clusters of graphics processing units requires rare skills, different machine learning frameworks, and specialized tools that consume weeks of engineering time for each iteration.
The CS-2 was built to directly address these challenges and radically reduce the time to insight. The CS-2 delivers the deep learning performance of hundreds of GPUs, with the programming ease of a single node. As a result, less time is spent on set-up and configuration, less time is spent training, and more ideas are explored. The AbbVie team was able to set up and train their custom BERT LARGE model from scratch in less than two days with the Cerebras CS-2.
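To illustrate the kind of standards-based training workflow referenced above, the following is a minimal, hypothetical sketch of domain-adaptive pretraining for a BERT-style model using the open-source Hugging Face Transformers library in Python. The checkpoint name, dataset file, and hyperparameters are illustrative assumptions and do not reflect AbbVie's or Cerebras' actual configuration.

# Hypothetical sketch: masked-language-model pretraining of a BERT-style model
# on biomedical text with the open-source Hugging Face Transformers library.
# The checkpoint, dataset file, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

checkpoint = "bert-large-uncased"  # stand-in for a BERT LARGE variant
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# Any plain-text corpus works here; this file name is a placeholder.
corpus = load_dataset("text", data_files={"train": "biomedical_abstracts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

# Masked-language-modeling objective, as used to pretrain BERT-family models.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="bert-large-biomed",
    per_device_train_batch_size=8,
    num_train_epochs=1,
    learning_rate=1e-4,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()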
"At Cerebras Systems, our goal is to enable AI that accelerates our customers' mission," said Andrew Feldman, CEO and co-founder of Cerebras Systems. "It's not enough to provide customers with the fastest AI in the market; it also must be the most energy efficient and the easiest to deploy. It's incredible to see AbbVie not only accelerating their massive language models, but doing so while consuming a fraction of the energy used by legacy solutions."
The Cerebras CS-2 is powered by the largest processor ever built, the Cerebras Wafer-Scale Engine 2 (WSE-2), which is 56 times larger than its nearest competitor. As a result, the CS-2 delivers more AI-optimized compute cores, more fast memory, and more fabric bandwidth than any other deep learning processor in existence. It was purpose-built to accelerate deep learning workloads, reducing the time to answer by orders of magnitude.
With customers and partners in North America, Asia, Europe and the Middle East, Cerebras is delivering industry-leading AI solutions to a growing roster of customers in the enterprise, government, and high performance computing segments, including GlaxoSmithKline, AstraZeneca, TotalEnergies, nference, Argonne National Laboratory, Lawrence Livermore National Laboratory, Pittsburgh Supercomputing Center, Edinburgh Parallel Computing Centre (EPCC), and Tokyo Electron Devices.
For more information about the Cerebras CS-2 system and its application in health and pharma, please visit https://cerebras.net/industries/health-and-pharma/.
About Cerebras Systems
Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to build a new class of computer system, designed for the singular purpose of accelerating AI and changing the future of AI work forever. Our flagship product, the CS-2 system, is powered by the world's largest processor, the 850,000-core Cerebras WSE-2, and enables customers to accelerate their deep learning work by orders of magnitude over graphics processing units.
Contacts
Press contact (for media only)
Kim Ziesemer
Email: pr@zmcommunications.com