
Cerebras’ CS-2 brain-scale chip can power AI models with 120 trillion parameters


Published 2021-08-25 16:45:32




Cerebras Systems said its CS-2 Wafer Scale Engine 2 processor is a “brain-scale” chip that can power AI models with more than 120 trillion parameters. That is why Cerebras believes its latest processor, which is built on an entire wafer rather than individual chips, is going to be so powerful, founder and CEO Andrew Feldman said in an interview with VentureBeat.

“The industry is moving past trillion-parameter models, and we are extending that boundary by two orders of magnitude, enabling brain-scale neural networks with 120 trillion parameters,” Feldman said. The Cerebras CS-2 is powered by the Wafer Scale Engine (WSE-2), the largest chip ever made and the fastest AI processor to date. The WSE-2 also has 123 times more cores and 1,000 times more high-performance on-chip memory than its graphics processing unit competitors. As noted, the largest AI hardware clusters to date have been on the order of 1% of a human brain’s scale, or about 1 trillion synapse equivalents (parameters).

But Feldman said a single CS-2 accelerator, the size of a dorm room refrigerator, can support models of over 120 trillion parameters. On top of that, he said Cerebras’ new technology portfolio contains four innovations: Cerebras Weight Streaming, a new software execution architecture; Cerebras MemoryX, a memory extension technology; Cerebras SwarmX, a high-performance interconnect fabric technology; and Selectable Sparsity, a dynamic sparsity harvesting technology.

The Cerebras Weight Streaming technology can store model parameters off-chip while delivering the same training and inference performance as if they were on-chip. This new execution model disaggregates compute and parameter storage, allowing researchers to scale model size and training speed independently, and eliminates the latency and memory bandwidth issues that challenge large clusters of small processors. Cerebras MemoryX will provide the second-generation Wafer Scale Engine (WSE-2) with up to 2.4 petabytes of high-performance memory.
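To make the weight-streaming idea concrete, here is a minimal Python sketch (not Cerebras’ actual API; the `ParameterStore` class and `stream` method are hypothetical names). It shows the core concept the article describes: parameters live in an external store, and each layer’s weights are fetched on demand, so only one layer’s weights need be resident during compute.

```python
from typing import Dict, List

class ParameterStore:
    """Stands in for an off-chip memory service (the MemoryX role):
    it holds all layer weights outside the accelerator."""
    def __init__(self, weights: Dict[str, List[float]]):
        self.weights = weights

    def stream(self, layer: str) -> List[float]:
        # In a real system this would be a high-bandwidth transfer;
        # here it is just a dictionary lookup.
        return self.weights[layer]

def forward(x: List[float], store: ParameterStore,
            layer_order: List[str]) -> List[float]:
    """Run a toy element-wise pipeline, streaming one layer's
    weights at a time instead of keeping the whole model resident."""
    for name in layer_order:
        w = store.stream(name)                  # weights arrive off-chip
        x = [xi * wi for xi, wi in zip(x, w)]   # placeholder compute step
    return x

store = ParameterStore({"l1": [2.0, 2.0], "l2": [0.5, 1.0]})
print(forward([1.0, 3.0], store, ["l1", "l2"]))  # [1.0, 6.0]
```

The point of the design, as the article notes, is that model capacity is bounded by the external store rather than by on-chip memory, while the compute loop itself is unchanged.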

