AI

AI accelerators for the cloud using 12LP and 12LP+ FinFET
AI accelerators for the edge using 12LP/12LP+ and 22FDX™

6 Billion
75% of the world’s population — 6 billion consumers — will interact with data daily by 2025.1

100 MW
Large data centers require 100+ megawatts of power — enough to power ~80,000 U.S. households.2

1 Data Age 2025, November 2018, refreshed May 2020, seagate.com/files/www-content/our-story/trends/files/dataage-idc-report-07-2020.pdf
2 US DOE, 2020.

Explosive growth in the AI silicon market is fueled by ballooning data sets, 4G/5G connectivity and the need for more powerful semiconductor chips to handle the associated real-time analytics requirements. GF high-performance and ultra-low-power AI accelerator solutions are optimized for training and inferencing models both in the cloud and at the edge. Built on proven silicon platforms complemented by robust ecosystems, they are designed to help chip designers reduce development time and solution providers get to market faster. By using AI-optimized architectures and features, these solutions can help solve the power/memory bottleneck in audio, video and image processing, smart edge devices and even autonomous vehicle applications.

$21 Billion
The AI silicon market will hit $21 billion in 2024.1

25%
In 2025, 25% of the world’s data will require real-time processing.2

1 ABI Research, Artificial Intelligence and Machine Learning – 2Q 2020 (MD-AIML-105).
2 Data Age 2025, November 2018, refreshed May 2020, seagate.com/files/www-content/our-story/trends/files/dataage-idc-report-07-2020.pdf

AI accelerators for the cloud using 12LP and 12LP+ FinFET
GlobalFoundries® (GF®) 12LP and 12LP+ AI accelerator solutions can help solve memory and power bottlenecks while speeding up AI applications such as high-end training and model inferencing in the cloud.
The two FinFET-based solutions offer 1 GHz+ performance, with purpose-specific AI innovations providing significant power-efficiency and area advantages. 12LP+ builds upon GF’s established 14LPP/12LP solutions, of which GF has shipped more than one million wafers. These AI-specific solutions, complemented by GF AI design reference packages and design technology co-optimization (DTCO) services, enable cost-efficient, streamlined design and faster time to market.

AI-optimized performance and power, without moving to a smaller node.
Best-in-class IP and a comprehensive third-party design and packaging ecosystem.

Design smarter, not smaller
12LP and 12LP+ deliver a superior combination of AI performance, power and area benefits and offer the same global routing capability as 7 nm solutions, so chip designers can avoid the need to migrate to a smaller node.

Maximize performance, minimize power consumption
Clients are already leveraging GF’s 12LP solution for dramatic power and performance benefits. 12LP+ builds on those advantages with optimized MAC designs, a 0.5 V Vmin SRAM bitcell for 2X lower power at 1 GHz, and a dual-work-function FET that enables >20% faster logic performance or >40% lower power.

Differentiate and accelerate time to market
12LP/12LP+ offer Tier 1 supplier I/O interfaces, while best-in-class IP and a rich third-party partner design ecosystem enable cost-efficient designs and quick-turn prototyping for lower NRE and faster time to production. A 2.5D interposer is available for clients using high bandwidth memory (HBM2/2e).
Featured Resources
AI cloud accelerators using 12LP and 12LP+

AI accelerators for the edge using 12LP/12LP+ and 22FDX™
A primary driver in the growth of the AI silicon market is the edge taking on compute and delivering local processing and filtered data to the cloud. GlobalFoundries® (GF®) 12LP/12LP+ FinFET and 22FDX™ FD-SOI edge AI accelerator solutions are optimized to reduce latency and actionable response times while enabling enhanced security and data privacy by managing data at the edge. The purpose-built solutions combine a spectrum of power, performance and area advantages that enable chip designers to choose the best fit for their discrete or embedded AI SoCs.

22FDX is up to 1000x more power-efficient than current industry edge AI accelerator offerings.*
12LP/12LP+ offer AI-optimized performance with the same global routing capability as 7 nm, so you can design smarter, not smaller.

*Assumes typical power consumption of an edge device is ten to hundreds of watts. 22FDX can achieve 20 milliwatts power consumption.

Accelerate AI at the edge
GF 12LP/12LP+ and 22FDX solutions are optimized to deliver the performance horsepower you need to handle the demands of AI inferencing at the edge, instead of in the data center.

Solve the power challenge
Leverage the low dynamic power and best-in-class leakage power of 22FDX, the excellent thermal performance of 12LP/12LP+, and a low-voltage SRAM available with 12LP+ to minimize power consumption in AC-wired or battery-powered devices.

Differentiate with confidence
Take advantage of a combination of AI-tuned features, including the AI reference package available with 12LP/12LP+ and the eMRAM AI storage core available in Automotive Grade 1-qualified 22FDX, to stand out from your competition.

Featured Resources
AI edge accelerators using 22FDX™ FD-SOI and 12LP+/12LP FinFET
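The "up to 1000x" figure above follows directly from the footnote's assumptions, and can be sanity-checked with simple arithmetic. The sketch below uses the footnoted 20 mW figure for 22FDX; the baseline wattages chosen (10 W, 20 W, 100 W) are illustrative points within the footnote's "ten to hundreds of watts" range, not figures from the source.

```python
# Sanity check of the "up to 1000x" power-efficiency claim, assuming the
# footnote's figures: a typical edge AI device draws tens to hundreds of
# watts, while a 22FDX-based accelerator can run at ~20 milliwatts.
EDGE_22FDX_W = 0.020  # 20 mW, from the footnote

def efficiency_gain(baseline_watts: float) -> float:
    """Ratio of a baseline device's power draw to the 22FDX figure."""
    return baseline_watts / EDGE_22FDX_W

# Illustrative baselines within the footnote's stated range.
for baseline in (10, 20, 100):
    print(f"{baseline} W baseline -> {efficiency_gain(baseline):.0f}x gain")
```

A 20 W baseline gives exactly the 1000x headline ratio; lower baselines in the range give smaller gains (10 W gives 500x), so "up to 1000x" reads as a conservative summary of the footnote's assumptions.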