Pervasive connectivity and the breakneck speed of data growth have led to significant memory and power bottlenecks. The combined demands of ever-greater speed, low-power yet capable processing at the edge, and AI going mainstream are fueling new levels of semiconductor innovation, beyond Moore’s law alone, to address these challenges.
75% of the world’s population (6 billion consumers) will interact with data daily by 2025.1
Large data centers require 100+ megawatts of power, enough to power ~80,000 U.S. households.2
1 Data Age 2025, November 2018, refreshed May 2020, seagate.com/files/www-content/our-story/trends/files/dataage-idc-report-07-2020.pdf
2 US DOE, 2020.
Explosive growth in the AI silicon market is fueled by ballooning data sets, 4G/5G connectivity and the need for more powerful semiconductor chips to handle the associated real-time analytics requirements.
GlobalFoundries® (GF®) high-performance and ultra-low power AI accelerator solutions are optimized for training (creating computer models) and inferencing (deploying the models), both in the cloud and at the edge. Built on proven silicon platforms complemented by robust ecosystems, they are designed to help chip designers reduce development time and solution providers get to market faster. By using AI-optimized architectures and features, these solutions can help solve the power/memory bottleneck in audio, video and image processing, smart edge devices and even autonomous vehicle applications.
The AI silicon market
will hit $21 billion in 2024.1
In 2025, 25% of the world’s data will require real-time processing.2
1 ABI Research, Artificial Intelligence and Machine Learning – 2Q 2020 (MD-AIML-105).
2 Data Age 2025, November 2018, refreshed May 2020, seagate.com/files/www-content/our-story/trends/files/dataage-idc-report-07-2020.pdf
AI accelerator solutions from GF
Proven and robust offering with outstanding performance and area for cloud and edge AI inference
A >20% increase in performance or a >40% decrease in power, plus a 10% improvement in logic area scaling over the base 12LP platform, for cloud and edge AI inference
Power-performance with a high level of integration, plus ultra-low power (1 pA/cell) with 0.5 V logic operation for edge AI inference
AI accelerators for the cloud using 12LP and 12LP+ FinFET
GlobalFoundries® (GF®) 12LP and 12LP+ AI accelerator solutions can help solve memory and power bottlenecks while speeding up AI applications such as high-end training and model inferencing in the cloud. The two FinFET-based solutions offer 1 GHz+ performance, with purpose-specific AI innovations providing significant power efficiency and area advantages. 12LP+ builds upon GF’s established 14LPP/12LP solutions, of which GF has shipped more than one million wafers.
These AI-specific solutions, complemented by GF AI design reference packages and design technology co-optimization (DTCO) services, enable cost-efficient, streamlined design and faster time to market.
AI-optimized performance & power, without moving to a smaller node.
Best-in-class IP and comprehensive third-party design and packaging ecosystem.
12LP and 12LP+ deliver a superior combination of AI performance, power and area benefits, and they offer the same global routing capability as 7 nm solutions, so chip designers can avoid migrating to smaller, much costlier geometries.
Clients are already leveraging GF’s 12LP solution for dramatic power and performance benefits. 12LP+ builds on those advantages with optimized MAC designs, a 0.5 V Vmin SRAM bitcell for 2X lower power at 1 GHz and a dual-work function FET that enables >20% faster logic performance or >40% lower power.
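The 2X power figure is consistent with the standard dynamic-power relation for CMOS logic, P ∝ C·V²·f: at a fixed 1 GHz clock, power scales with the square of the supply voltage. A minimal sketch of that arithmetic, assuming a hypothetical 0.7 V baseline supply (the baseline voltage is an illustrative assumption, not a figure from GF):

```python
# Dynamic CMOS power follows P = a * C * V^2 * f. Holding the 1 GHz clock,
# switching activity and capacitance fixed, power scales as V^2.
# The 0.7 V baseline below is an illustrative assumption, not a GF figure.
v_baseline = 0.7  # volts, assumed baseline SRAM Vmin
v_12lp_plus = 0.5  # volts, the 0.5 V Vmin SRAM bitcell cited for 12LP+

# Relative dynamic power of the lower-voltage bitcell vs. the baseline
power_ratio = (v_12lp_plus / v_baseline) ** 2
print(f"Relative dynamic power: {power_ratio:.2f}")  # ~0.51, i.e. ~2X lower
```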
12LP/12LP+ offer Tier 1 supplier I/O interfaces, while best-in-class IP and a rich third-party partner design ecosystem enable cost-efficient designs and quick-turn prototyping for lower NRE and faster time to production. A 2.5D interposer is available for clients using high bandwidth memory (HBM2/2e).
AI accelerators for the edge using 12LP/12LP+ and 22FDX™
A primary driver of growth in the AI silicon market is the edge taking on compute: processing data locally and delivering filtered data to the cloud.
GlobalFoundries® (GF®) 12LP/12LP+ FinFET and 22FDX™ FD-SOI edge AI accelerator solutions are optimized to reduce latency and shorten actionable response times, while enhancing security and data privacy by managing data at the edge. These purpose-built solutions combine a spectrum of power, performance and area advantages that enable chip designers to choose the best fit for their discrete or embedded AI SoCs.
22FDX is up to 1000x more power-efficient than current industry edge AI accelerator offerings.*
12LP/12LP+ offer AI-optimized performance with same global routing capability as 7 nm, so you can design smarter, not smaller.
GF 12LP/12LP+ and 22FDX solutions are optimized to deliver the performance horsepower you need to handle the demands of AI inferencing at the edge, instead of in the data center.
Leverage the low dynamic power and best-in-class leakage power from 22FDX, excellent thermal performance from 12LP/12LP+ and a low-voltage SRAM available with 12LP+ to minimize power consumption in AC-wired or battery-powered devices.
Take advantage of a combination of AI-tuned features, including the AI reference package available with 12LP/12LP+ and the eMRAM AI storage core available in Automotive Grade 1-qualified 22FDX, to stand out from your competition.
AI edge accelerators using 22FDX™ FD-SOI and 12LP+/12LP FinFET
*Assumes typical power consumption of an edge device is ten to hundreds of watts. 22FDX can achieve a power consumption of 20 milliwatts.
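The footnote's "up to 1000x" figure follows from simple arithmetic; a minimal sketch, assuming a 20 W edge device (within the stated ten-to-hundreds-of-watts range) against the 20 mW figure cited for 22FDX:

```python
# Back-of-envelope check of the "up to 1000x" power-efficiency claim.
# The 20 W edge-device figure is an assumption chosen from within the
# footnote's stated "ten to hundreds of watts" range.
typical_edge_device_w = 20.0  # watts (assumed conventional edge device)
fdx22_w = 20e-3               # 20 milliwatts, per the footnote

ratio = typical_edge_device_w / fdx22_w
print(f"Power-efficiency advantage: {ratio:.0f}x")  # 1000x
```

At the upper end of the stated range (hundreds of watts), the same ratio grows well beyond 1000x, which is why the claim is hedged as "up to."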