Empowering your Embedded AI with 22FDX+

By Anand Rangarajan
Director, End Markets, GlobalFoundries

I’ll admit it: Enterprise AI is an edge-of-your-seat thriller that has rightfully garnered everybody’s attention — including mine. However, here at GlobalFoundries (GF), we’ve been watching a silent AI revolution in the embedded space that is reshaping end markets from the edge to the endpoint.

Just like enterprise AI, embedded AI is sweeping horizontally across applications as developers look for ways to add AI meaningfully. They keep low latency and data/asset protection, embedded AI’s strongest selling points, as anchors while scaling AI model complexity to fit within the power, performance and memory envelope of the application use case.

GF’s Ultra Low Power CMOS product families are great building blocks for designing your next-generation embedded AI applications.

At the edge, embedded systems need high-performance processing to run more complex AI models. Our 12LP+ FinFET platform is ideal for these applications, providing best-in-class power, performance and area (PPA) to pack more compute into a smaller chip with enhanced efficiency.

As applications move away from the edge and operate closer to, or at, the endpoint, there is a greater need for SoCs (systems-on-chip) that maximize performance without sacrificing power efficiency. This is where our customers have been successfully using 22FDX®, our fully depleted silicon-on-insulator (FD-SOI) process technology that delivers FinFET-class performance and energy efficiency in a planar technology.

From smart home security to wearable fitness and medical devices, our 22FDX platform has been successfully utilized in a wide range of always-on, battery-operated devices that rely on responsive, reliable wireless connectivity and extreme power efficiency. Today, GF continues to push the envelope on performance and efficiency with the 22FDX+ platform, our latest generation of FD-SOI process technology in volume production, purpose-built for today’s demanding applications.

Here’s what sets the 22FDX+ platform apart for embedded AI applications:

  • Adaptive Body Biasing (ABB), a feature developed through a successful collaboration with our ecosystem partners Synopsys, Racyics and Dolphin Semiconductor, dynamically adjusts transistor threshold voltage to keep the application within its power envelope or to incrementally add compute power as needed. ABB supports a lower nominal voltage (Vnom) of 0.4V, reducing total power (mW/MHz) by 30% compared to the current Vnom of 0.5V.
  • A new logic-based bitcell memory supports a voltage range from 0.4V to 0.9V on a single rail, offering up to 30% power savings and a 1.8-2x performance improvement over foundry bitcell memories, which are typically limited to 0.65V on a single rail and need a dual-rail solution to reach lower voltages. Enabling performance improvement alongside power efficiency, such as more inferences per second at the same watt, takes 22FDX+ devices to the next level in vision applications*.
  • 22FDX+ offers ultra-low-leakage (ULL) SRAM with retention leakage down to 0.35 pA/cell (with source bias), about 5x lower than a 12nm process.
  • An “always on” block designed in 22FDX+ offers up to 50% lower leakage compared to other technologies. Keeping this always-on power low is crucial to extending battery life in battery-powered applications, especially asset tracking.
  • 22FDX+ offers a wide variety of library options (ULP, HP, ULL) supporting operating voltages (VDD) from 0.4V to 0.9V, and a variety of memory IPs.
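
For intuition, the voltage and leakage figures above can be sanity-checked with a textbook first-order model in which dynamic power scales with V² at a fixed frequency. The sketch below is illustrative only, not GF characterization data, and the 1 MB retention array is a hypothetical example:

```python
# Back-of-envelope sanity checks for the figures above.
# Assumes the standard first-order dynamic-power model P_dyn ∝ C * V^2 * f;
# real silicon savings depend on process, libraries and workload.

def dynamic_power_ratio(v_new: float, v_old: float) -> float:
    """Dynamic power ratio at the same frequency, P ∝ V^2."""
    return (v_new / v_old) ** 2

def sram_retention_current_ua(cells: int, leak_pa_per_cell: float) -> float:
    """Total SRAM retention current in microamps."""
    return cells * leak_pa_per_cell * 1e-6  # pA -> uA

# Dropping Vnom from 0.5V to 0.4V cuts first-order dynamic power by ~36%,
# the same ballpark as the ~30% total-power savings quoted above.
print(f"dynamic power at 0.4V vs 0.5V: {dynamic_power_ratio(0.4, 0.5):.2f}x")

# A hypothetical 1 MB (8 Mib) retention array at 0.35 pA/cell stays
# in the low-microamp range, which is what makes always-on designs viable.
cells = 8 * 1024 * 1024
print(f"retention current for 1 MB SRAM: {sram_retention_current_ua(cells, 0.35):.2f} uA")
```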

To find out more about how 22FDX+ and our Ultra Low Power CMOS process technologies can support your next-generation embedded AI devices, contact us anytime through gf.com.

*For logic-based memory cells, GF is working on achieving excellent power performance without area impact.